id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1801.03921 | Hanrong Chen | Hanrong Chen, Arup K. Chakraborty, Mehran Kardar | How nonuniform contact profiles of T cell receptors modulate thymic
selection outcomes | 10 pages, 4 figures, submitted to Phys. Rev. E | Phys. Rev. E 97, 032413 (2018) | 10.1103/PhysRevE.97.032413 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | T cell receptors (TCRs) bind foreign or self-peptides attached to major
histocompatibility complex (MHC) molecules, and the strength of this
interaction determines T cell activation. Optimizing the ability of T cells to
recognize a diversity of foreign peptides yet be tolerant of self-peptides is
crucial for the adaptive immune system to properly function. This is achieved
by selection of T cells in the thymus, where immature T cells expressing
unique, stochastically generated TCRs interact with a large number of
self-peptide-MHC; if a TCR does not bind strongly enough to any
self-peptide-MHC, or too strongly with at least one self-peptide-MHC, the T
cell dies. Past theoretical work cast thymic selection as an extreme value
problem, and characterized the statistical enrichment or depletion of amino
acids in the post-selection TCR repertoire, showing how T cells are selected to
be able to specifically recognize peptides derived from diverse pathogens, yet
have limited self-reactivity. Here, we investigate how the degree of enrichment
is modified by nonuniform contacts that a TCR makes with peptide-MHC.
Specifically, we were motivated by recent experiments showing that amino acids
at certain positions of a TCR sequence have large effects on thymic selection
outcomes, and crystal structure data that reveal a nonuniform contact profile
between a TCR and its peptide-MHC ligand. Using a representative TCR contact
profile as an illustration, we show via simulations that the degree of
enrichment now varies by position according to the contact profile, and,
importantly, it depends on the implementation of nonuniform contacts during
thymic selection. We explain these nontrivial results analytically. Our study
has implications for understanding the selection forces that shape the
functionality of the post-selection TCR repertoire.
| [
{
"created": "Thu, 11 Jan 2018 18:49:23 GMT",
"version": "v1"
}
] | 2018-03-23 | [
[
"Chen",
"Hanrong",
""
],
[
"Chakraborty",
"Arup K.",
""
],
[
"Kardar",
"Mehran",
""
]
] | T cell receptors (TCRs) bind foreign or self-peptides attached to major histocompatibility complex (MHC) molecules, and the strength of this interaction determines T cell activation. Optimizing the ability of T cells to recognize a diversity of foreign peptides yet be tolerant of self-peptides is crucial for the adaptive immune system to properly function. This is achieved by selection of T cells in the thymus, where immature T cells expressing unique, stochastically generated TCRs interact with a large number of self-peptide-MHC; if a TCR does not bind strongly enough to any self-peptide-MHC, or too strongly with at least one self-peptide-MHC, the T cell dies. Past theoretical work cast thymic selection as an extreme value problem, and characterized the statistical enrichment or depletion of amino acids in the post-selection TCR repertoire, showing how T cells are selected to be able to specifically recognize peptides derived from diverse pathogens, yet have limited self-reactivity. Here, we investigate how the degree of enrichment is modified by nonuniform contacts that a TCR makes with peptide-MHC. Specifically, we were motivated by recent experiments showing that amino acids at certain positions of a TCR sequence have large effects on thymic selection outcomes, and crystal structure data that reveal a nonuniform contact profile between a TCR and its peptide-MHC ligand. Using a representative TCR contact profile as an illustration, we show via simulations that the degree of enrichment now varies by position according to the contact profile, and, importantly, it depends on the implementation of nonuniform contacts during thymic selection. We explain these nontrivial results analytically. Our study has implications for understanding the selection forces that shape the functionality of the post-selection TCR repertoire. |
2011.10447 | Vincent Huin | Vincent Huin (JPArc), Vincent Deramecourt, Dominique
Caparros-Lefebvre, Claude-Alain Maurage, Charles Duyckaerts, Eniko Kovari,
Florence Pasquier, Val\'erie Bu\'ee-Scherrer, Julien Labreuche, H\'el\`ene
Behal, Luc Bu\'ee, Claire-Marie Dhaenens, Bernard Sablonni\`ere | The MAPT gene is differentially methylated in the progressive
supranuclear palsy brain | null | Movement Disorders, Wiley, 2016, 31 (12), pp.1883-1890 | 10.1002/mds.26820 | null | q-bio.GN q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Progressive supranuclear palsy (PSP) is a rare neurodegenerative
disease causing parkinsonian symptoms. Altered DNA methylation of the
microtubule-associated protein tau gene correlates with the expression changes
in Alzheimer's disease and Parkinson's disease brains. However, few studies
examine the sequences beyond the constitutive promoter. Objectives: Because
activating different microtubule-associated protein tau gene control regions
via methylation might regulate the differential tau expression constituting the
specific signatures of individual tauopathies, we compared methylation of a
candidate promoter, intron 0. Methods: We assessed DNA methylation in the brains
of patients with different tauopathies (35 Alzheimer's disease, 10 corticobasal
degeneration, and 18 PSP) and 19 controls by intron 0 pyrosequencing. We also
evaluated methylation in an independent cohort of 11 PSP cases and 12 controls.
Frontal (affected by tau pathology) and occipital (unaffected) cortices were
analyzed. Results: In the initial samples, one CpG island site in intron 0
(CpG1) showed significant hypomethylation in PSP-affected frontal cortices when
compared with controls (p = 0.022). Such hypomethylation was observed in
replicate samples, but not in occipital cortices or other tauopathies. PSP and
control samples (combining the initial and replicate samples) remained
significantly different after adjustment for potential confounding factors
(age, H1/H1 diplotype; p = 0.0005). PSP-affected tissues exhibited
microtubule-associated protein tau RNA hyperexpression when compared with
controls (p = 0.004), although no correlation with CpG1 methylation was
observed. Conclusions: This exploratory study suggests that regions other than
the constitutive promoter may be involved in microtubule-associated protein tau
gene regulation in tauopathies and that intron 0 hypomethylation may be a
specific epigenetic signature of PSP. These preliminary findings require
confirmation.
| [
{
"created": "Fri, 20 Nov 2020 15:22:09 GMT",
"version": "v1"
}
] | 2020-11-23 | [
[
"Huin",
"Vincent",
"",
"JPArc"
],
[
"Deramecourt",
"Vincent",
""
],
[
"Caparros-Lefebvre",
"Dominique",
""
],
[
"Maurage",
"Claude-Alain",
""
],
[
"Duyckaerts",
"Charles",
""
],
[
"Kovari",
"Eniko",
""
],
[
"Pa... | Background: Progressive supranuclear palsy (PSP) is a rare neurodegenerative disease causing parkinsonian symptoms. Altered DNA methylation of the microtubule-associated protein tau gene correlates with the expression changes in Alzheimer's disease and Parkinson's disease brains. However, few studies examine the sequences beyond the constitutive promoter. Objectives: Because activating different microtubule-associated protein tau gene control regions via methylation might regulate the differential tau expression constituting the specific signatures of individual tauopathies, we compared methylation of a candidate promoter, intron 0. Methods: We assessed DNA methylation in the brains of patients with different tauopathies (35 Alzheimer's disease, 10 corticobasal degeneration, and 18 PSP) and 19 controls by intron 0 pyrosequencing. We also evaluated methylation in an independent cohort of 11 PSP cases and 12 controls. Frontal (affected by tau pathology) and occipital (unaffected) cortices were analyzed. Results: In the initial samples, one CpG island site in intron 0 (CpG1) showed significant hypomethylation in PSP-affected frontal cortices when compared with controls (p = 0.022). Such hypomethylation was observed in replicate samples, but not in occipital cortices or other tauopathies. PSP and control samples (combining the initial and replicate samples) remained significantly different after adjustment for potential confounding factors (age, H1/H1 diplotype; p = 0.0005). PSP-affected tissues exhibited microtubule-associated protein tau RNA hyperexpression when compared with controls (p = 0.004), although no correlation with CpG1 methylation was observed. Conclusions: This exploratory study suggests that regions other than the constitutive promoter may be involved in microtubule-associated protein tau gene regulation in tauopathies and that intron 0 hypomethylation may be a specific epigenetic signature of PSP. These preliminary findings require confirmation. |
1912.07154 | Amir Toor | Abdullah A. Toor and Amir A. Toor MD | On the Order of Gene Distribution on Chromosomes Across the Animal
Kingdom | 13 pages, 3 tables and 7 figures | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background. The large-scale pattern of distribution of genes on the
chromosomes in the known animal genomes is not well characterized. We
hypothesized that individual genes will be distributed on chromosomes in a
mathematically ordered manner across the animal kingdom. Results. Twenty-one
animal genomes reported in the NCBI database were examined. Numerically, there
was a trend towards increasing overall gene content with increasing size of the
genome as reflected by the chromosomal complement. Gene frequency on individual
chromosomes in each animal genome was analyzed and demonstrated uniformity of
proportions within each animal with respect to both average gene frequency on
individual chromosomes and gene distribution across the unique genomes.
Further, average gene distribution across animal species followed a
relationship whereby it was, approximately, inversely proportional to the
square root of the number of chromosomes in the unique animal genomes,
consistent with the notion that there is an ordered increase in gene dispersion
as the complexity of the genome increased. To further corroborate these
findings, a derived measure, termed gene spacing on chromosomes, was correlated
with gene frequency and gene distribution. Conclusion. As animal species have
evolved, the distribution of their genes on individual chromosomes and within
their genomes, when viewed on a large scale is not random, but follows a
mathematically ordered process, such that as the complexity of the organism
increases, the genes become less densely distributed on the chromosomes and
more dispersed across the genome.
| [
{
"created": "Mon, 16 Dec 2019 01:42:05 GMT",
"version": "v1"
}
] | 2019-12-17 | [
[
"Toor",
"Abdullah A.",
""
],
[
"MD",
"Amir A. Toor",
""
]
] | Background. The large-scale pattern of distribution of genes on the chromosomes in the known animal genomes is not well characterized. We hypothesized that individual genes will be distributed on chromosomes in a mathematically ordered manner across the animal kingdom. Results. Twenty-one animal genomes reported in the NCBI database were examined. Numerically, there was a trend towards increasing overall gene content with increasing size of the genome as reflected by the chromosomal complement. Gene frequency on individual chromosomes in each animal genome was analyzed and demonstrated uniformity of proportions within each animal with respect to both average gene frequency on individual chromosomes and gene distribution across the unique genomes. Further, average gene distribution across animal species followed a relationship whereby it was, approximately, inversely proportional to the square root of the number of chromosomes in the unique animal genomes, consistent with the notion that there is an ordered increase in gene dispersion as the complexity of the genome increased. To further corroborate these findings, a derived measure, termed gene spacing on chromosomes, was correlated with gene frequency and gene distribution. Conclusion. As animal species have evolved, the distribution of their genes on individual chromosomes and within their genomes, when viewed on a large scale is not random, but follows a mathematically ordered process, such that as the complexity of the organism increases, the genes become less densely distributed on the chromosomes and more dispersed across the genome. |
1703.08915 | Lucas Valdez D. | L.D. Valdez, G.J. Sibona, L.A. Diaz, M.S. Contigiani, C.A. Condat | Effects of rainfall on Culex mosquito population dynamics | In press in Journal of Theoretical Biology | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamics of a mosquito population depends heavily on climatic variables
such as temperature and precipitation. Since climate change models predict that
global warming will impact on the frequency and intensity of rainfall, it is
important to understand how these variables affect the mosquito populations. We
present a model of the dynamics of a {\it Culex quinquefasciatus} mosquito
population that incorporates the effect of rainfall and use it to study the
influence of the number of rainy days and the mean monthly precipitation on the
maximum yearly abundance of mosquitoes $M_{max}$. Additionally, using a
fracturing process, we investigate the influence of the variability in daily
rainfall on $M_{max}$. We find that, given a constant value of monthly
precipitation, there is an optimum number of rainy days for which $M_{max}$ is
a maximum. On the other hand, we show that increasing daily rainfall
variability reduces the dependence of $M_{max}$ on the number of rainy days,
leading also to a higher abundance of mosquitoes for the case of low mean
monthly precipitation. Finally, we explore the effect of rainfall in the
months preceding the wettest season, and we find that a regime with high
precipitation throughout the year and a higher variability tends to advance
slightly the time at which the peak mosquito abundance occurs, but could
significantly change the total mosquito abundance in a year.
| [
{
"created": "Mon, 27 Mar 2017 03:42:18 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2017 13:07:35 GMT",
"version": "v2"
}
] | 2017-03-29 | [
[
"Valdez",
"L. D.",
""
],
[
"Sibona",
"G. J.",
""
],
[
"Diaz",
"L. A.",
""
],
[
"Contigiani",
"M. S.",
""
],
[
"Condat",
"C. A.",
""
]
] | The dynamics of a mosquito population depends heavily on climatic variables such as temperature and precipitation. Since climate change models predict that global warming will impact on the frequency and intensity of rainfall, it is important to understand how these variables affect the mosquito populations. We present a model of the dynamics of a {\it Culex quinquefasciatus} mosquito population that incorporates the effect of rainfall and use it to study the influence of the number of rainy days and the mean monthly precipitation on the maximum yearly abundance of mosquitoes $M_{max}$. Additionally, using a fracturing process, we investigate the influence of the variability in daily rainfall on $M_{max}$. We find that, given a constant value of monthly precipitation, there is an optimum number of rainy days for which $M_{max}$ is a maximum. On the other hand, we show that increasing daily rainfall variability reduces the dependence of $M_{max}$ on the number of rainy days, leading also to a higher abundance of mosquitoes for the case of low mean monthly precipitation. Finally, we explore the effect of rainfall in the months preceding the wettest season, and we find that a regime with high precipitation throughout the year and a higher variability tends to advance slightly the time at which the peak mosquito abundance occurs, but could significantly change the total mosquito abundance in a year. |
2211.12591 | Andrey Chetverikov | Andrey Chetverikov, \'Arni Kristj\'ansson | Probabilistic representations as building blocks for higher-level vision | Accepted for publication in Neurons, Behavior, Data Analysis and
Theory (NBDT) | Neurons, Behavior, Data analysis, and Theory 1 (2022), 1-32 | 10.51628/001c.24910 | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Current theories of perception suggest that the brain represents features of
the world as probability distributions, but can such uncertain foundations
provide the basis for everyday vision? Perceiving objects and scenes requires
knowing not just how features (e.g., colors) are distributed but also where
they are and which other features they are combined with. Using a Bayesian
computational model, we recovered probabilistic representations used by human
observers to search for odd stimuli among distractors. Importantly, we found
that the brain integrates information between feature dimensions and spatial
locations, leading to more precise representations compared to when information
integration is not possible. We also uncovered representational asymmetries and
biases, showing their spatial organization and explaining how this structure
argues against "summary statistics" accounts of visual representations. Our
results confirm that probabilistically encoded visual features are bound with
other features and to particular locations, providing a powerful demonstration
of how probabilistic representations can be a foundation for higher-level
vision.
| [
{
"created": "Tue, 22 Nov 2022 21:26:05 GMT",
"version": "v1"
}
] | 2022-11-30 | [
[
"Chetverikov",
"Andrey",
""
],
[
"Kristjánsson",
"Árni",
""
]
] | Current theories of perception suggest that the brain represents features of the world as probability distributions, but can such uncertain foundations provide the basis for everyday vision? Perceiving objects and scenes requires knowing not just how features (e.g., colors) are distributed but also where they are and which other features they are combined with. Using a Bayesian computational model, we recovered probabilistic representations used by human observers to search for odd stimuli among distractors. Importantly, we found that the brain integrates information between feature dimensions and spatial locations, leading to more precise representations compared to when information integration is not possible. We also uncovered representational asymmetries and biases, showing their spatial organization and explaining how this structure argues against "summary statistics" accounts of visual representations. Our results confirm that probabilistically encoded visual features are bound with other features and to particular locations, providing a powerful demonstration of how probabilistic representations can be a foundation for higher-level vision. |
2108.08214 | Mariana da Silva | Mariana Da Silva, Carole H. Sudre, Kara Garcia, Cher Bass, M. Jorge
Cardoso, and Emma C. Robinson | Distinguishing Healthy Ageing from Dementia: a Biomechanical Simulation
of Brain Atrophy using Deep Networks | MLCN 2021 | null | null | null | q-bio.NC cs.LG eess.IV q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Biomechanical modeling of tissue deformation can be used to simulate
different scenarios of longitudinal brain evolution. In this work, we present a
deep learning framework for hyper-elastic strain modelling of brain atrophy,
during healthy ageing and in Alzheimer's Disease. The framework directly models
the effects of age, disease status, and scan interval to regress regional
patterns of atrophy, from which a strain-based model estimates deformations.
This model is trained and validated using 3D structural magnetic resonance
imaging data from the ADNI cohort. Results show that the framework can estimate
realistic deformations, following the known course of Alzheimer's disease, that
clearly differentiate between healthy and demented patterns of ageing. This
suggests the framework has potential to be incorporated into explainable models
of disease, for the exploration of interventions and counterfactual examples.
| [
{
"created": "Wed, 18 Aug 2021 15:58:53 GMT",
"version": "v1"
}
] | 2021-08-19 | [
[
"Da Silva",
"Mariana",
""
],
[
"Sudre",
"Carole H.",
""
],
[
"Garcia",
"Kara",
""
],
[
"Bass",
"Cher",
""
],
[
"Cardoso",
"M. Jorge",
""
],
[
"Robinson",
"Emma C.",
""
]
] | Biomechanical modeling of tissue deformation can be used to simulate different scenarios of longitudinal brain evolution. In this work, we present a deep learning framework for hyper-elastic strain modelling of brain atrophy, during healthy ageing and in Alzheimer's Disease. The framework directly models the effects of age, disease status, and scan interval to regress regional patterns of atrophy, from which a strain-based model estimates deformations. This model is trained and validated using 3D structural magnetic resonance imaging data from the ADNI cohort. Results show that the framework can estimate realistic deformations, following the known course of Alzheimer's disease, that clearly differentiate between healthy and demented patterns of ageing. This suggests the framework has potential to be incorporated into explainable models of disease, for the exploration of interventions and counterfactual examples. |
1010.0413 | Michael Deem | Michael W. Deem and Pooya Hejazi | Theoretical Aspects of Immunity | 49 pages, 12 figures, to appear in Annual Reviews | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The immune system recognizes a myriad of invading pathogens and their toxic
products. It does so with a finite repertoire of antibodies and T cell
receptors. We here describe theories that quantify the immune system dynamics.
We describe how the immune system recognizes antigens by searching the large
space of receptor molecules. We consider in some detail the theories that
quantify the immune response to influenza and dengue fever. We review
theoretical descriptions of the complementary evolution of pathogens that
occurs in response to immune system pressure. Methods including bioinformatics,
molecular simulation, random energy models, and quantum field theory contribute
to a theoretical understanding of aspects of immunity.
| [
{
"created": "Sun, 3 Oct 2010 15:12:36 GMT",
"version": "v1"
}
] | 2010-10-05 | [
[
"Deem",
"Michael W.",
""
],
[
"Hejazi",
"Pooya",
""
]
] | The immune system recognizes a myriad of invading pathogens and their toxic products. It does so with a finite repertoire of antibodies and T cell receptors. We here describe theories that quantify the immune system dynamics. We describe how the immune system recognizes antigens by searching the large space of receptor molecules. We consider in some detail the theories that quantify the immune response to influenza and dengue fever. We review theoretical descriptions of the complementary evolution of pathogens that occurs in response to immune system pressure. Methods including bioinformatics, molecular simulation, random energy models, and quantum field theory contribute to a theoretical understanding of aspects of immunity. |
1110.3933 | Sergey Murik | Sergey E. Murik | About the Neuronal Mechanism of Lateral Hypothalamic Self-Stimulation
Response | 17 pages, 4 figures; Journal article in Russian | Bulletin of Irkutsk State University. A Biology and Ecology
series. 2010, vol.,3, No.2, pp.65-74 | null | null | q-bio.NC q-bio.CB | http://creativecommons.org/licenses/by/3.0/ | The experimental part of this study has shown that hunger motivation may be
evoked by a long-term (10-180 s) continuous electrical stimulation of the
"hunger center" at a current of 133.6{\pm}8.1 {\mu}A. Positive emotions were
caused by electrostimulation at the same current intensity but short-term
duration (0.3-0.5 s). A positive feeling elicited by electrostimulation of the
motivation center can be explained in terms of the adaptation (polarization)
theory of motivation and emotion (Murik, 2001, 2005).
| [
{
"created": "Tue, 18 Oct 2011 10:47:38 GMT",
"version": "v1"
}
] | 2011-10-19 | [
[
"Murik",
"Sergey E.",
""
]
] | The experimental part of this study has shown that hunger motivation may be evoked by a long-term (10-180 s) continuous electrical stimulation of the "hunger center" at a current of 133.6{\pm}8.1 {\mu}A. Positive emotions were caused by electrostimulation at the same current intensity but short-term duration (0.3-0.5 s). A positive feeling elicited by electrostimulation of the motivation center can be explained in terms of the adaptation (polarization) theory of motivation and emotion (Murik, 2001, 2005). |
1903.05067 | Miguel Ramos-Pascual | Miguel Ramos Pascual | HIV-1 virus cycle replication: a review of RNA polymerase II
transcription, alternative splicing and protein synthesis | Review paper not published, uploaded in Arxiv.org for collaborations,
comments and peer-reviewings | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | HIV virus replication is a time-related process that includes several stages.
Focusing on the core steps, RNA polymerase II transcribes in an early stage a
pre-mRNA encoding regulator proteins (i.e., nef, tat, rev, vif, vpr, vpu), which is
completely spliced by the spliceosome complex (0.9kb and 1.8kb) and exported to
the ribosome for protein synthesis. These splicing and export processes are
regulated by tat protein, which binds to the trans-activation response (TAR)
element, and by rev protein, which binds to the Rev-responsive Element (RRE).
As long as these regulators are synthesized, splicing is progressively
inhibited (from 4.0kb to 9.0kb) and mRNAs are translated into structural and
enzymatic proteins (env, gag-pol). During this RNAPII scanning and splicing,
around 40 different multi-cistronic mRNAs have been produced. Long-read
sequencing has been applied to the HIV-1 virus genome (type HXB2CG) with the
HIV.pro software, a Fortran 90 code for simulating the virus replication cycle,
especially RNAPII transcription, exon/intron splicing and ribosome protein
synthesis, including the frameshift at the gag/pol gene and the ribosome pause at
the env gene. All HIV-1 virus proteins have been identified, as well as other ORFs.
As observed, the tat/rev protein regulators have different lengths depending on the
splicing cleavage site: tat protein varies from 224aa to a final state of 72aa,
whereas rev protein from 25aa to 27aa, with a maximum of 119aa. Furthermore,
several ORFs coding for small polypeptides sPEP (less than 10 amino acids) and
for other unidentified proteins have been localised with unknown functionality.
The detailed analysis of the HIV virus replication and the virus proteomics are
important for identifying which antigens are presented by macrophages to CD4
cells, for localizing reactive epitopes or for creating transfer vectors to
develop new HIV vaccines and effective therapies.
| [
{
"created": "Tue, 12 Mar 2019 17:20:18 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Apr 2020 12:35:52 GMT",
"version": "v2"
}
] | 2020-04-03 | [
[
"Pascual",
"Miguel Ramos",
""
]
] | HIV virus replication is a time-related process that includes several stages. Focusing on the core steps, RNA polymerase II transcribes in an early stage a pre-mRNA encoding regulator proteins (i.e., nef, tat, rev, vif, vpr, vpu), which is completely spliced by the spliceosome complex (0.9kb and 1.8kb) and exported to the ribosome for protein synthesis. These splicing and export processes are regulated by tat protein, which binds to the trans-activation response (TAR) element, and by rev protein, which binds to the Rev-responsive Element (RRE). As long as these regulators are synthesized, splicing is progressively inhibited (from 4.0kb to 9.0kb) and mRNAs are translated into structural and enzymatic proteins (env, gag-pol). During this RNAPII scanning and splicing, around 40 different multi-cistronic mRNAs have been produced. Long-read sequencing has been applied to the HIV-1 virus genome (type HXB2CG) with the HIV.pro software, a Fortran 90 code for simulating the virus replication cycle, especially RNAPII transcription, exon/intron splicing and ribosome protein synthesis, including the frameshift at the gag/pol gene and the ribosome pause at the env gene. All HIV-1 virus proteins have been identified, as well as other ORFs. As observed, the tat/rev protein regulators have different lengths depending on the splicing cleavage site: tat protein varies from 224aa to a final state of 72aa, whereas rev protein from 25aa to 27aa, with a maximum of 119aa. Furthermore, several ORFs coding for small polypeptides sPEP (less than 10 amino acids) and for other unidentified proteins have been localised with unknown functionality. The detailed analysis of the HIV virus replication and the virus proteomics are important for identifying which antigens are presented by macrophages to CD4 cells, for localizing reactive epitopes or for creating transfer vectors to develop new HIV vaccines and effective therapies. |
1405.5935 | Julio Augusto Freyre-Gonz\'alez | Julio A. Freyre-Gonz\'alez, Luis G. Trevi\~no-Quintanilla, Ilse A.
Valtierra-Guti\'errez, Rosa Mar\'ia Guti\'errez-R\'ios, Jos\'e A.
Alonso-Pav\'on | Prokaryotic regulatory systems biology: Common principles governing the
functional architectures of Bacillus subtilis and Escherichia coli unveiled
by the natural decomposition approach | 22 pages, 5 figures, 3 tables | Journal of Biotechnology 161(3):278-286 (2012) | 10.1016/j.jbiotec.2012.03.028 | null | q-bio.MN q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Escherichia coli and Bacillus subtilis are two of the best-studied
prokaryotic model organisms. Previous analyses of their transcriptional
regulatory networks have shown that they exhibit high plasticity during
evolution and suggested that both converge to scale-free-like structures.
Nevertheless, beyond this suggestion, no analyses have been carried out to
identify the common systems-level components and principles governing these
organisms. Here we show that these two phylogenetically distant organisms
follow a set of common novel biologically consistent systems principles
revealed by the mathematically and biologically founded natural decomposition
approach. The discovered common functional architecture is a diamond-shaped,
matryoshka-like, three-layer (coordination, processing, and integration)
hierarchy exhibiting feedback, which is shaped by four systems-level
components: global transcription factors (global TFs), locally autonomous
modules, basal machinery and intermodular genes. The first mathematical
criterion to identify global TFs, the $\kappa$-value, was reassessed on B.
subtilis and confirmed its high predictive power by identifying all the
previously reported, plus three potential, master regulators and eight sigma
factors. The functional conserved cores of modules, basal cell machinery, and a
set of non-orthologous common physiological global responses were identified
via both orthologous genes and non-orthologous conserved functions. This study
reveals novel common systems principles maintained between two phylogenetically
distant organisms and provides a comparison of their lifestyle adaptations. Our
results shed new light on the systems-level principles and the fundamental
functions required by bacteria to sustain life.
| [
{
"created": "Fri, 23 May 2014 00:08:06 GMT",
"version": "v1"
}
] | 2014-05-26 | [
[
"Freyre-González",
"Julio A.",
""
],
[
"Treviño-Quintanilla",
"Luis G.",
""
],
[
"Valtierra-Gutiérrez",
"Ilse A.",
""
],
[
"Gutiérrez-Ríos",
"Rosa María",
""
],
[
"Alonso-Pavón",
"José A.",
""
]
] | Escherichia coli and Bacillus subtilis are two of the best-studied prokaryotic model organisms. Previous analyses of their transcriptional regulatory networks have shown that they exhibit high plasticity during evolution and suggested that both converge to scale-free-like structures. Nevertheless, beyond this suggestion, no analyses have been carried out to identify the common systems-level components and principles governing these organisms. Here we show that these two phylogenetically distant organisms follow a set of common novel biologically consistent systems principles revealed by the mathematically and biologically founded natural decomposition approach. The discovered common functional architecture is a diamond-shaped, matryoshka-like, three-layer (coordination, processing, and integration) hierarchy exhibiting feedback, which is shaped by four systems-level components: global transcription factors (global TFs), locally autonomous modules, basal machinery and intermodular genes. The first mathematical criterion to identify global TFs, the $\kappa$-value, was reassessed on B. subtilis and confirmed its high predictive power by identifying all the previously reported, plus three potential, master regulators and eight sigma factors. The functional conserved cores of modules, basal cell machinery, and a set of non-orthologous common physiological global responses were identified via both orthologous genes and non-orthologous conserved functions. This study reveals novel common systems principles maintained between two phylogenetically distant organisms and provides a comparison of their lifestyle adaptations. Our results shed new light on the systems-level principles and the fundamental functions required by bacteria to sustain life. |
1405.0233 | John Barton | J. P. Barton and S. Cocco and E. De Leonardis and R. Monasson | Large Pseudo-Counts and $L_2$-Norm Penalties Are Necessary for the
Mean-Field Inference of Ising and Potts Models | 25 pages, 17 figures | Phys Rev E 90 (2014) 012132 | 10.1103/PhysRevE.90.012132 | null | q-bio.QM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mean field (MF) approximation offers a simple, fast way to infer direct
interactions between elements in a network of correlated variables, a common,
computationally challenging problem with practical applications in fields
ranging from physics and biology to the social sciences. However, MF methods
achieve their best performance with strong regularization, well beyond Bayesian
expectations, an empirical fact that is poorly understood. In this work, we
study the influence of pseudo-count and $L_2$-norm regularization schemes on
the quality of inferred Ising or Potts interaction networks from correlation
data within the MF approximation. We argue, based on the analysis of small
systems, that the optimal value of the regularization strength remains finite
even if the sampling noise tends to zero, in order to correct for systematic
biases introduced by the MF approximation. Our claim is corroborated by
extensive numerical studies of diverse model systems and by the analytical
study of the $m$-component spin model, for large but finite $m$. Additionally
we find that pseudo-count regularization is robust against sampling noise, and
often outperforms $L_2$-norm regularization, particularly when the underlying
network of interactions is strongly heterogeneous. Much better performances are
generally obtained for the Ising model than for the Potts model, for which only
couplings incoming onto medium-frequency symbols are reliably inferred.
| [
{
"created": "Thu, 1 May 2014 17:40:44 GMT",
"version": "v1"
}
] | 2014-11-11 | [
[
"Barton",
"J. P.",
""
],
[
"Cocco",
"S.",
""
],
[
"De Leonardis",
"E.",
""
],
[
"Monasson",
"R.",
""
]
] | Mean field (MF) approximation offers a simple, fast way to infer direct interactions between elements in a network of correlated variables, a common, computationally challenging problem with practical applications in fields ranging from physics and biology to the social sciences. However, MF methods achieve their best performance with strong regularization, well beyond Bayesian expectations, an empirical fact that is poorly understood. In this work, we study the influence of pseudo-count and $L_2$-norm regularization schemes on the quality of inferred Ising or Potts interaction networks from correlation data within the MF approximation. We argue, based on the analysis of small systems, that the optimal value of the regularization strength remains finite even if the sampling noise tends to zero, in order to correct for systematic biases introduced by the MF approximation. Our claim is corroborated by extensive numerical studies of diverse model systems and by the analytical study of the $m$-component spin model, for large but finite $m$. Additionally we find that pseudo-count regularization is robust against sampling noise, and often outperforms $L_2$-norm regularization, particularly when the underlying network of interactions is strongly heterogeneous. Much better performances are generally obtained for the Ising model than for the Potts model, for which only couplings incoming onto medium-frequency symbols are reliably inferred. |
1808.00756 | Lee Susman | Lee Susman, Naama Brenner and Omri Barak | Stable memory with unstable synapses | In review with Nature Communications. 30 pages, including appendix | null | 10.1038/s41467-019-12306-2 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | What is the physiological basis of long-term memory? The prevailing view in
neuroscience attributes changes in synaptic efficacy to memory acquisition.
This view implies that stable memories correspond to stable connectivity
patterns. However, an increasing body of experimental evidence points to
significant, activity-independent dynamics in synaptic strengths. Motivated by
these observations, we explore the possibility of memory storage within a
global component of network connectivity, while individual connections
fluctuate. We find a simple and general principle, stemming from stability
arguments, that links eigenvalues in the complex plane to memories.
Specifically, imaginary-coded memories are more resilient to noise and
homeostatic plasticity than their real-coded counterparts. Memory
representations are stored as time-varying attractors in neural state-space and
support associative retrieval of learned information. Our results suggest a
link between the properties of learning rules and those of network-level memory
representations, and point at measurable signatures to be sought in
experimental data.
| [
{
"created": "Thu, 2 Aug 2018 11:08:56 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Aug 2018 08:48:43 GMT",
"version": "v2"
},
{
"created": "Sun, 24 Feb 2019 15:26:16 GMT",
"version": "v3"
}
] | 2019-10-09 | [
[
"Susman",
"Lee",
""
],
[
"Brenner",
"Naama",
""
],
[
"Barak",
"Omri",
""
]
] | What is the physiological basis of long-term memory? The prevailing view in neuroscience attributes changes in synaptic efficacy to memory acquisition. This view implies that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent dynamics in synaptic strengths. Motivated by these observations, we explore the possibility of memory storage within a global component of network connectivity, while individual connections fluctuate. We find a simple and general principle, stemming from stability arguments, that links eigenvalues in the complex plane to memories. Specifically, imaginary-coded memories are more resilient to noise and homeostatic plasticity than their real-coded counterparts. Memory representations are stored as time-varying attractors in neural state-space and support associative retrieval of learned information. Our results suggest a link between the properties of learning rules and those of network-level memory representations, and point at measurable signatures to be sought in experimental data. |
1509.01972 | Sebastian Bitzer | Sebastian Bitzer and Stefan J. Kiebel | The Brain Uses Reliability of Stimulus Information when Making
Perceptual Decisions | submission to NIPS 2015 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In simple perceptual decisions the brain has to identify a stimulus based on
noisy sensory samples from the stimulus. Basic statistical considerations state
that the reliability of the stimulus information, i.e., the amount of noise in
the samples, should be taken into account when the decision is made. However,
for perceptual decision making experiments it has been questioned whether the
brain indeed uses the reliability for making decisions when confronted with
unpredictable changes in stimulus reliability. We here show that even the basic
drift diffusion model, which has frequently been used to explain experimental
findings in perceptual decision making, implicitly relies on estimates of
stimulus reliability. We then show that only those variants of the drift
diffusion model which allow stimulus-specific reliabilities are consistent with
neurophysiological findings. Our analysis suggests that the brain estimates the
reliability of the stimulus on a short time scale of at most a few hundred
milliseconds.
| [
{
"created": "Mon, 7 Sep 2015 10:21:42 GMT",
"version": "v1"
}
] | 2015-09-08 | [
[
"Bitzer",
"Sebastian",
""
],
[
"Kiebel",
"Stefan J.",
""
]
] | In simple perceptual decisions the brain has to identify a stimulus based on noisy sensory samples from the stimulus. Basic statistical considerations state that the reliability of the stimulus information, i.e., the amount of noise in the samples, should be taken into account when the decision is made. However, for perceptual decision making experiments it has been questioned whether the brain indeed uses the reliability for making decisions when confronted with unpredictable changes in stimulus reliability. We here show that even the basic drift diffusion model, which has frequently been used to explain experimental findings in perceptual decision making, implicitly relies on estimates of stimulus reliability. We then show that only those variants of the drift diffusion model which allow stimulus-specific reliabilities are consistent with neurophysiological findings. Our analysis suggests that the brain estimates the reliability of the stimulus on a short time scale of at most a few hundred milliseconds. |
2311.08825 | Nadav M. Shnerb | Immanuel Meyer, Ami Taitelbaum, Michael Assaf and Nadav M. Shnerb | Emergence of a Novel Phase in Population and Community Dynamics Due to
Fat-Tailed Environmental Correlations | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Temporal environmental noise (EN) is a prevalent natural phenomenon that
controls population and community dynamics, shaping the destiny of biological
species and genetic types. Conventional theoretical models often depict EN as a
Markovian process with an exponential distribution of correlation times,
resulting in two distinct qualitative dynamical categories: quenched
(pertaining to short demographic timescales) and annealed (pertaining to long
timescales). However, numerous empirical studies demonstrate a fat-tailed decay
of correlation times. Here, we study the consequences of power-law correlated
EN on the dynamics of isolated and competing populations. We reveal the
emergence of a novel intermediate phase that lies between the quenched and
annealed regimes. Within this phase, dynamics are primarily driven by rare, yet
not exceedingly rare, long periods of almost-steady environmental conditions.
For an isolated population, the time to extinction in this phase exhibits a
novel scaling with the abundance, and also a non-monotonic dependence on the
spectral exponent.
| [
{
"created": "Wed, 15 Nov 2023 09:57:39 GMT",
"version": "v1"
}
] | 2023-11-16 | [
[
"Meyer",
"Immanuel",
""
],
[
"Taitelbaum",
"Ami",
""
],
[
"Assaf",
"Michael",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | Temporal environmental noise (EN) is a prevalent natural phenomenon that controls population and community dynamics, shaping the destiny of biological species and genetic types. Conventional theoretical models often depict EN as a Markovian process with an exponential distribution of correlation times, resulting in two distinct qualitative dynamical categories: quenched (pertaining to short demographic timescales) and annealed (pertaining to long timescales). However, numerous empirical studies demonstrate a fat-tailed decay of correlation times. Here, we study the consequences of power-law correlated EN on the dynamics of isolated and competing populations. We reveal the emergence of a novel intermediate phase that lies between the quenched and annealed regimes. Within this phase, dynamics are primarily driven by rare, yet not exceedingly rare, long periods of almost-steady environmental conditions. For an isolated population, the time to extinction in this phase exhibits a novel scaling with the abundance, and also a non-monotonic dependence on the spectral exponent. |
2010.09588 | Neta Maimon | Maxim Bez, Neta B. Maimon, Denis Ddobot, Lior Molcho, Nathan Intrator,
Eli Kakiashvilli, and Amitai Bickel | Continuous monitoring of cognitive load using advanced computerized
analysis of brain signals during virtual simulator training for laparoscopic
surgery, reflects laparoscopic dexterity. A comparative study using a novel
wireless device | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulation-based training is an effective tool for acquiring practical
skills, specifically to train new surgeons in a controlled and hazard-free
environment. It is, however, important to measure participants' cognitive load to
decide whether they are ready to go into a real surgery. In the present study
we measured performance on a surgery simulator of medical students and interns,
while their brain activity was monitored by a mobile EEG device. 38 medical
students underwent 3 experiments performing a task with the Simbionix
simulator, while their brain activity was measured using a single-channel EEG
device (Aurora by Neurosteer). On each experiment, participants performed 3
repeats of a simulator task using laparoscopic hands. The retention between
tasks was different on each experiment, to examine changes in performance and
cognitive load biomarkers that occur during the task or as a result of night
sleep consolidation. The participants' behavioral performance improved with
trial repetition in all 3 experiments. In Exps. 1 & 2, the theta band activity
significantly decreased with better individual performance, as exhibited by
some of the behavioral measurements of the simulator. The novel VC9 biomarker
(previously shown to correlate with cognitive load), exhibited a significant
decrease with better individual performance shown by all behavioral
measurements. In correspondence with previous research, theta decreased with
lower cognitive load and higher performance and the novel biomarker, VC9,
showed higher sensitivity to load changes. Together, these measurements might
be used for neuroimaging assessment of cognitive load while performing simulator
laparoscopic tasks. This could potentially be expanded to evaluate efficacy of
different medical simulations to provide more efficient training to medical
staff and to measure cognitive and mental load in real laparoscopic surgeries.
| [
{
"created": "Mon, 19 Oct 2020 15:11:48 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 17:55:43 GMT",
"version": "v2"
}
] | 2021-04-13 | [
[
"Bez",
"Maxim",
""
],
[
"Maimon",
"Neta B.",
""
],
[
"Ddobot",
"Denis",
""
],
[
"Molcho",
"Lior",
""
],
[
"Intrator",
"Nathan",
""
],
[
"Kakiashvilli",
"Eli",
""
],
[
"Bickel",
"Amitai",
""
]
] | Simulation-based training is an effective tool for acquiring practical skills, specifically to train new surgeons in a controlled and hazard-free environment. It is, however, important to measure participants' cognitive load to decide whether they are ready to go into a real surgery. In the present study we measured performance on a surgery simulator of medical students and interns, while their brain activity was monitored by a mobile EEG device. 38 medical students underwent 3 experiments performing a task with the Simbionix simulator, while their brain activity was measured using a single-channel EEG device (Aurora by Neurosteer). On each experiment, participants performed 3 repeats of a simulator task using laparoscopic hands. The retention between tasks was different on each experiment, to examine changes in performance and cognitive load biomarkers that occur during the task or as a result of night sleep consolidation. The participants' behavioral performance improved with trial repetition in all 3 experiments. In Exps. 1 & 2, the theta band activity significantly decreased with better individual performance, as exhibited by some of the behavioral measurements of the simulator. The novel VC9 biomarker (previously shown to correlate with cognitive load), exhibited a significant decrease with better individual performance shown by all behavioral measurements. In correspondence with previous research, theta decreased with lower cognitive load and higher performance and the novel biomarker, VC9, showed higher sensitivity to load changes. Together, these measurements might be used for neuroimaging assessment of cognitive load while performing simulator laparoscopic tasks. This could potentially be expanded to evaluate efficacy of different medical simulations to provide more efficient training to medical staff and to measure cognitive and mental load in real laparoscopic surgeries.
1809.04450 | Istvan Kiss Z | Istv\'an Z. Kiss, Joel C. Miller, P\'eter L. Simon | Fast variables determine the epidemic threshold in the pairwise model
with an improved closure | 12 pages, 9 figures. arXiv admin note: substantial text overlap with
arXiv:1806.06135 | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairwise models are used widely to model epidemic spread on networks. These
include the modelling of susceptible-infected-removed (SIR) epidemics on
regular networks and extensions to SIS dynamics and contact tracing on more
exotic networks exhibiting degree heterogeneity, directed and/or weighted links
and clustering. However, extra features of the disease dynamics or of the
network lead to an increase in system size and analytical tractability becomes
problematic. Various `closures' can be used to keep the system tractable.
Focusing on SIR epidemics on regular but clustered networks, we show that even
for the most complex closure we can determine the epidemic threshold as an
asymptotic expansion in terms of the clustering coefficient. We do this by
exploiting the presence of a system of fast variables, specified by the
correlation structure of the epidemic, whose steady state determines the
epidemic threshold. While we do not find the steady state analytically, we
create an elegant asymptotic expansion of it. We validate this new threshold by
comparing it to the numerical solution of the full system and find excellent
agreement over a wide range of values of the clustering coefficient,
transmission rate and average degree of the network. The technique carries over
to pairwise models with other closures [1] and we note that the epidemic
threshold will be model dependent. This emphasises the importance of model
choice when dealing with realistic outbreaks.
| [
{
"created": "Tue, 11 Sep 2018 09:25:49 GMT",
"version": "v1"
}
] | 2018-09-24 | [
[
"Kiss",
"István Z.",
""
],
[
"Miller",
"Joel C.",
""
],
[
"Simon",
"Péter L.",
""
]
] | Pairwise models are used widely to model epidemic spread on networks. These include the modelling of susceptible-infected-removed (SIR) epidemics on regular networks and extensions to SIS dynamics and contact tracing on more exotic networks exhibiting degree heterogeneity, directed and/or weighted links and clustering. However, extra features of the disease dynamics or of the network lead to an increase in system size and analytical tractability becomes problematic. Various `closures' can be used to keep the system tractable. Focusing on SIR epidemics on regular but clustered networks, we show that even for the most complex closure we can determine the epidemic threshold as an asymptotic expansion in terms of the clustering coefficient. We do this by exploiting the presence of a system of fast variables, specified by the correlation structure of the epidemic, whose steady state determines the epidemic threshold. While we do not find the steady state analytically, we create an elegant asymptotic expansion of it. We validate this new threshold by comparing it to the numerical solution of the full system and find excellent agreement over a wide range of values of the clustering coefficient, transmission rate and average degree of the network. The technique carries over to pairwise models with other closures [1] and we note that the epidemic threshold will be model dependent. This emphasises the importance of model choice when dealing with realistic outbreaks.
1807.11659 | Ka Yee Yeung | Dimitar Kumanov, Ling-Hong Hung, Wes Lloyd, Ka Yee Yeung | Serverless computing provides on-demand high performance computing for
biomedical research | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud computing offers on-demand, scalable computing and storage, and has
become an essential resource for the analyses of big biomedical data. The usual
approach to cloud computing requires users to reserve and provision virtual
servers. An emerging alternative is to have the provider allocate machine
resources dynamically. This type of serverless computing has tremendous
potential for biomedical research in terms of ease-of-use, instantaneous
scalability and cost effectiveness. In our proof of concept example, we
demonstrate how serverless computing provides low cost access to hundreds of
CPUs, on demand, with little or no setup. In particular, we illustrate that the
all-against-all pairwise comparison among all unique human proteins can be
accomplished in approximately 2 minutes, at a cost of less than $1, using
Amazon Web Services Lambda. This is a 250x speedup compared to running the same
task on a typical laptop computer.
| [
{
"created": "Tue, 31 Jul 2018 04:46:45 GMT",
"version": "v1"
}
] | 2018-08-01 | [
[
"Kumanov",
"Dimitar",
""
],
[
"Hung",
"Ling-Hong",
""
],
[
"Lloyd",
"Wes",
""
],
[
"Yeung",
"Ka Yee",
""
]
] | Cloud computing offers on-demand, scalable computing and storage, and has become an essential resource for the analyses of big biomedical data. The usual approach to cloud computing requires users to reserve and provision virtual servers. An emerging alternative is to have the provider allocate machine resources dynamically. This type of serverless computing has tremendous potential for biomedical research in terms of ease-of-use, instantaneous scalability and cost effectiveness. In our proof of concept example, we demonstrate how serverless computing provides low cost access to hundreds of CPUs, on demand, with little or no setup. In particular, we illustrate that the all-against-all pairwise comparison among all unique human proteins can be accomplished in approximately 2 minutes, at a cost of less than $1, using Amazon Web Services Lambda. This is a 250x speedup compared to running the same task on a typical laptop computer. |
1906.03183 | Amir Karami | Amir Karami, Mehdi Ghasemi, Souvik Sen, Marcos Moraes, Vishal Shah | Exploring Diseases and Syndromes in Neurology Case Reports from 1955 to
2017 with Text Mining | null | null | null | null | q-bio.QM cs.CL cs.IR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: A large number of neurology case reports have been published, but
it is a challenging task for human medical experts to explore all of these
publications. Text mining offers a computational approach to investigate
neurology literature and capture meaningful patterns. The overarching goal of
this study is to provide a new perspective on case reports of neurological
disease and syndrome analysis over the last six decades using text mining.
Methods: We extracted diseases and syndromes (DsSs) from more than 65,000
neurology case reports from 66 journals in PubMed over the last six decades
from 1955 to 2017. Text mining was applied to reports on the detected DsSs to
investigate high-frequency DsSs, categorize them, and explore the linear trends
over the 63-year time frame.
Results: The text mining methods explored high-frequency neurologic DsSs and
their trends and the relationships between them from 1955 to 2017. We detected
more than 18,000 unique DsSs and found 10 categories of neurologic DsSs. While
the trend analysis showed the increasing trends in the case reports for top-10
high-frequency DsSs, the categories had mixed trends.
Conclusion: Our study provided new insights into the application of text
mining methods to investigate DsSs in a large number of medical case reports
that occur over several decades. The proposed approach can be used to provide a
macro level analysis of medical literature by discovering interesting patterns
and tracking them over several years to help physicians explore these case
reports more efficiently.
| [
{
"created": "Thu, 23 May 2019 17:38:06 GMT",
"version": "v1"
}
] | 2019-06-10 | [
[
"Karami",
"Amir",
""
],
[
"Ghasemi",
"Mehdi",
""
],
[
"Sen",
"Souvik",
""
],
[
"Moraes",
"Marcos",
""
],
[
"Shah",
"Vishal",
""
]
] | Background: A large number of neurology case reports have been published, but it is a challenging task for human medical experts to explore all of these publications. Text mining offers a computational approach to investigate neurology literature and capture meaningful patterns. The overarching goal of this study is to provide a new perspective on case reports of neurological disease and syndrome analysis over the last six decades using text mining. Methods: We extracted diseases and syndromes (DsSs) from more than 65,000 neurology case reports from 66 journals in PubMed over the last six decades from 1955 to 2017. Text mining was applied to reports on the detected DsSs to investigate high-frequency DsSs, categorize them, and explore the linear trends over the 63-year time frame. Results: The text mining methods explored high-frequency neurologic DsSs and their trends and the relationships between them from 1955 to 2017. We detected more than 18,000 unique DsSs and found 10 categories of neurologic DsSs. While the trend analysis showed the increasing trends in the case reports for top-10 high-frequency DsSs, the categories had mixed trends. Conclusion: Our study provided new insights into the application of text mining methods to investigate DsSs in a large number of medical case reports that occur over several decades. The proposed approach can be used to provide a macro level analysis of medical literature by discovering interesting patterns and tracking them over several years to help physicians explore these case reports more efficiently. |
2209.07794 | William Duncan Martinson | W. Duncan Martinson, Rebecca McLennan, Jessica M. Teddy, Mary C.
McKinney, Lance A. Davidson, Ruth E. Baker, Helen M. Byrne, Paul M. Kulesa,
Philip K. Maini | Dynamic fibronectin assembly and remodeling by leader neural crest cells
prevents jamming in collective cell migration | 66 pages, 19 figures (of which 14 are supplementary) | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Collective cell migration plays an essential role in vertebrate development,
yet the extent to which dynamically changing microenvironments influence this
phenomenon remains unclear. Observations of the distribution of the
extracellular matrix (ECM) component fibronectin during the migration of
loosely connected neural crest cells (NCCs) lead us to hypothesize that NCC
remodeling of an initially punctate ECM creates a scaffold for trailing cells,
enabling them to form robust and coherent stream patterns. We evaluate this
idea in a theoretical setting by developing an individual-based computational
model that incorporates reciprocal interactions between NCCs and their ECM. ECM
remodeling, haptotaxis, contact guidance, and cell-cell repulsion are
sufficient for cells to establish streams in silico, however additional
mechanisms, such as chemotaxis, are required to consistently guide cells along
the correct target corridor. Further model investigations imply that contact
guidance and differential cell-cell repulsion between leader and follower cells
are key contributors to robust collective cell migration by preventing stream
breakage. Global sensitivity analysis and simulated gain- and loss-of-function
experiments suggest that long-distance migration without jamming is most likely
to occur when leading cells specialize in creating ECM fibers, and trailing
cells specialize in responding to environmental cues by upregulating mechanisms
such as contact guidance.
| [
{
"created": "Fri, 16 Sep 2022 08:50:25 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Apr 2023 11:50:28 GMT",
"version": "v2"
}
] | 2023-04-20 | [
[
"Martinson",
"W. Duncan",
""
],
[
"McLennan",
"Rebecca",
""
],
[
"Teddy",
"Jessica M.",
""
],
[
"McKinney",
"Mary C.",
""
],
[
"Davidson",
"Lance A.",
""
],
[
"Baker",
"Ruth E.",
""
],
[
"Byrne",
"Helen M.",
... | Collective cell migration plays an essential role in vertebrate development, yet the extent to which dynamically changing microenvironments influence this phenomenon remains unclear. Observations of the distribution of the extracellular matrix (ECM) component fibronectin during the migration of loosely connected neural crest cells (NCCs) lead us to hypothesize that NCC remodeling of an initially punctate ECM creates a scaffold for trailing cells, enabling them to form robust and coherent stream patterns. We evaluate this idea in a theoretical setting by developing an individual-based computational model that incorporates reciprocal interactions between NCCs and their ECM. ECM remodeling, haptotaxis, contact guidance, and cell-cell repulsion are sufficient for cells to establish streams in silico, however additional mechanisms, such as chemotaxis, are required to consistently guide cells along the correct target corridor. Further model investigations imply that contact guidance and differential cell-cell repulsion between leader and follower cells are key contributors to robust collective cell migration by preventing stream breakage. Global sensitivity analysis and simulated gain- and loss-of-function experiments suggest that long-distance migration without jamming is most likely to occur when leading cells specialize in creating ECM fibers, and trailing cells specialize in responding to environmental cues by upregulating mechanisms such as contact guidance. |
1001.1681 | Ilya M. Nemenman | Adam A. Margolin, Kai Wang, Andrea Califano, Ilya Nemenman | Multivariate dependence and genetic networks inference | 35 pages, expanded version of q-bio/0406015 | IET Syst Biol 4, 428, 2010 | 10.1049/iet-syb.2010.0009 | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A critical task in systems biology is the identification of genes that
interact to control cellular processes by transcriptional activation of a set
of target genes. Many methods have been developed to use statistical
correlations in high-throughput datasets to infer such interactions. However,
cellular pathways are highly cooperative, often requiring the joint effect of
many molecules, and few methods have been proposed to explicitly identify such
higher-order interactions, partially due to the fact that the notion of
multivariate statistical dependency itself remains imprecisely defined. We
define the concept of dependence among multiple variables using maximum entropy
techniques and introduce computational tests for their identification.
Synthetic network results reveal that this procedure uncovers dependencies even
in undersampled regimes, when the joint probability distribution cannot be
reliably estimated. Analysis of microarray data from human B cells reveals that
third-order statistics, but not second-order ones, uncover relationships
between genes that interact in a pathway to cooperatively regulate a common set
of targets.
| [
{
"created": "Mon, 11 Jan 2010 16:06:30 GMT",
"version": "v1"
}
] | 2010-11-24 | [
[
"Margolin",
"Adam A.",
""
],
[
"Wang",
"Kai",
""
],
[
"Califano",
"Andrea",
""
],
[
"Nemenman",
"Ilya",
""
]
] | A critical task in systems biology is the identification of genes that interact to control cellular processes by transcriptional activation of a set of target genes. Many methods have been developed to use statistical correlations in high-throughput datasets to infer such interactions. However, cellular pathways are highly cooperative, often requiring the joint effect of many molecules, and few methods have been proposed to explicitly identify such higher-order interactions, partially due to the fact that the notion of multivariate statistical dependency itself remains imprecisely defined. We define the concept of dependence among multiple variables using maximum entropy techniques and introduce computational tests for their identification. Synthetic network results reveal that this procedure uncovers dependencies even in undersampled regimes, when the joint probability distribution cannot be reliably estimated. Analysis of microarray data from human B cells reveals that third-order statistics, but not second-order ones, uncover relationships between genes that interact in a pathway to cooperatively regulate a common set of targets. |
2009.10931 | Kang-Lin Hsieh | Kanglin Hsieh, Yinyin Wang, Luyao Chen, Zhongming Zhao, Sean Savitz,
Xiaoqian Jiang, Jing Tang, Yejin Kim | Drug repurposing for COVID-19 using graph neural network and harmonizing
multiple evidence | 13 pages | Sci Rep 11, 23179 (2021) | 10.1038/s41598-021-02353-5 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Amid the pandemic of 2019 novel coronavirus disease (COVID-19) caused by
SARS-CoV-2, a vast amount of drug research for prevention and treatment has
been quickly conducted, but these efforts have been unsuccessful thus far. Our
objective is to prioritize repurposable drugs using a drug repurposing pipeline
that systematically integrates multiple SARS-CoV-2 and drug interactions, deep
graph neural networks, and in-vitro/population-based validations. We first
collected all the available drugs (n= 3,635) involved in COVID-19 patient
treatment through CTDbase. We built a SARS-CoV-2 knowledge graph based on the
interactions among virus baits, host genes, pathways, drugs, and phenotypes. A
deep graph neural network approach was used to derive the candidate
representation based on the biological interactions. We prioritized the
candidate drugs using clinical trial history, and then validated them with
their genetic profiles, in vitro experimental efficacy, and electronic health
records. We highlight the top 22 drugs including Azithromycin, Atorvastatin,
Aspirin, Acetaminophen, and Albuterol. We further pinpointed drug combinations
that may synergistically target COVID-19. In summary, we demonstrated that the
integration of extensive interactions, deep neural networks, and rigorous
validation can facilitate the rapid identification of candidate drugs for
COVID-19 treatment. This is a post-peer-review, pre-copyedit version of an
article published in Scientific Reports. The final authenticated version is
available online at: https://www.nature.com/articles/s41598-021-02353-5
| [
{
"created": "Wed, 23 Sep 2020 04:47:59 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Nov 2021 19:19:35 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Feb 2022 20:33:18 GMT",
"version": "v3"
}
] | 2022-02-03 | [
[
"Hsieh",
"Kanglin",
""
],
[
"Wang",
"Yinyin",
""
],
[
"Chen",
"Luyao",
""
],
[
"Zhao",
"Zhongming",
""
],
[
"Savitz",
"Sean",
""
],
[
"Jiang",
"Xiaoqian",
""
],
[
"Tang",
"Jing",
""
],
[
"Kim",
... | Amid the pandemic of 2019 novel coronavirus disease (COVID-19) caused by SARS-CoV-2, a vast amount of drug research for prevention and treatment has been quickly conducted, but these efforts have been unsuccessful thus far. Our objective is to prioritize repurposable drugs using a drug repurposing pipeline that systematically integrates multiple SARS-CoV-2 and drug interactions, deep graph neural networks, and in-vitro/population-based validations. We first collected all the available drugs (n= 3,635) involved in COVID-19 patient treatment through CTDbase. We built a SARS-CoV-2 knowledge graph based on the interactions among virus baits, host genes, pathways, drugs, and phenotypes. A deep graph neural network approach was used to derive the candidate representation based on the biological interactions. We prioritized the candidate drugs using clinical trial history, and then validated them with their genetic profiles, in vitro experimental efficacy, and electronic health records. We highlight the top 22 drugs including Azithromycin, Atorvastatin, Aspirin, Acetaminophen, and Albuterol. We further pinpointed drug combinations that may synergistically target COVID-19. In summary, we demonstrated that the integration of extensive interactions, deep neural networks, and rigorous validation can facilitate the rapid identification of candidate drugs for COVID-19 treatment. This is a post-peer-review, pre-copyedit version of an article published in Scientific Reports. The final authenticated version is available online at: https://www.nature.com/articles/s41598-021-02353-5
2312.03427 | Judit Aizpuru | Judit Aizpuru, Maxim Borisyak, Peter Neubauer, M. Nicolas Cruz
Bournazou | Latent State Space Extension for interpretable hybrid mechanistic models | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mechanistic growth models play a major role in bioprocess engineering,
design, and control. Their reasonable predictive power and their high level of
interpretability make them an essential tool for computer aided engineering
methods. Additionally, since they contain knowledge about cell physiology, the
parameter estimates provide meaningful insights into the metabolism of the
microorganism under study. However, the assumption of time invariance of the
model parameters is often violated in real experiments, limiting their capacity
to fully explain the observed dynamics. In this work, we propose a framework
for identifying such violations and producing insights into misspecified
mechanisms. The framework achieves this by allowing kinetic and process
parameters to vary in time. We demonstrate the framework's capabilities by
fitting a hybrid model based on a simple mechanistic growth model for E. coli
with data generated in-silico by a much more complex one and identifying
missing kinetics.
| [
{
"created": "Wed, 6 Dec 2023 11:19:24 GMT",
"version": "v1"
}
] | 2023-12-07 | [
[
"Aizpuru",
"Judit",
""
],
[
"Borisyak",
"Maxim",
""
],
[
"Neubauer",
"Peter",
""
],
[
"Bournazou",
"M. Nicolas Cruz",
""
]
] | Mechanistic growth models play a major role in bioprocess engineering, design, and control. Their reasonable predictive power and their high level of interpretability make them an essential tool for computer aided engineering methods. Additionally, since they contain knowledge about cell physiology, the parameter estimates provide meaningful insights into the metabolism of the microorganism under study. However, the assumption of time invariance of the model parameters is often violated in real experiments, limiting their capacity to fully explain the observed dynamics. In this work, we propose a framework for identifying such violations and producing insights into misspecified mechanisms. The framework achieves this by allowing kinetic and process parameters to vary in time. We demonstrate the framework's capabilities by fitting a hybrid model based on a simple mechanistic growth model for E. coli with data generated in-silico by a much more complex one and identifying missing kinetics. |
0912.5232 | William Bialek | Greg J. Stephens, William S. Ryu and William Bialek | The emergence of stereotyped behaviors in C. elegans | null | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Animal behaviors are sometimes decomposable into discrete, stereotyped
elements. In one model, such behaviors are triggered by specific commands; in
the extreme case, the discreteness of behavior is traced to the discreteness of
action potentials in the individual command neurons. We use the crawling
behavior of the nematode C. elegans to explore the opposite extreme, in which
discreteness and stereotypy emerge from the dynamics of the entire behavior. A
simple stochastic model for the worm's continuously changing body shape during
crawling has attractors corresponding to forward and backward motion;
noise-driven transitions between these attractors correspond to abrupt
reversals. We show that, with no free parameters, this model generates
reversals at a rate within error bars of that observed experimentally, and the
relatively stereotyped trajectories in the neighborhood of the reversal also
are predicted correctly.
| [
{
"created": "Tue, 29 Dec 2009 01:19:45 GMT",
"version": "v1"
}
] | 2009-12-31 | [
[
"Stephens",
"Greg J.",
""
],
[
"Ryu",
"William S.",
""
],
[
"Bialek",
"William",
""
]
] | Animal behaviors are sometimes decomposable into discrete, stereotyped elements. In one model, such behaviors are triggered by specific commands; in the extreme case, the discreteness of behavior is traced to the discreteness of action potentials in the individual command neurons. We use the crawling behavior of the nematode C. elegans to explore the opposite extreme, in which discreteness and stereotypy emerge from the dynamics of the entire behavior. A simple stochastic model for the worm's continuously changing body shape during crawling has attractors corresponding to forward and backward motion; noise-driven transitions between these attractors correspond to abrupt reversals. We show that, with no free parameters, this model generates reversals at a rate within error bars of that observed experimentally, and the relatively stereotyped trajectories in the neighborhood of the reversal also are predicted correctly.
1511.06997 | Oleg Gradov V. | O.V. Gradov, M.A. Gradova | "MS-Patch-Clamp" or the Possibility of Mass Spectrometry Hybridization
with Patch-Clamp Setups for Single Cell Metabolomics and Channelomics | null | Advances in Biochemistry, Vol. 3, pp. 66-71 (2015) | 10.11648/j.ab.20150306.11 | null | q-bio.SC physics.bio-ph | http://creativecommons.org/publicdomain/zero/1.0/ | In this projecting work we propose a mass spectrometric patch-clamp setup
with the capillary performing both a local potential registration at the cell
membrane and the analyte suction simultaneously. This paper provides a current
literature analysis comparing the possibilities of the novel approach proposed
with the known methods, such as scanning patch-clamp, scanning ion conductance
microscopy, patch clamp based on scanning probe microscopy technology,
quantitative subcellular secondary ion mass spectrometry or "ion microscopy",
live single-cell mass spectrometry, in situ cell-by-cell imaging, single-cell
video-mass spectrometry, etc. We also consider the ways to improve the
informativeness of these methods and particularly emphasize the trend toward
increasing analysis complexity. We propose here a way to improve the
efficiency of the cell trapping to the capillary during MS-patch-clamp, as well
as to provide laser surface ionization using laser trapping and tweezing of
cells with the laser beam transmitted through the capillary as a waveguide. It
is also possible to combine the above system with the microcolumn separation
system or capillary electrophoresis as an optional direction of further
development of the complex of analytical techniques emerging from the MS
variation of patch-clamp.
| [
{
"created": "Sun, 22 Nov 2015 11:34:31 GMT",
"version": "v1"
}
] | 2015-11-24 | [
[
"Gradov",
"O. V.",
""
],
[
"Gradova",
"M. A.",
""
]
] | In this projecting work we propose a mass spectrometric patch-clamp setup with the capillary performing both a local potential registration at the cell membrane and the analyte suction simultaneously. This paper provides a current literature analysis comparing the possibilities of the novel approach proposed with the known methods, such as scanning patch-clamp, scanning ion conductance microscopy, patch clamp based on scanning probe microscopy technology, quantitative subcellular secondary ion mass spectrometry or "ion microscopy", live single-cell mass spectrometry, in situ cell-by-cell imaging, single-cell video-mass spectrometry, etc. We also consider the ways to improve the informativeness of these methods and particularly emphasize the trend toward increasing analysis complexity. We propose here a way to improve the efficiency of the cell trapping to the capillary during MS-patch-clamp, as well as to provide laser surface ionization using laser trapping and tweezing of cells with the laser beam transmitted through the capillary as a waveguide. It is also possible to combine the above system with the microcolumn separation system or capillary electrophoresis as an optional direction of further development of the complex of analytical techniques emerging from the MS variation of patch-clamp.
0706.2285 | Michele Caselle | L. Martignetti and M. Caselle | Universal power law behaviors in genomic sequences and evolutionary
models | 15 pages, 3 figures | null | 10.1103/PhysRevE.76.021902 | null | q-bio.GN cond-mat.other physics.bio-ph q-bio.QM | null | We study the length distribution of a particular class of DNA sequences known
as 5'UTR exons. These exons belong to the messenger RNA of protein coding
genes, but they are not coding (they are located upstream of the coding portion
of the mRNA) and are thus less constrained from an evolutionary point of view.
We show that both in mouse and in human these exons show a very clean power law
decay in their length distribution and suggest a simple evolutionary model
which may explain this finding. We conjecture that this power law behaviour
could indeed be a general feature of higher eukaryotes.
| [
{
"created": "Fri, 15 Jun 2007 12:46:27 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Martignetti",
"L.",
""
],
[
"Caselle",
"M.",
""
]
] | We study the length distribution of a particular class of DNA sequences known as 5'UTR exons. These exons belong to the messenger RNA of protein coding genes, but they are not coding (they are located upstream of the coding portion of the mRNA) and are thus less constrained from an evolutionary point of view. We show that both in mouse and in human these exons show a very clean power law decay in their length distribution and suggest a simple evolutionary model which may explain this finding. We conjecture that this power law behaviour could indeed be a general feature of higher eukaryotes.
2304.05484 | Marcelo Kuperman | Claudia Huaylla, Marcelo N Kuperman, Lucas A. Garibaldi | Statistical measures of complexity applied to ecological networks | null | null | null | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Networks are a convenient way to represent many interactions among different
entities as they provide an efficient and clear methodology to evaluate and
organize relevant data. While there are many features for characterizing
networks there is a quantity that seems rather elusive: Complexity. The
quantification of the complexity of networks is nowadays a fundamental problem.
Here, we present a novel tool for identifying the complexity of ecological
networks. We compare the behavior of two relevant indices of complexity:
K-complexity and singular value decomposition (SVD) entropy. For that, we use
real data and null models. Both null models consist of randomized networks
built by swapping a controlled number of links of the original ones. We analyze
23 plant-pollinator and 19 host-parasite networks as case studies. Our results
show interesting features in the behavior for the K-complexity and SVD entropy
with clear differences between pollinator-plant and host-parasite networks,
especially when the degree distribution is not preserved. Although SVD entropy
has been widely used to characterize network complexity, our analyses show that
K-complexity is a more reliable tool. Additionally, we show that degree
distribution and density are important drivers of network complexity and should
be accounted for in future studies.
| [
{
"created": "Tue, 11 Apr 2023 20:31:38 GMT",
"version": "v1"
}
] | 2023-04-13 | [
[
"Huaylla",
"Claudia",
""
],
[
"Kuperman",
"Marcelo N",
""
],
[
"Garibaldi",
"Lucas A.",
""
]
] | Networks are a convenient way to represent many interactions among different entities as they provide an efficient and clear methodology to evaluate and organize relevant data. While there are many features for characterizing networks there is a quantity that seems rather elusive: Complexity. The quantification of the complexity of networks is nowadays a fundamental problem. Here, we present a novel tool for identifying the complexity of ecological networks. We compare the behavior of two relevant indices of complexity: K-complexity and singular value decomposition (SVD) entropy. For that, we use real data and null models. Both null models consist of randomized networks built by swapping a controlled number of links of the original ones. We analyze 23 plant-pollinator and 19 host-parasite networks as case studies. Our results show interesting features in the behavior for the K-complexity and SVD entropy with clear differences between pollinator-plant and host-parasite networks, especially when the degree distribution is not preserved. Although SVD entropy has been widely used to characterize network complexity, our analyses show that K-complexity is a more reliable tool. Additionally, we show that degree distribution and density are important drivers of network complexity and should be accounted for in future studies.
1305.4051 | Marco Zoli | Marco Zoli | Helix untwisting and bubble formation in circular DNA | The Journal of Chemical Physics, vol. 138 (2013), in press | J. Chem. Phys. Vol.138, 205103 (2013) | 10.1063/1.4807381 | null | q-bio.BM cond-mat.soft physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The base pair fluctuations and helix untwisting are examined for a circular
molecule. A realistic mesoscopic model including twisting degrees of freedom
and bending of the molecular axis is proposed. The computational method, based
on path integral techniques, simulates a distribution of topoisomers with
various twist numbers and finds the energetically most favorable molecular
conformation as a function of temperature. The method can predict helical
repeat, openings loci and bubble sizes for specific sequences in a broad
temperature range. Some results are presented for a short DNA circle recently
identified in mammalian cells.
| [
{
"created": "Fri, 17 May 2013 11:58:02 GMT",
"version": "v1"
}
] | 2013-05-30 | [
[
"Zoli",
"Marco",
""
]
] | The base pair fluctuations and helix untwisting are examined for a circular molecule. A realistic mesoscopic model including twisting degrees of freedom and bending of the molecular axis is proposed. The computational method, based on path integral techniques, simulates a distribution of topoisomers with various twist numbers and finds the energetically most favorable molecular conformation as a function of temperature. The method can predict helical repeat, openings loci and bubble sizes for specific sequences in a broad temperature range. Some results are presented for a short DNA circle recently identified in mammalian cells. |
1810.09879 | Sebastian Kahl | Sebastian Kahl, Stefan Kopp | A predictive processing model of perception and action for self-other
distinction | Main text including supplementary materials. This manuscript is
currently under review at Frontiers in Psychology, Cognitive Science | null | 10.3389/fpsyg.2018.02421 | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During interaction with others, we perceive and produce social actions in
close temporal distance or even simultaneously. It has been argued that the
motor system is involved in perception and action, playing a fundamental role
in the handling of actions produced by oneself and by others. But how does it
distinguish in this processing between self and other, thus contributing to
self-other distinction? In this paper we propose a hierarchical model of
sensorimotor coordination based on principles of perception-action coupling and
predictive processing in which self-other distinction arises during action and
perception. For this we draw on mechanisms assumed for the integration of cues
for a sense of agency, i.e., the sense that an action is self-generated. We
report results from simulations of different scenarios, showing that the model
is not only able to minimize free energy during perception and action, but also
that the model can correctly attribute the sense of agency to its own actions.
| [
{
"created": "Tue, 23 Oct 2018 14:27:30 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Oct 2018 11:44:00 GMT",
"version": "v2"
}
] | 2018-12-04 | [
[
"Kahl",
"Sebastian",
""
],
[
"Kopp",
"Stefan",
""
]
] | During interaction with others, we perceive and produce social actions in close temporal distance or even simultaneously. It has been argued that the motor system is involved in perception and action, playing a fundamental role in the handling of actions produced by oneself and by others. But how does it distinguish in this processing between self and other, thus contributing to self-other distinction? In this paper we propose a hierarchical model of sensorimotor coordination based on principles of perception-action coupling and predictive processing in which self-other distinction arises during action and perception. For this we draw on mechanisms assumed for the integration of cues for a sense of agency, i.e., the sense that an action is self-generated. We report results from simulations of different scenarios, showing that the model is not only able to minimize free energy during perception and action, but also that the model can correctly attribute the sense of agency to its own actions.
2204.06070 | Satyaki Mazumder | Avishek Chatterjee, Satyaki Mazumder, Koel Das | Reversing Food Craving Preference Through Multisensory Exposure | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experiencing food craving is nearly ubiquitous and has several negative
pathological impacts, but effective intervention strategies to control or
reverse craving remain limited. Food cue-reactivity tasks are often used to
study food craving but most paradigms ignore individual food preferences, which
could confound the findings. We explored the possibility of reversing food
craving preference using psychophysical tasks on human participants considering
their individual food preferences in a multisensory food exposure set-up.
Participants were grouped into Positive Control (PC), Negative Control (NC),
and Neutral Control (NEC) based on their preference for sweet and savory items.
Participants reported their momentary craving of the displayed food stimuli
through desire scale and bidding scale (willingness to pay) pre and post
multisensory exposure. Participants were exposed to food items they either
liked or disliked. Our results asserted the effect of the multisensory food
exposure showing statistically significant increase in food craving for
negative control post-exposure to disliked food items.
Using computational model and statistical methods we also show that desire
for food does not necessarily translate to willingness to pay every time and
instantaneous subjective valuation of food craving is an important parameter
for subsequent action.
Our results further demonstrate the role of parietal N200 and centro-parietal
P300 in reversing craving preference.
| [
{
"created": "Tue, 12 Apr 2022 20:18:35 GMT",
"version": "v1"
}
] | 2022-04-14 | [
[
"Chatterjee",
"Avishek",
""
],
[
"Mazumder",
"Satyaki",
""
],
[
"Das",
"Koel",
""
]
] | Experiencing food craving is nearly ubiquitous and has several negative pathological impacts, but effective intervention strategies to control or reverse craving remain limited. Food cue-reactivity tasks are often used to study food craving but most paradigms ignore individual food preferences, which could confound the findings. We explored the possibility of reversing food craving preference using psychophysical tasks on human participants considering their individual food preferences in a multisensory food exposure set-up. Participants were grouped into Positive Control (PC), Negative Control (NC), and Neutral Control (NEC) based on their preference for sweet and savory items. Participants reported their momentary craving of the displayed food stimuli through desire scale and bidding scale (willingness to pay) pre and post multisensory exposure. Participants were exposed to food items they either liked or disliked. Our results asserted the effect of the multisensory food exposure showing statistically significant increase in food craving for negative control post-exposure to disliked food items. Using computational model and statistical methods we also show that desire for food does not necessarily translate to willingness to pay every time and instantaneous subjective valuation of food craving is an important parameter for subsequent action. Our results further demonstrate the role of parietal N200 and centro-parietal P300 in reversing craving preference. |
1902.01511 | Michael B\"orsch | Andr\'e Dathe, Thomas Heitkamp, Iv\'an P\'erez, Hendrik Sielaff, Anika
Westphal, Stefanie Reuter, Ralf Mrowka, Michael B\"orsch | Observing monomer - dimer transitions of neurotensin receptors 1 in
single SMALPs by homoFRET and in an ABELtrap | 12 pages, 6 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | G protein-coupled receptors (GPCRs) are a large superfamily of membrane
proteins that are activated by extracellular small molecules or photons.
Neurotensin receptor 1 (NTSR1) is a GPCR that is activated by neurotensin, i.e.
a 13 amino acid peptide. Binding of neurotensin induces conformational changes
in the receptor that trigger the intracellular signaling processes. While
recent single-molecule studies have reported a dynamic monomer - dimer
equilibrium of NTSR1 in vitro, a biophysical characterization of the
oligomerization status of NTSR1 in living mammalian cells is complicated. Here
we report on the oligomerization state of the human NTSR1 tagged with mRuby3 by
dissolving the plasma membranes of living HEK293T cells into 10 nm-sized
soluble lipid nanoparticles by addition of styrene-maleic acid copolymers
(SMALPs). Single SMALPs were analyzed one after another in solution by
multi-parameter single molecule spectroscopy including brightness, fluorescence
lifetime and anisotropy for homoFRET. Brightness analysis was improved using
single SMALP detection in a confocal ABELtrap for extended observation times in
solution. A bimodal brightness distribution indicated a significant fraction of
dimeric NTSR1 in SMALPs or in the plasma membrane, respectively, before
addition of neurotensin.
| [
{
"created": "Tue, 5 Feb 2019 01:25:56 GMT",
"version": "v1"
}
] | 2019-02-06 | [
[
"Dathe",
"André",
""
],
[
"Heitkamp",
"Thomas",
""
],
[
"Pérez",
"Iván",
""
],
[
"Sielaff",
"Hendrik",
""
],
[
"Westphal",
"Anika",
""
],
[
"Reuter",
"Stefanie",
""
],
[
"Mrowka",
"Ralf",
""
],
[
"B... | G protein-coupled receptors (GPCRs) are a large superfamily of membrane proteins that are activated by extracellular small molecules or photons. Neurotensin receptor 1 (NTSR1) is a GPCR that is activated by neurotensin, i.e. a 13 amino acid peptide. Binding of neurotensin induces conformational changes in the receptor that trigger the intracellular signaling processes. While recent single-molecule studies have reported a dynamic monomer - dimer equilibrium of NTSR1 in vitro, a biophysical characterization of the oligomerization status of NTSR1 in living mammalian cells is complicated. Here we report on the oligomerization state of the human NTSR1 tagged with mRuby3 by dissolving the plasma membranes of living HEK293T cells into 10 nm-sized soluble lipid nanoparticles by addition of styrene-maleic acid copolymers (SMALPs). Single SMALPs were analyzed one after another in solution by multi-parameter single molecule spectroscopy including brightness, fluorescence lifetime and anisotropy for homoFRET. Brightness analysis was improved using single SMALP detection in a confocal ABELtrap for extended observation times in solution. A bimodal brightness distribution indicated a significant fraction of dimeric NTSR1 in SMALPs or in the plasma membrane, respectively, before addition of neurotensin. |
1909.12426 | Hernan Solari Dr | Mario A. Natiello and Hern\'an G. Solari | Modeling population dynamics based on experimental trials with
genetically modified (RIDL) mosquitoes | 39 pages, 5 figures, 2 tables | Ecological Modeling 424, May 2020, 108986 | 10.1016/j.ecolmodel.2020.108986 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the RIDL-SIT technology has been field-tested for control of Aedes
aegypti. The technique consists of releasing genetically modified mosquitoes
carrying a "lethal gene". In 2016 the World Health Organisation (WHO) and the
Pan-American Health Organization (PAHO) recommended that their constituent
countries test the new technologies proposed to control Aedes aegypti
populations. However, issues concerning effectiveness and ecological impact
have not been thoroughly studied so far. In order to study these issues, we
develop an ecological model compatible with the information available as of
2016. It presents an interdependent dynamics of mosquito populations and food
in a homogeneous setting. Mosquito populations are described in a stochastic
compartmental setup in terms of reaction norms depending on the available food
in the environment. The development of the model allows us to indicate some
critical biological knowledge that is missing and could (should) be produced.
Hybridisation levels, release numbers during and after intervention and
population recovery time after the intervention as a function of intervention
duration and target are calculated under different hypotheses with regard to
the fitness of hybrids and compared with two field studies of actual
interventions. The minimal model should serve as a basis for detailed models
when the necessary information to construct them is produced. For the time
being, the model shows that nature will not clean the non-lethal introgressed
genes.
| [
{
"created": "Thu, 26 Sep 2019 22:50:49 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Oct 2019 23:28:25 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Feb 2020 11:43:36 GMT",
"version": "v3"
}
] | 2020-06-02 | [
[
"Natiello",
"Mario A.",
""
],
[
"Solari",
"Hernán G.",
""
]
] | Recently, the RIDL-SIT technology has been field-tested for control of Aedes aegypti. The technique consists of releasing genetically modified mosquitoes carrying a "lethal gene". In 2016 the World Health Organisation (WHO) and the Pan-American Health Organization (PAHO) recommended that their constituent countries test the new technologies proposed to control Aedes aegypti populations. However, issues concerning effectiveness and ecological impact have not been thoroughly studied so far. In order to study these issues, we develop an ecological model compatible with the information available as of 2016. It presents an interdependent dynamics of mosquito populations and food in a homogeneous setting. Mosquito populations are described in a stochastic compartmental setup in terms of reaction norms depending on the available food in the environment. The development of the model allows us to indicate some critical biological knowledge that is missing and could (should) be produced. Hybridisation levels, release numbers during and after intervention and population recovery time after the intervention as a function of intervention duration and target are calculated under different hypotheses with regard to the fitness of hybrids and compared with two field studies of actual interventions. The minimal model should serve as a basis for detailed models when the necessary information to construct them is produced. For the time being, the model shows that nature will not clear out the non-lethal introgressed genes. |
2006.12578 | Richard Futrell | Richard Futrell | An information-theoretic account of semantic interference in word
production | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I present a computational-level model of semantic interference effects in
word production. Word production is cast as a rate-distortion problem where an
agent selects words to minimize a measure of cost while also minimizing the
resources used to compute the output word based on perceptual input and
behavioral goals. I show that similarity-based interference among words arises
naturally in this setup, and I present a series of simulations showing that the
model captures some of the key empirical patterns observed in Stroop and
Picture-Word Interference paradigms. I argue that the rate-distortion account
of interference provides a high-level formalization of computational principles
that are instantiated more mechanistically in existing models.
| [
{
"created": "Mon, 22 Jun 2020 19:29:33 GMT",
"version": "v1"
}
] | 2020-06-24 | [
[
"Futrell",
"Richard",
""
]
] | I present a computational-level model of semantic interference effects in word production. Word production is cast as a rate-distortion problem where an agent selects words to minimize a measure of cost while also minimizing the resources used to compute the output word based on perceptual input and behavioral goals. I show that similarity-based interference among words arises naturally in this setup, and I present a series of simulations showing that the model captures some of the key empirical patterns observed in Stroop and Picture-Word Interference paradigms. I argue that the rate-distortion account of interference provides a high-level formalization of computational principles that are instantiated more mechanistically in existing models. |
1911.02243 | Khaled Khleifat Dr | M. Jaafreh, K.M. Khleifat, H. Qaralleh, M.O Al-limoun | Antibacterial and Antioxidant Activities of Centaurea damascena
Methanolic Extract | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The family Asteraceae includes a large number of Centaurea species which have
been applied in folk medicine. One member of this family is Centaurea
damascena, which was tested here for its antibacterial and antioxidant
activities as well as its toxicity. The aims of the study were to
determine the antimicrobial and antioxidant activities and toxicity of
methanolic plant extracts of Centaurea damascena. The methanolic extracts were
screened for their antibacterial activity against nine bacteria (Staphylococcus
aureus ATCC 43300, Bacillus subtilis ATCC 6633, Micrococcus luteus ATCC 10240,
and Staphylococcus epidermidis ATCC 12228, Escherichia coli ATCC 11293,
Pseudomonas aeruginosa and Klebsiella pneumoniae, Enterobacter aerogenes ATCC
13048 and Salmonella typhi ATCC 19430). The antibacterial activity was assessed
using the disc diffusion method, and minimum inhibitory concentrations
(MIC) were determined by the microdilution method. The extracts from Centaurea damascena
possessed antibacterial activity against several of the tested microorganisms.
The MIC of methanol extract of C. damascena ranged from 60 to 1100 microgram
per mL. Free radical scavenging capacity of the C. damascena methanol extract
was assessed by the DPPH and FRAP assays. DPPH radicals were scavenged with an IC50
value of 17.08 microgram per mL. The antioxidant capacity obtained by FRAP
was 51.9 mg Trolox per gram dry weight. The total phenolic
content of the methanol extracts of aerial parts, as estimated by the
Folin-Ciocalteu reagent method, was about 460 mg GAE per gram. The phenolic
contents of the extracts correlate strongly with their antioxidant activity,
confirming that the antioxidant activity of these plant extracts depends
considerably on their phenolic content.
| [
{
"created": "Wed, 6 Nov 2019 08:01:38 GMT",
"version": "v1"
}
] | 2019-11-07 | [
[
"Jaafreh",
"M.",
""
],
[
"Khleifat",
"K. M.",
""
],
[
"Qaralleh",
"H.",
""
],
[
"Al-limoun",
"M. O",
""
]
] | The family Asteraceae includes a large number of Centaurea species which have been applied in folk medicine. One member of this family is Centaurea damascena, which was tested here for its antibacterial and antioxidant activities as well as its toxicity. The aims of the study were to determine the antimicrobial and antioxidant activities and toxicity of methanolic plant extracts of Centaurea damascena. The methanolic extracts were screened for their antibacterial activity against nine bacteria (Staphylococcus aureus ATCC 43300, Bacillus subtilis ATCC 6633, Micrococcus luteus ATCC 10240, and Staphylococcus epidermidis ATCC 12228, Escherichia coli ATCC 11293, Pseudomonas aeruginosa and Klebsiella pneumoniae, Enterobacter aerogenes ATCC 13048 and Salmonella typhi ATCC 19430). The antibacterial activity was assessed using the disc diffusion method, and minimum inhibitory concentrations (MIC) were determined by the microdilution method. The extracts from Centaurea damascena possessed antibacterial activity against several of the tested microorganisms. The MIC of methanol extract of C. damascena ranged from 60 to 1100 microgram per mL. Free radical scavenging capacity of the C. damascena methanol extract was assessed by the DPPH and FRAP assays. DPPH radicals were scavenged with an IC50 value of 17.08 microgram per mL. The antioxidant capacity obtained by FRAP was 51.9 mg Trolox per gram dry weight. The total phenolic content of the methanol extracts of aerial parts, as estimated by the Folin-Ciocalteu reagent method, was about 460 mg GAE per gram. The phenolic contents of the extracts correlate strongly with their antioxidant activity, confirming that the antioxidant activity of these plant extracts depends considerably on their phenolic content. |
q-bio/0401008 | Per Arne Rikvold | R.K.P. Zia (Virginia Tech) and Per Arne Rikvold (Florida State Univ.) | Fluctuations and correlations in an individual-based model of biological
coevolution | 25 pages, 2 figures | J. Phys. A: Mathematical and General 37, 5135 - 5155 (2004). | 10.1088/0305-4470/37/19/003 | null | q-bio.PE cond-mat.stat-mech | null | We extend our study of a simple model of biological coevolution to its
statistical properties. Starting with a complete description in terms of a
master equation, we provide its relation to the deterministic evolution
equations used in previous investigations. The stationary states of the
mutationless model are generally well approximated by Gaussian distributions,
so that the fluctuations and correlations of the populations can be computed
analytically. Several specific cases are studied by Monte Carlo simulations,
and there is excellent agreement between the data and the theoretical
predictions.
| [
{
"created": "Wed, 7 Jan 2004 21:18:24 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Zia",
"R. K. P.",
"",
"Virginia Tech"
],
[
"Rikvold",
"Per Arne",
"",
"Florida State Univ."
]
] | We extend our study of a simple model of biological coevolution to its statistical properties. Starting with a complete description in terms of a master equation, we provide its relation to the deterministic evolution equations used in previous investigations. The stationary states of the mutationless model are generally well approximated by Gaussian distributions, so that the fluctuations and correlations of the populations can be computed analytically. Several specific cases are studied by Monte Carlo simulations, and there is excellent agreement between the data and the theoretical predictions. |
2405.12998 | Lior Pachter | Laura Luebbert and Lior Pachter | The miscalibration of the honeybee odometer | 16 pages | null | null | null | q-bio.OT physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | We examine a series of articles on honeybee odometry and navigation published
between 1996 and 2010, and find inconsistencies in results, duplicated figures,
indications of data manipulation, and incorrect calculations. This suggests
that redoing the experiments in question is warranted.
| [
{
"created": "Wed, 8 May 2024 23:50:35 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Luebbert",
"Laura",
""
],
[
"Pachter",
"Lior",
""
]
] | We examine a series of articles on honeybee odometry and navigation published between 1996 and 2010, and find inconsistencies in results, duplicated figures, indications of data manipulation, and incorrect calculations. This suggests that redoing the experiments in question is warranted. |
1407.7821 | Michael Sheinman | Michael Sheinman and Florian Massip and Peter F. Arndt | Statistical Properties of Pairwise Distances between Leaves on a Random
Yule Tree | 14 pages, 8 figures | PLoS ONE 10(3): e0120206 | 10.1371/journal.pone.0120206 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Yule tree is the result of a branching process with constant birth and
death rates. Such a process serves as an instructive null model of many
empirical systems, for instance, the evolution of species leading to a
phylogenetic tree. However, often in phylogeny the only available information
is the pairwise distances between a small fraction of extant species
representing the leaves of the tree. In this article we study statistical
properties of the pairwise distances in a Yule tree. Using a method based on a
recursion, we derive an exact, analytic and compact formula for the expected
number of pairs separated by a certain time distance. This number turns out to
follow an increasing exponential function. This property of a Yule tree can
serve as a simple test for empirical data to be well described by a Yule
process. We further use this recursive method to calculate the expected number
of the $n$-most closely related pairs of leaves and the number of cherries
separated by a certain time distance. To make our results more useful for
realistic scenarios, we explicitly take into account that the leaves of a tree
may be incompletely sampled and derive a criterion for poorly sampled
phylogenies. We show that our result can account for empirical data, using two
families of bird species.
| [
{
"created": "Tue, 29 Jul 2014 18:49:36 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Aug 2014 16:09:59 GMT",
"version": "v2"
}
] | 2015-04-02 | [
[
"Sheinman",
"Michael",
""
],
[
"Massip",
"Florian",
""
],
[
"Arndt",
"Peter F.",
""
]
] | A Yule tree is the result of a branching process with constant birth and death rates. Such a process serves as an instructive null model of many empirical systems, for instance, the evolution of species leading to a phylogenetic tree. However, often in phylogeny the only available information is the pairwise distances between a small fraction of extant species representing the leaves of the tree. In this article we study statistical properties of the pairwise distances in a Yule tree. Using a method based on a recursion, we derive an exact, analytic and compact formula for the expected number of pairs separated by a certain time distance. This number turns out to follow an increasing exponential function. This property of a Yule tree can serve as a simple test for empirical data to be well described by a Yule process. We further use this recursive method to calculate the expected number of the $n$-most closely related pairs of leaves and the number of cherries separated by a certain time distance. To make our results more useful for realistic scenarios, we explicitly take into account that the leaves of a tree may be incompletely sampled and derive a criterion for poorly sampled phylogenies. We show that our result can account for empirical data, using two families of bird species. |
1611.04842 | Francesco Fumarola | Francesco Fumarola | The Role of Word Length in Semantic Topology | 17 pages, 10 figures | null | null | null | q-bio.NC cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A topological argument is presented concerning the structure of semantic
space, based on the negative correlation between polysemy and word length. The
resulting graph structure is applied to the modeling of free-recall
experiments, resulting in predictions on the comparative values of recall
probabilities. Associative recall is found to favor longer words whereas
sequential recall is found to favor shorter words. Data from the PEERS
experiments of Lohnas et al. (2015) and Healey and Kahana (2016) confirm both
predictons, with correlation coefficients $r_{seq}= -0.17$ and $r_{ass}=
+0.17$. The argument is then applied to predicting global properties of list
recall, which leads to a novel explanation for the word-length effect based on
the optimization of retrieval strategies.
| [
{
"created": "Tue, 15 Nov 2016 14:12:41 GMT",
"version": "v1"
}
] | 2016-11-16 | [
[
"Fumarola",
"Francesco",
""
]
] | A topological argument is presented concerning the structure of semantic space, based on the negative correlation between polysemy and word length. The resulting graph structure is applied to the modeling of free-recall experiments, resulting in predictions on the comparative values of recall probabilities. Associative recall is found to favor longer words whereas sequential recall is found to favor shorter words. Data from the PEERS experiments of Lohnas et al. (2015) and Healey and Kahana (2016) confirm both predictions, with correlation coefficients $r_{seq}= -0.17$ and $r_{ass}= +0.17$. The argument is then applied to predicting global properties of list recall, which leads to a novel explanation for the word-length effect based on the optimization of retrieval strategies. |
2008.06718 | Jo\~ao Gondim | Jo\~ao A. M. Gondim, Thiago Yukio Tanaka | SEIRD model in heterogenous populations: The role of commuting and
social inequalities in the COVID-19 dynamics | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we analyze the effects of commuting and social inequalities on
the epidemic development of the novel coronavirus (COVID-19). With this aim we
consider a SEIRD (susceptible, exposed, infected, recovered and dead by
disease) model without vital dynamics in a population divided into patches that
have different economic resources and in which the individuals can commute from
one patch to another (bilaterally). In the modeling we choose the social and
commuting parameters arbitrarily. We calculate the basic reproductive number
$R_0$ with the next generation approach and analyze the sensitivity of $R_0$
with respect to the parameters. Furthermore, we run numerical simulations
considering a population divided into two patches to draw some conclusions about
the number of total infected individuals and cumulative deaths for our model
considering heterogeneous populations.
| [
{
"created": "Sat, 15 Aug 2020 13:48:12 GMT",
"version": "v1"
}
] | 2020-08-18 | [
[
"Gondim",
"João A. M.",
""
],
[
"Tanaka",
"Thiago Yukio",
""
]
] | In this paper we analyze the effects of commuting and social inequalities on the epidemic development of the novel coronavirus (COVID-19). With this aim we consider a SEIRD (susceptible, exposed, infected, recovered and dead by disease) model without vital dynamics in a population divided into patches that have different economic resources and in which the individuals can commute from one patch to another (bilaterally). In the modeling we choose the social and commuting parameters arbitrarily. We calculate the basic reproductive number $R_0$ with the next generation approach and analyze the sensitivity of $R_0$ with respect to the parameters. Furthermore, we run numerical simulations considering a population divided into two patches to draw some conclusions about the number of total infected individuals and cumulative deaths for our model considering heterogeneous populations. |
1406.1347 | Karsten Kruse | Mike Bonny and Jakob Schweizer and Martin Loose and Ingolf M\"onch and
Petra Schwille and Karsten Kruse | Response to Halatek and Frey: Effective two-dimensional model does
account for geometry sensing by self-organized proteins patterns | 16 pages, 6 figures, 2 supplementary movies | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Min proteins from Escherichia coli can self-organize into traveling waves
on supported lipid bilayers. In Proc. Natl. Acad. Sci. USA 109, 15283 (2012) we
showed that these waves are guided along the boundaries of membrane patches. We
introduced an effective two-dimensional model reproducing the observed
patterns. In arXiv:1403.5934v1, Jacob Halatek and Erwin Frey contest the
ability of our effective two-dimensional model to describe the dynamics of Min
proteins on patterned supported lipid bilayers. We thank Halatek and Frey for
their interest in our work and for again highlighting the importance of
dimensionality and geometry for pattern formation by the Min proteins. Here we
reply in detail to the objections by Halatek and Frey and show that (1) our
effective two-dimensional model reproduces the observed patterns on isolated
patches and that (2) a three-dimensional version of our model produces similar
patterns on square patches.
| [
{
"created": "Thu, 5 Jun 2014 11:41:31 GMT",
"version": "v1"
}
] | 2014-06-06 | [
[
"Bonny",
"Mike",
""
],
[
"Schweizer",
"Jakob",
""
],
[
"Loose",
"Martin",
""
],
[
"Mönch",
"Ingolf",
""
],
[
"Schwille",
"Petra",
""
],
[
"Kruse",
"Karsten",
""
]
] | The Min proteins from Escherichia coli can self-organize into traveling waves on supported lipid bilayers. In Proc. Natl. Acad. Sci. USA 109, 15283 (2012) we showed that these waves are guided along the boundaries of membrane patches. We introduced an effective two-dimensional model reproducing the observed patterns. In arXiv:1403.5934v1, Jacob Halatek and Erwin Frey contest the ability of our effective two-dimensional model to describe the dynamics of Min proteins on patterned supported lipid bilayers. We thank Halatek and Frey for their interest in our work and for again highlighting the importance of dimensionality and geometry for pattern formation by the Min proteins. Here we reply in detail to the objections by Halatek and Frey and show that (1) our effective two-dimensional model reproduces the observed patterns on isolated patches and that (2) a three-dimensional version of our model produces similar patterns on square patches. |
2203.15415 | Sean Knight | Sean Knight, Navjot Gadda | Spatiotemporal Patterns in Neurobiology: An Overview for Future
Artificial Intelligence | 8 pages | null | null | null | q-bio.NC cs.AI q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been increasing interest in developing models and
tools to address the complex patterns of connectivity found in brain tissue.
Specifically, this is due to a need to understand how emergent properties
arise from these network structures at multiple spatiotemporal scales. We
argue that computational models are key tools for elucidating the possible
functionalities that can emerge from interactions of heterogeneous neurons
connected by complex networks on multi-scale temporal and spatial domains. Here
we review several classes of models including spiking neurons, integrate and
fire neurons with short term plasticity (STP), conductance based
integrate-and-fire models with STP, and population density neural field (PDNF)
models using simple examples with emphasis on neuroscience applications while
also providing some potential future research directions for AI. These
computational approaches allow us to explore the impact of changing underlying
mechanisms on resulting network function both experimentally as well as
theoretically. Thus we hope these studies will inform future developments in
artificial intelligence algorithms as well as help validate our understanding
of brain processes based on experiments in animals or humans.
| [
{
"created": "Tue, 29 Mar 2022 10:28:01 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Apr 2022 15:20:15 GMT",
"version": "v2"
}
] | 2022-04-15 | [
[
"Knight",
"Sean",
""
],
[
"Gadda",
"Navjot",
""
]
] | In recent years, there has been increasing interest in developing models and tools to address the complex patterns of connectivity found in brain tissue. Specifically, this is due to a need to understand how emergent properties arise from these network structures at multiple spatiotemporal scales. We argue that computational models are key tools for elucidating the possible functionalities that can emerge from interactions of heterogeneous neurons connected by complex networks on multi-scale temporal and spatial domains. Here we review several classes of models including spiking neurons, integrate and fire neurons with short term plasticity (STP), conductance based integrate-and-fire models with STP, and population density neural field (PDNF) models using simple examples with emphasis on neuroscience applications while also providing some potential future research directions for AI. These computational approaches allow us to explore the impact of changing underlying mechanisms on resulting network function both experimentally as well as theoretically. Thus we hope these studies will inform future developments in artificial intelligence algorithms as well as help validate our understanding of brain processes based on experiments in animals or humans. |
q-bio/0406002 | Lewis Geer | Lewis Y. Geer, Sanford P. Markey, Jeffrey A. Kowalak, Lukas Wagner,
Ming Xu, Dawn M. Maynard, Xiaoyu Yang, Wenyao Shi, Stephen H. Bryant | Open Mass Spectrometry Search Algorithm | null | null | null | null | q-bio.QM | null | Large numbers of MS/MS peptide spectra generated in proteomics experiments
require efficient, sensitive and specific algorithms for peptide
identification. In the Open Mass Spectrometry Search Algorithm [OMSSA],
specificity is calculated by a classic probability score using an explicit
model for matching experimental spectra to sequences. At default thresholds,
OMSSA matches more spectra from a standard protein cocktail than a comparable
algorithm. OMSSA is designed to be faster than published algorithms in
searching large MS/MS datasets.
| [
{
"created": "Tue, 1 Jun 2004 13:27:31 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Geer",
"Lewis Y.",
""
],
[
"Markey",
"Sanford P.",
""
],
[
"Kowalak",
"Jeffrey A.",
""
],
[
"Wagner",
"Lukas",
""
],
[
"Xu",
"Ming",
""
],
[
"Maynard",
"Dawn M.",
""
],
[
"Yang",
"Xiaoyu",
""
],
[
... | Large numbers of MS/MS peptide spectra generated in proteomics experiments require efficient, sensitive and specific algorithms for peptide identification. In the Open Mass Spectrometry Search Algorithm [OMSSA], specificity is calculated by a classic probability score using an explicit model for matching experimental spectra to sequences. At default thresholds, OMSSA matches more spectra from a standard protein cocktail than a comparable algorithm. OMSSA is designed to be faster than published algorithms in searching large MS/MS datasets. |
1404.7729 | Maba Matadi Boniface | Maba B. Matadi and Kesh S. Govinder | Singularity and Symmetry Analyses for Tuberculosis Epidemics | 12 Pages, 2 figures | null | null | null | q-bio.QM math.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyse the model of Tuberculosis due to Blower (Nature Medicine 1(8)
815-821) from the point of view of symmetry and singularity analysis. From the
study we demonstrate the integrability of the model and present
an explicit solution.
| [
{
"created": "Thu, 24 Apr 2014 11:30:57 GMT",
"version": "v1"
}
] | 2014-07-18 | [
[
"Matadi",
"Maba B.",
""
],
[
"Govinder",
"Kesh S.",
""
]
] | We analyse the model of Tuberculosis due to Blower (Nature Medicine 1(8) 815-821) from the point of view of symmetry and singularity analysis. From the study we demonstrate the integrability of the model and present an explicit solution. |
1209.3456 | Justin Blumenstiel | Justin P. Blumenstiel, Xi Chen, Miaomiao He and Casey M. Bergman | An age-of-allele test of neutrality for transposable element insertions | 40 pages, 6 figures, Supplemental Data available: jblumens@ku.edu | Genetics. 2014 Feb;196(2):523-38 | 10.1534/genetics.113.158147 | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | How natural selection acts to limit the proliferation of transposable
elements (TEs) in genomes has been of interest to evolutionary biologists for
many years. To describe TE dynamics in populations, many previous studies have
used models of transposition-selection equilibrium that rely on the assumption
of a constant rate of transposition. However, since TE invasions are known to
happen in bursts through time, this assumption may not be reasonable in natural
populations. Here we propose a test of neutrality for TE insertions that does
not rely on the assumption of a constant transposition rate. We consider the
case of TE insertions that have been ascertained from a single haploid
reference genome sequence and have subsequently had their allele frequency
estimated in a population sample. By conditioning on the age of an individual
TE insertion (using information contained in the number of substitutions that
have occurred within the TE sequence since insertion), we determine the
probability distribution for the insertion allele frequency in a population
sample under neutrality. Taking models of varying population size into account,
we then evaluate predictions of our model against allele frequency data from
190 retrotransposon insertions sampled from North American and African
populations of Drosophila melanogaster. Using this non-equilibrium model, we
are able to explain about 80% of the variance in TE insertion allele
frequencies based on age alone. Controlling both for nonequilibrium dynamics of
transposition and host demography, we provide evidence for negative selection
acting against most TEs as well as for positive selection acting on a small
subset of TEs. Our work establishes a new framework for the analysis of the
evolutionary forces governing large insertion mutations like TEs, gene
duplications or other copy number variants.
| [
{
"created": "Sun, 16 Sep 2012 03:20:44 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jun 2013 03:00:20 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Oct 2013 03:13:55 GMT",
"version": "v3"
}
] | 2014-03-04 | [
[
"Blumenstiel",
"Justin P.",
""
],
[
"Chen",
"Xi",
""
],
[
"He",
"Miaomiao",
""
],
[
"Bergman",
"Casey M.",
""
]
] | How natural selection acts to limit the proliferation of transposable elements (TEs) in genomes has been of interest to evolutionary biologists for many years. To describe TE dynamics in populations, many previous studies have used models of transposition-selection equilibrium that rely on the assumption of a constant rate of transposition. However, since TE invasions are known to happen in bursts through time, this assumption may not be reasonable in natural populations. Here we propose a test of neutrality for TE insertions that does not rely on the assumption of a constant transposition rate. We consider the case of TE insertions that have been ascertained from a single haploid reference genome sequence and have subsequently had their allele frequency estimated in a population sample. By conditioning on the age of an individual TE insertion (using information contained in the number of substitutions that have occurred within the TE sequence since insertion), we determine the probability distribution for the insertion allele frequency in a population sample under neutrality. Taking models of varying population size into account, we then evaluate predictions of our model against allele frequency data from 190 retrotransposon insertions sampled from North American and African populations of Drosophila melanogaster. Using this non-equilibrium model, we are able to explain about 80% of the variance in TE insertion allele frequencies based on age alone. Controlling both for nonequilibrium dynamics of transposition and host demography, we provide evidence for negative selection acting against most TEs as well as for positive selection acting on a small subset of TEs. Our work establishes a new framework for the analysis of the evolutionary forces governing large insertion mutations like TEs, gene duplications or other copy number variants. |
1910.06899 | Maxim Vaysburd | Maxim Vaysburd | Identifying Epigenetic Signature of Breast Cancer with Machine Learning | 8 pages, 4 figures, 2 tables | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The research reported in this paper identifies the epigenetic biomarker
(methylation beta pattern) of breast cancer. Many cancers are triggered by
abnormal gene expression levels caused by aberrant methylation of CpG sites in
the DNA. In order to develop early diagnostics of cancer-causing methylations
and to develop a treatment, it is necessary to identify a few dozen key
cancer-related CpG methylation sites out of the millions of locations in the
DNA. This research used the public TCGA dataset to train a TensorFlow machine
learning model to classify breast cancer versus non-breast-cancer tissue
samples, based on over 300,000 methylation beta values in each sample. L1
regularization was applied to identify the CpG methylation sites most important
for accurate classification. It was hypothesized that CpG sites with the
highest learned model weights correspond to DNA locations most relevant to
breast cancer. A reduced model trained on methylation betas of just the 25 CpG
sites having the highest weights in the full model (trained on methylation
betas at over 300,000 CpG sites) has achieved over 94% accuracy on evaluation
data, confirming that the identified 25 CpG sites are indeed a biomarker of
breast cancer.
| [
{
"created": "Sat, 12 Oct 2019 19:46:17 GMT",
"version": "v1"
}
] | 2019-10-16 | [
[
"Vaysburd",
"Maxim",
""
]
] | The research reported in this paper identifies the epigenetic biomarker (methylation beta pattern) of breast cancer. Many cancers are triggered by abnormal gene expression levels caused by aberrant methylation of CpG sites in the DNA. In order to develop early diagnostics of cancer-causing methylations and to develop a treatment, it is necessary to identify a few dozen key cancer-related CpG methylation sites out of the millions of locations in the DNA. This research used the public TCGA dataset to train a TensorFlow machine learning model to classify breast cancer versus non-breast-cancer tissue samples, based on over 300,000 methylation beta values in each sample. L1 regularization was applied to identify the CpG methylation sites most important for accurate classification. It was hypothesized that CpG sites with the highest learned model weights correspond to DNA locations most relevant to breast cancer. A reduced model trained on methylation betas of just the 25 CpG sites having the highest weights in the full model (trained on methylation betas at over 300,000 CpG sites) has achieved over 94% accuracy on evaluation data, confirming that the identified 25 CpG sites are indeed a biomarker of breast cancer. |
0807.3238 | Alain Destexhe | Martin Pospischil, Zuzanna Piwkowska, Thierry Bal and Alain Destexhe | Extracting synaptic conductances from single membrane potential traces | Neuroscience (in press) | Neuroscience 158: 545-552, 2009. | 10.1016/j.neuroscience.2008.10.033 | null | q-bio.NC q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In awake animals, the activity of the cerebral cortex is highly complex, with
neurons firing irregularly with apparent Poisson statistics. One way to
characterize this complexity is to take advantage of the high interconnectivity
of cerebral cortex and use intracellular recordings of cortical neurons, which
contain information about the activity of thousands of other cortical neurons.
Identifying the membrane potential (Vm) with a stochastic process enables the
extraction of important statistical signatures of this complex synaptic
activity. Typically, one estimates the total synaptic conductances (excitatory
and inhibitory) but this type of estimation requires at least two Vm levels and
therefore cannot be applied to single Vm traces. We propose here a method to
extract excitatory and inhibitory conductances (mean and variance) from single
Vm traces. This "VmT method" estimates conductance parameters using maximum
likelihood criteria, under the assumption that synaptic conductances are
described by Gaussian stochastic processes and are integrated by a passive
leaky membrane. The method is illustrated using models and is tested on
guinea-pig visual cortex neurons in vitro using dynamic-clamp experiments. The
VmT method holds promise for extracting conductances from single-trial
measurements, which has a high potential for in vivo applications.
| [
{
"created": "Mon, 21 Jul 2008 13:47:04 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Jan 2009 16:24:28 GMT",
"version": "v2"
}
] | 2009-04-29 | [
[
"Pospischil",
"Martin",
""
],
[
"Piwkowska",
"Zuzanna",
""
],
[
"Bal",
"Thierry",
""
],
[
"Destexhe",
"Alain",
""
]
] | In awake animals, the activity of the cerebral cortex is highly complex, with neurons firing irregularly with apparent Poisson statistics. One way to characterize this complexity is to take advantage of the high interconnectivity of cerebral cortex and use intracellular recordings of cortical neurons, which contain information about the activity of thousands of other cortical neurons. Identifying the membrane potential (Vm) with a stochastic process enables the extraction of important statistical signatures of this complex synaptic activity. Typically, one estimates the total synaptic conductances (excitatory and inhibitory) but this type of estimation requires at least two Vm levels and therefore cannot be applied to single Vm traces. We propose here a method to extract excitatory and inhibitory conductances (mean and variance) from single Vm traces. This "VmT method" estimates conductance parameters using maximum likelihood criteria, under the assumption that synaptic conductances are described by Gaussian stochastic processes and are integrated by a passive leaky membrane. The method is illustrated using models and is tested on guinea-pig visual cortex neurons in vitro using dynamic-clamp experiments. The VmT method holds promise for extracting conductances from single-trial measurements, which has a high potential for in vivo applications. |
q-bio/0606010 | Edouard Yeramian | E. Yeramian, E. Debonneuil | Probabilistic sequence alignments: realistic models with efficient
algorithms | null | null | 10.1103/PhysRevLett.98.078101 | null | q-bio.GN cond-mat.dis-nn cond-mat.stat-mech physics.bio-ph physics.comp-ph | null | Alignment algorithms usually rely on simplified models of gaps for
computational efficiency. Based on an isomorphism between alignments and
physical helix-coil models, we show using statistical mechanics that alignments
with realistic laws for gaps can be computed with fast algorithms. Improved
performance of probabilistic alignments with realistic models of gaps is
illustrated. Probabilistic and optimization formulations are compared, with
potential implications in many fields and perspectives for computationally
efficient extensions to Markov models with realistic long-range interactions.
| [
{
"created": "Fri, 9 Jun 2006 17:36:07 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Yeramian",
"E.",
""
],
[
"Debonneuil",
"E.",
""
]
] | Alignment algorithms usually rely on simplified models of gaps for computational efficiency. Based on an isomorphism between alignments and physical helix-coil models, we show using statistical mechanics that alignments with realistic laws for gaps can be computed with fast algorithms. Improved performance of probabilistic alignments with realistic models of gaps is illustrated. Probabilistic and optimization formulations are compared, with potential implications in many fields and perspectives for computationally efficient extensions to Markov models with realistic long-range interactions. |
2301.10768 | Dozie Iwuh | Dozie Iwuh | Can linking the recall system to addiction enable a better understanding
of the dopaminergic pathway? | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human addiction, as a learned behaviour, has been and continues to be treated
psychologically, with specific and timely interventions from Neuroscience. We
maintain that the treatment of human addiction can receive a further boost once
we firmly understand how it works at the quantum scale. This is mainly because
the dopaminergic pathway (DP) that is well elaborated in the brain of every
addict is connected to the memory pathway. This further implies that the
recall process in the brain of the addict, as regards his/her addiction, is
fully functional in line with the pleasure that arises from the element of
his/her addiction. This dopamine-led pathway shows itself as prominent in what
pertains to addiction because of the role it plays in reward. As a
neurotransmitter, dopamine flickers when reward is in the offing. It should be
noted that a full understanding of the dimensions of addiction in the human
person has not yet been attained; therefore, we seek to add to this ongoing
research by considering excerpts arising from Quantum Field Theory. We are
introducing excerpts from QFT because the DP is an attendant element in the
process of reward and motivation. In clear terms, we are suggesting that it all
begins with memory.
| [
{
"created": "Thu, 15 Dec 2022 11:50:25 GMT",
"version": "v1"
}
] | 2023-01-27 | [
[
"Iwuh",
"Dozie",
""
]
] | Human addiction, as a learned behaviour, has been and continues to be treated psychologically, with specific and timely interventions from Neuroscience. We maintain that the treatment of human addiction can receive a further boost once we firmly understand how it works at the quantum scale. This is mainly because the dopaminergic pathway (DP) that is well elaborated in the brain of every addict is connected to the memory pathway. This further implies that the recall process in the brain of the addict, as regards his/her addiction, is fully functional in line with the pleasure that arises from the element of his/her addiction. This dopamine-led pathway shows itself as prominent in what pertains to addiction because of the role it plays in reward. As a neurotransmitter, dopamine flickers when reward is in the offing. It should be noted that a full understanding of the dimensions of addiction in the human person has not yet been attained; therefore, we seek to add to this ongoing research by considering excerpts arising from Quantum Field Theory. We are introducing excerpts from QFT because the DP is an attendant element in the process of reward and motivation. In clear terms, we are suggesting that it all begins with memory. |
2111.08939 | Pamela K. Douglas | PK Douglas, DB Douglas | Reconsidering Spatial Priors In EEG Source Estimation: Does White Matter
Contribute to EEG Rhythms? | null | 2019 IEEE 7th International Winter Conference on Brain-Computer
Interface (BCI), 2019, pp. 1-12, | 10.1109/IWW-BCI.2019.8737307 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electroencephalogram (EEG) has been a core tool used in functional
neuroimaging in humans for nearly a hundred years. Because it is inexpensive,
easy to implement, and noninvasive, it also represents an excellent candidate
modality for use in the BCI setting. Nonetheless, a complete understanding of
how EEG measurements (voltage fluctuations) relate to information processing in
the brain remains somewhat elusive. A deeper understanding of the
neuroanatomical underpinnings of the EEG signal may help explain
inter-individual variability in evoked and induced potentials, which may
improve BCI therapies targeted to the individual. According to classic
biophysical models, EEG fluctuations are primarily a reflection of locally
synchronized neuronal oscillations within the gray matter oriented
approximately orthogonal to the scalp. In contrast, global models ignore local
signals due to dendritic processing, and suggest that propagation delays due to
white matter architecture are responsible for the EEG signal, and are capable
of explaining the coherence between numerous rhythms (e.g., alpha) at spatially
distinct areas of the scalp. Recently, combined local-global models suggest
that the EEG signal may reflect a superposition of local processing along with
global contributors including transduction along white matter tracts in the
brain. Incorporating both local and global (e.g., white matter) priors into EEG
source models may therefore improve source estimates. These models may also
help disentangle which aspects of the EEG signal are predicted to colocalize
spatially with measurements from functional MRI (fMRI). Here, we explore the
possibility that white matter conductivity contributes to EEG measurements via
a generative model based on classic axonal transduction models, and discuss its
potential implications for source estimation.
| [
{
"created": "Wed, 17 Nov 2021 07:08:37 GMT",
"version": "v1"
}
] | 2021-11-18 | [
[
"Douglas",
"PK",
""
],
[
"Douglas",
"DB",
""
]
] | Electroencephalogram (EEG) has been a core tool used in functional neuroimaging in humans for nearly a hundred years. Because it is inexpensive, easy to implement, and noninvasive, it also represents an excellent candidate modality for use in the BCI setting. Nonetheless, a complete understanding of how EEG measurements (voltage fluctuations) relate to information processing in the brain remains somewhat elusive. A deeper understanding of the neuroanatomical underpinnings of the EEG signal may help explain inter-individual variability in evoked and induced potentials, which may improve BCI therapies targeted to the individual. According to classic biophysical models, EEG fluctuations are primarily a reflection of locally synchronized neuronal oscillations within the gray matter oriented approximately orthogonal to the scalp. In contrast, global models ignore local signals due to dendritic processing, and suggest that propagation delays due to white matter architecture are responsible for the EEG signal, and are capable of explaining the coherence between numerous rhythms (e.g., alpha) at spatially distinct areas of the scalp. Recently, combined local-global models suggest that the EEG signal may reflect a superposition of local processing along with global contributors including transduction along white matter tracts in the brain. Incorporating both local and global (e.g., white matter) priors into EEG source models may therefore improve source estimates. These models may also help disentangle which aspects of the EEG signal are predicted to colocalize spatially with measurements from functional MRI (fMRI). Here, we explore the possibility that white matter conductivity contributes to EEG measurements via a generative model based on classic axonal transduction models, and discuss its potential implications for source estimation. |
0710.4181 | Molei Tao | Molei Tao | Thermodynamic and structural consensus principle predicts mature miRNA
location and structure, categorizes conserved interspecies miRNA subgroups,
and hints new possible mechanisms of miRNA maturization | 24 pages, 1 figure failed to update abstract last time | null | null | null | q-bio.BM q-bio.GN | null | Although conservation of thermodynamics is much less studied than of
sequences and structures, thermodynamic details are biophysical features
different from but as important as structural features. Following previous
research that revealed the important relationships between
thermodynamic features and miRNA maturization, this article applies
interspecies conservation of miRNA thermodynamics and structures to study miRNA
maturization. According to a thermodynamic and structural consensus principle,
miRBase is categorized by conservation subgroups, which imply various
functions. These subgroups are divided without the introduction of functional
information. This suggests the consistency between the two processes of miRNA
maturization and functioning. Different from prevailing methods which predict
extended miRNA precursors, a learning-based algorithm is proposed to predict
~22bp mature parts of 2780 test miRNA genes in 44 species with a rate of 79.4%.
This is the first attempt at a general interspecies prediction of mature
miRNAs. Suboptimal structures that most fit the consensus thermodynamic and
structural profiles are chosen to improve structure prediction. Distribution of
miRNA locations on corresponding pri-miRNA stem-loop structures is then
studied. Existing findings on the Drosha cleavage site do not hold generally across
species. Instead, the distance between mature miRNA and center loop normalized
by stem length is a more conserved structural feature in animals, and the
normalized distance between mature miRNA and ss-RNA tail is the counterpart in
plants. This suggests two possibly-updating mechanisms of miRNA maturization in
animals and plants. All in all, conservation of thermodynamics together with
other features is shown to be closely related to miRNA maturization.
| [
{
"created": "Tue, 23 Oct 2007 03:44:02 GMT",
"version": "v1"
}
] | 2007-10-24 | [
[
"Tao",
"Molei",
""
]
] | Although conservation of thermodynamics is much less studied than of sequences and structures, thermodynamic details are biophysical features different from but as important as structural features. Following previous research that revealed the important relationships between thermodynamic features and miRNA maturization, this article applies interspecies conservation of miRNA thermodynamics and structures to study miRNA maturization. According to a thermodynamic and structural consensus principle, miRBase is categorized by conservation subgroups, which imply various functions. These subgroups are divided without the introduction of functional information. This suggests the consistency between the two processes of miRNA maturization and functioning. Different from prevailing methods which predict extended miRNA precursors, a learning-based algorithm is proposed to predict ~22bp mature parts of 2780 test miRNA genes in 44 species with a rate of 79.4%. This is the first attempt at a general interspecies prediction of mature miRNAs. Suboptimal structures that most fit the consensus thermodynamic and structural profiles are chosen to improve structure prediction. Distribution of miRNA locations on corresponding pri-miRNA stem-loop structures is then studied. Existing findings on the Drosha cleavage site do not hold generally across species. Instead, the distance between mature miRNA and center loop normalized by stem length is a more conserved structural feature in animals, and the normalized distance between mature miRNA and ss-RNA tail is the counterpart in plants. This suggests two possibly-updating mechanisms of miRNA maturization in animals and plants. All in all, conservation of thermodynamics together with other features is shown to be closely related to miRNA maturization. |
1701.05627 | Caroline Holmes | Caroline Holmes, Mahan Ghafari, Abbas Anzar, Varun Saravanan, Ilya
Nemenman | Luria-Delbruck, revisited: The classic experiment does not rule out
Lamarckian evolution | 13 pages, 5 figures. Presented at the 10th q-bio Conference, July
27-30, 2016, Nashville, TN | null | 10.1088/1478-3975/aa8230 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We re-examined data from the classic Luria-Delbruck fluctuation experiment,
which is often credited with establishing a Darwinian basis for evolution. We
argue that, for the Lamarckian model of evolution to be ruled out by the
experiment, the experiment must favor pure Darwinian evolution over both the
Lamarckian model and a model that allows both Darwinian and Lamarckian
mechanisms. Analysis of the combined model was not performed in the original
1943 paper. The Luria-Delbruck paper also did not consider the possibility of
neither model fitting the experiment. Using Bayesian model selection, we find
that the Luria-Delbruck experiment, indeed, favors the Darwinian evolution over
purely Lamarckian. However, our analysis does not rule out the combined model,
and hence cannot rule out Lamarckian contributions to the evolutionary
dynamics.
| [
{
"created": "Thu, 19 Jan 2017 22:23:09 GMT",
"version": "v1"
}
] | 2017-09-13 | [
[
"Holmes",
"Caroline",
""
],
[
"Ghafari",
"Mahan",
""
],
[
"Anzar",
"Abbas",
""
],
[
"Saravanan",
"Varun",
""
],
[
"Nemenman",
"Ilya",
""
]
] | We re-examined data from the classic Luria-Delbruck fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms. Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbruck paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbruck experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics. |
2203.05767 | Partha Dutta | Subhendu Bhandary, Debabrata Biswas, Tanmoy Banerjee, Partha Sharathi
Dutta | Effects of time-varying habitat connectivity on metacommunity
persistence | Comments are welcome | null | 10.1103/PhysRevE.106.014309 | null | q-bio.PE nlin.CD | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Network structure or connectivity pattern is critical in determining
collective dynamics among interacting species in ecosystems. Conventional
research on species persistence in spatial populations has focused on static
network structure, though most real network structures change in time, forming
time-varying networks. This raises the question: in metacommunities, how does
the pattern of synchrony vary with the temporal evolution of the network structure?
The synchronous dynamics among species are known to reduce metacommunity
persistence. Here, we consider a time-varying metacommunity small-world network
consisting of a chaotic three-species food chain oscillator in each patch/node.
The rate of change in the network connectivity is determined by the natural
frequency or its subharmonics of the constituent oscillator to allow sufficient
time for the evolution of species in between successive rewirings. We find that
over a range of coupling strengths and rewiring periods, even higher rewiring
probabilities drive a network from asynchrony towards synchrony. Moreover, in
networks with a small rewiring period, an increase in average degree (more
connected networks) pushes the asynchronous dynamics to synchrony. On the
contrary, in networks with a low average degree, a higher rewiring period
drives the synchronous dynamics to asynchrony resulting in increased species
persistence. Our results are also supported by calculations of the synchronization
time and are robust across other ecosystem models. Overall, our study opens the
possibility of developing temporal connectivity strategies to increase species
persistence in ecological networks.
| [
{
"created": "Fri, 11 Mar 2022 06:07:43 GMT",
"version": "v1"
}
] | 2022-08-10 | [
[
"Bhandary",
"Subhendu",
""
],
[
"Biswas",
"Debabrata",
""
],
[
"Banerjee",
"Tanmoy",
""
],
[
"Dutta",
"Partha Sharathi",
""
]
] | Network structure or connectivity pattern is critical in determining collective dynamics among interacting species in ecosystems. Conventional research on species persistence in spatial populations has focused on static network structure, though most real network structures change in time, forming time-varying networks. This raises the question: in metacommunities, how does the pattern of synchrony vary with the temporal evolution of the network structure? The synchronous dynamics among species are known to reduce metacommunity persistence. Here, we consider a time-varying metacommunity small-world network consisting of a chaotic three-species food chain oscillator in each patch/node. The rate of change in the network connectivity is determined by the natural frequency or its subharmonics of the constituent oscillator to allow sufficient time for the evolution of species in between successive rewirings. We find that over a range of coupling strengths and rewiring periods, even higher rewiring probabilities drive a network from asynchrony towards synchrony. Moreover, in networks with a small rewiring period, an increase in average degree (more connected networks) pushes the asynchronous dynamics to synchrony. On the contrary, in networks with a low average degree, a higher rewiring period drives the synchronous dynamics to asynchrony resulting in increased species persistence. Our results are also supported by calculations of the synchronization time and are robust across other ecosystem models. Overall, our study opens the possibility of developing temporal connectivity strategies to increase species persistence in ecological networks. |
1301.0050 | Urs K\"oster | Urs K\"oster and Jascha Sohl-Dickstein and Charles M. Gray and Bruno
A. Olshausen | Higher Order Correlations within Cortical Layers Dominate Functional
Connectivity in Microcolumns | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report on simultaneous recordings from cells in all layers of visual
cortex and models developed to capture the higher order structure of population
spiking activity. Specifically, we use Ising, Restricted Boltzmann Machine
(RBM) and semi-Restricted Boltzmann Machine (sRBM) models to reveal laminar
patterns of activity. While the Ising model describes only pairwise couplings,
the RBM and sRBM capture higher-order dependencies using hidden units. Applied
to 32-channel polytrode data recorded from cat visual cortex, the higher-order
models discover functional connectivity preferentially linking groups of
cells within a cortical layer. Both RBM and sRBM models outperform Ising models
in log-likelihood. Additionally, we train all three models on spatiotemporal
sequences of states, exposing temporal structure and allowing us to predict
spiking from network history. This demonstrates the importance of modeling
higher order interactions across space and time when characterizing activity
in cortical networks.
| [
{
"created": "Tue, 1 Jan 2013 03:06:46 GMT",
"version": "v1"
}
] | 2013-01-03 | [
[
"Köster",
"Urs",
""
],
[
"Sohl-Dickstein",
"Jascha",
""
],
[
"Gray",
"Charles M.",
""
],
[
"Olshausen",
"Bruno A.",
""
]
] | We report on simultaneous recordings from cells in all layers of visual cortex and models developed to capture the higher order structure of population spiking activity. Specifically, we use Ising, Restricted Boltzmann Machine (RBM) and semi-Restricted Boltzmann Machine (sRBM) models to reveal laminar patterns of activity. While the Ising model describes only pairwise couplings, the RBM and sRBM capture higher-order dependencies using hidden units. Applied to 32-channel polytrode data recorded from cat visual cortex, the higher-order models discover functional connectivity preferentially linking groups of cells within a cortical layer. Both RBM and sRBM models outperform Ising models in log-likelihood. Additionally, we train all three models on spatiotemporal sequences of states, exposing temporal structure and allowing us to predict spiking from network history. This demonstrates the importance of modeling higher order interactions across space and time when characterizing activity in cortical networks. |
1708.00632 | Tim Rogers | Qian Yang, Tim Rogers, Jonathan H P Dawes | Demographic noise slows down cycles of dominance | 25 pages, 11 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the phenomenon of cyclic dominance in the paradigmatic
Rock--Paper--Scissors model, as occurring in both stochastic individual-based
models of finite populations and in the deterministic replicator equations. The
mean-field replicator equations are valid in the limit of large populations
and, in the presence of mutation and unbalanced payoffs, they exhibit an
attracting limit cycle. The period of this cycle depends on the rate of
mutation; specifically, the period grows logarithmically as the mutation rate
tends to zero. We find that this behaviour is not reproduced in stochastic
simulations with a fixed finite population size. Instead, demographic noise
present in the individual-based model dramatically slows down the progress of
the limit cycle, with the typical period growing as the reciprocal of the
mutation rate. Here we develop a theory that explains these scaling regimes and
delineates them in terms of population size and mutation rate. We identify a
further intermediate regime in which we construct a stochastic differential
equation model describing the transition between stochastically-dominated and
mean-field behaviour.
| [
{
"created": "Wed, 2 Aug 2017 08:12:39 GMT",
"version": "v1"
}
] | 2017-08-03 | [
[
"Yang",
"Qian",
""
],
[
"Rogers",
"Tim",
""
],
[
"Dawes",
"Jonathan H P",
""
]
] | We study the phenomenon of cyclic dominance in the paradigmatic Rock--Paper--Scissors model, as occurring in both stochastic individual-based models of finite populations and in the deterministic replicator equations. The mean-field replicator equations are valid in the limit of large populations and, in the presence of mutation and unbalanced payoffs, they exhibit an attracting limit cycle. The period of this cycle depends on the rate of mutation; specifically, the period grows logarithmically as the mutation rate tends to zero. We find that this behaviour is not reproduced in stochastic simulations with a fixed finite population size. Instead, demographic noise present in the individual-based model dramatically slows down the progress of the limit cycle, with the typical period growing as the reciprocal of the mutation rate. Here we develop a theory that explains these scaling regimes and delineates them in terms of population size and mutation rate. We identify a further intermediate regime in which we construct a stochastic differential equation model describing the transition between stochastically-dominated and mean-field behaviour. |
1202.3800 | Jozsef Farkas | Azmy S. Ackleh and Jozsef Z. Farkas | On the net reproduction rate of continuous structured populations with
distributed states at birth | To appear in Computers and Mathematics with Applications | Computers and Mathematics with Applications 66 (2013) | 10.1016/j.camwa.2013.04.010 | null | q-bio.PE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a nonlinear structured population model with a distributed
recruitment term. The question of the existence of non-trivial steady states
can be treated (at least!) in three different ways. One approach is to study
spectral properties of a parametrized family of unbounded operators. The
alternative approach, on which we focus here, is based on the reformulation of
the problem as an integral equation. In this context we introduce a density
dependent net reproduction rate and discuss its relationship to a biologically
meaningful quantity. Finally, we briefly discuss a third approach, which is
based on the finite rank approximation of the recruitment operator.
| [
{
"created": "Thu, 16 Feb 2012 21:27:58 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Apr 2013 10:57:32 GMT",
"version": "v2"
}
] | 2019-03-06 | [
[
"Ackleh",
"Azmy S.",
""
],
[
"Farkas",
"Jozsef Z.",
""
]
] | We consider a nonlinear structured population model with a distributed recruitment term. The question of the existence of non-trivial steady states can be treated (at least!) in three different ways. One approach is to study spectral properties of a parametrized family of unbounded operators. The alternative approach, on which we focus here, is based on the reformulation of the problem as an integral equation. In this context we introduce a density dependent net reproduction rate and discuss its relationship to a biologically meaningful quantity. Finally, we briefly discuss a third approach, which is based on the finite rank approximation of the recruitment operator. |
2105.01545 | Bin Liu | Bin Liu and Grigorios Tsoumakas | Optimizing Area Under the Curve Measures via Matrix Factorization for
Predicting Drug-Target Interaction with Multiple Similarities | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In drug discovery, identifying drug-target interactions (DTIs) via
experimental approaches is a tedious and expensive procedure. Computational
methods efficiently predict DTIs and recommend a small part of potential
interacting pairs for further experimental confirmation, accelerating the drug
discovery process. Although it has been shown that fusing heterogeneous drug
and target similarities can improve the prediction ability, the existing
similarity combination methods ignore the interaction consistency for neighbour
entities which is more crucial for the DTI prediction model. Furthermore, area
under the precision-recall curve (AUPR) that emphasizes the accuracy of
top-ranked pairs and area under the receiver operating characteristic curve
(AUC) that heavily punishes the existence of low ranked interacting pairs are
two widely used evaluation metrics in DTI prediction. However, the two metrics
are seldom considered as losses within existing DTI prediction methods. This
paper first proposes two matrix factorization (MF) methods that optimize AUPR
and AUC using convex surrogate losses respectively, and then develops an
ensemble MF approach that takes advantage of the two area under the curve metrics
by combining the two single-metric-based MF models. All three proposed approaches
incorporate a novel local interaction consistency aware similarity interaction
method to generate fused drug and target similarities that preserve vital
information from the more reliable view. Experimental results over five
datasets under different prediction settings show that the proposed methods
outperform various competitors in terms of the metric(s) they optimize. In
addition, the validation on the top ranked novel predictions confirms the
ability of our methods to discover potential new DTIs.
| [
{
"created": "Sat, 1 May 2021 14:48:32 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jan 2022 04:51:05 GMT",
"version": "v2"
}
] | 2022-01-19 | [
[
"Liu",
"Bin",
""
],
[
"Tsoumakas",
"Grigorios",
""
]
] | In drug discovery, identifying drug-target interactions (DTIs) via experimental approaches is a tedious and expensive procedure. Computational methods efficiently predict DTIs and recommend a small part of potential interacting pairs for further experimental confirmation, accelerating the drug discovery process. Although it has been shown that fusing heterogeneous drug and target similarities can improve the prediction ability, the existing similarity combination methods ignore the interaction consistency for neighbour entities which is more crucial for the DTI prediction model. Furthermore, area under the precision-recall curve (AUPR) that emphasizes the accuracy of top-ranked pairs and area under the receiver operating characteristic curve (AUC) that heavily punishes the existence of low ranked interacting pairs are two widely used evaluation metrics in DTI prediction. However, the two metrics are seldom considered as losses within existing DTI prediction methods. This paper first proposes two matrix factorization (MF) methods that optimize AUPR and AUC using convex surrogate losses respectively, and then develops an ensemble MF approach that takes advantage of the two area under the curve metrics by combining the two single-metric-based MF models. All three proposed approaches incorporate a novel local interaction consistency aware similarity interaction method to generate fused drug and target similarities that preserve vital information from the more reliable view. Experimental results over five datasets under different prediction settings show that the proposed methods outperform various competitors in terms of the metric(s) they optimize. In addition, the validation on the top ranked novel predictions confirms the ability of our methods to discover potential new DTIs. |
1909.00651 | Istvan Kiss Z | Nicos Georgiou, Istv\'an Z. Kiss, P\'eter Simon | Theoretical and numerical considerations of the assumptions behind
triple closures in epidemic models on networks | 20 pages, 5 figures | null | null | null | q-bio.QM physics.soc-ph q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Networks are widely used to model the contact structure within a population
and in the resulting models of disease spread. While networks provide a high
degree of realism, the analysis of the exact model is out of reach and even
numerical methods fail for modest network size. Hence, mean-field models (e.g.
pairwise) focusing on describing the evolution of some summary statistics from
the exact model gained a lot of traction over the last few decades. In this
paper we revisit the problem of deriving triple closures for pairwise models
and we investigate in detail the assumptions behind some of the well-known
closures as well as their validity. Using a top-down approach we start at the
level of the entire graph and work down to the level of triples and combine
this with information around nodes and pairs. We use our approach to derive
many of the existing closures and propose new ones and theoretically connect
the two well-studied models of multinomial link and Poisson link selection. The
theoretical work is backed up by numerical examples to highlight where the
commonly used assumptions may fail and provide some recommendations for how to
choose the most appropriate closure when using graphs with no or modest degree
heterogeneity.
| [
{
"created": "Mon, 2 Sep 2019 10:28:47 GMT",
"version": "v1"
}
] | 2019-09-15 | [
[
"Georgiou",
"Nicos",
""
],
[
"Kiss",
        "István Z.",
""
],
[
"Simon",
        "Péter",
""
]
] | Networks are widely used to model the contact structure within a population and in the resulting models of disease spread. While networks provide a high degree of realism, the analysis of the exact model is out of reach and even numerical methods fail for modest network size. Hence, mean-field models (e.g. pairwise) focusing on describing the evolution of some summary statistics from the exact model gained a lot of traction over the last few decades. In this paper we revisit the problem of deriving triple closures for pairwise models and we investigate in detail the assumptions behind some of the well-known closures as well as their validity. Using a top-down approach we start at the level of the entire graph and work down to the level of triples and combine this with information around nodes and pairs. We use our approach to derive many of the existing closures and propose new ones and theoretically connect the two well-studied models of multinomial link and Poisson link selection. The theoretical work is backed up by numerical examples to highlight where the commonly used assumptions may fail and provide some recommendations for how to choose the most appropriate closure when using graphs with no or modest degree heterogeneity. |
2012.00551 | Sajid Muhaimin Choudhury | Mohammad Muntasir Hassan, Farhan Sadik Sium, Fariba Islam, Sajid
Muhaimin Choudhury | Plasmonic metamaterial based virus detection system: a review | 18 pages, 8 figures | Sensing and Biosensing Research 33 (2021) | 10.1016/j.sbsr.2021.100429 | null | q-bio.QM cond-mat.mtrl-sci physics.optics | http://creativecommons.org/licenses/by/4.0/ | Our atmosphere is constantly changing and new pathogens are erupting now and
then and the existing pathogens are mutating continuously. Some of these
pathogens, such as SARS-CoV-2, become so deadly that they put the whole
technological advancement of healthcare under challenge. Within this very
decade several other deadly virus outbreaks were witnessed by humans such as
Zika virus, Ebola virus, MERS-coronavirus etc. Though conventional techniques
have succeeded in detecting these viruses to some extent, these techniques are
time-consuming, costly, and require trained human resources. Plasmonic
metamaterial-based biosensors might pave the way to low-cost rapid virus
detection. This review therefore discusses in detail the latest developments in
plasmonics and metamaterial-based biosensors for virus, viral particles and
antigen detection and the future direction of research in this field. Emergence
of quantum properties in biosensing, application of machine learning,
artificial intelligence and novel materials in biosensing is also discussed in
brief.
| [
{
"created": "Sun, 29 Nov 2020 02:25:26 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Dec 2020 19:16:26 GMT",
"version": "v2"
},
{
"created": "Wed, 5 May 2021 10:12:44 GMT",
"version": "v3"
},
{
"created": "Thu, 20 May 2021 10:19:55 GMT",
"version": "v4"
},
{
"cr... | 2021-05-26 | [
[
"Hassan",
"Mohammad Muntasir",
""
],
[
"Sium",
"Farhan Sadik",
""
],
[
"Islam",
"Fariba",
""
],
[
"Choudhury",
"Sajid Muhaimin",
""
]
] | Our atmosphere is constantly changing and new pathogens are erupting now and then and the existing pathogens are mutating continuously. Some of these pathogens, such as SARS-CoV-2, become so deadly that they put the whole technological advancement of healthcare under challenge. Within this very decade several other deadly virus outbreaks were witnessed by humans such as Zika virus, Ebola virus, MERS-coronavirus etc. Though conventional techniques have succeeded in detecting these viruses to some extent, these techniques are time-consuming, costly, and require trained human resources. Plasmonic metamaterial-based biosensors might pave the way to low-cost rapid virus detection. This review therefore discusses in detail the latest developments in plasmonics and metamaterial-based biosensors for virus, viral particles and antigen detection and the future direction of research in this field. Emergence of quantum properties in biosensing, application of machine learning, artificial intelligence and novel materials in biosensing is also discussed in brief. |
2102.03849 | Igor Ovchinnikov V. | Igor V. Ovchinnikov and Skirmantas Janusonis | Toward an Effective Theory of Neurodynamics: Topological Supersymmetry
Breaking, Network Coarse-Graining, and Instanton Interaction | revtex 4-1, 2 figures | null | null | null | q-bio.NC math-ph math.MP nlin.AO nlin.PS | http://creativecommons.org/licenses/by/4.0/ | Experimental research has shown that the brain's fast electrochemical
dynamics, or neurodynamics (ND), is strongly stochastic, chaotic, and instanton
(neuroavalanche)-dominated. It is also partly scale-invariant which has been
loosely associated with critical phenomena. It has been recently demonstrated
that the supersymmetric theory of stochastics (STS) offers a theoretical
framework that can explain all of the above ND features. In the STS, all
stochastic models possess a topological supersymmetry (TS), and the
"criticality" of ND and similar stochastic processes is associated with
noise-induced, spontaneous breakdown of this TS (due to instanton condensation
near the border with ordinary chaos in which TS is broken by
non-integrability). Here, we propose a new approach that may be useful for the
construction of low-energy effective theories of ND. Its centerpiece is a
coarse-graining procedure of neural networks based on simplicial complexes and
the concept of the "enveloping lattice." It represents a neural network as a
continuous, high-dimensional base space whose rich topology reflects that of
the original network. The reduced one-instanton state space is determined by
the de Rham cohomology classes of this base space, and the effective ND
dynamics can be recognized as interactions of the instantons in the spirit of
the Segal-Atiyah formalism.
| [
{
"created": "Sun, 7 Feb 2021 17:06:20 GMT",
"version": "v1"
}
] | 2021-02-09 | [
[
"Ovchinnikov",
"Igor V.",
""
],
[
"Janusonis",
"Skirmantas",
""
]
] | Experimental research has shown that the brain's fast electrochemical dynamics, or neurodynamics (ND), is strongly stochastic, chaotic, and instanton (neuroavalanche)-dominated. It is also partly scale-invariant which has been loosely associated with critical phenomena. It has been recently demonstrated that the supersymmetric theory of stochastics (STS) offers a theoretical framework that can explain all of the above ND features. In the STS, all stochastic models possess a topological supersymmetry (TS), and the "criticality" of ND and similar stochastic processes is associated with noise-induced, spontaneous breakdown of this TS (due to instanton condensation near the border with ordinary chaos in which TS is broken by non-integrability). Here, we propose a new approach that may be useful for the construction of low-energy effective theories of ND. Its centerpiece is a coarse-graining procedure of neural networks based on simplicial complexes and the concept of the "enveloping lattice." It represents a neural network as a continuous, high-dimensional base space whose rich topology reflects that of the original network. The reduced one-instanton state space is determined by the de Rham cohomology classes of this base space, and the effective ND dynamics can be recognized as interactions of the instantons in the spirit of the Segal-Atiyah formalism. |
2202.06159 | Justin Jude | Justin Jude, Matthew G Perich, Lee E Miller, Matthias H Hennig | Robust alignment of cross-session recordings of neural population
activity by behaviour via unsupervised domain adaptation | null | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Neural population activity relating to behaviour is assumed to be inherently
low-dimensional despite the observed high dimensionality of data recorded using
multi-electrode arrays. Therefore, predicting behaviour from neural population
recordings has been shown to be most effective when using latent variable
models. Over time however, the activity of single neurons can drift, and
different neurons will be recorded due to movement of implanted neural probes.
This means that a decoder trained to predict behaviour on one day performs
worse when tested on a different day. On the other hand, evidence suggests that
the latent dynamics underlying behaviour may be stable even over months and
years. Based on this idea, we introduce a model capable of inferring
behaviourally relevant latent dynamics from previously unseen data recorded
from the same animal, without any need for decoder recalibration. We show that
unsupervised domain adaptation combined with a sequential variational
autoencoder, trained on several sessions, can achieve good generalisation to
unseen data and correctly predict behaviour where conventional methods fail.
Our results further support the hypothesis that behaviour-related neural
dynamics are low-dimensional and stable over time, and will enable more
effective and flexible use of brain computer interface technologies.
| [
{
"created": "Sat, 12 Feb 2022 22:17:30 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Feb 2022 14:13:59 GMT",
"version": "v2"
}
] | 2022-02-17 | [
[
"Jude",
"Justin",
""
],
[
"Perich",
"Matthew G",
""
],
[
"Miller",
"Lee E",
""
],
[
"Hennig",
"Matthias H",
""
]
] | Neural population activity relating to behaviour is assumed to be inherently low-dimensional despite the observed high dimensionality of data recorded using multi-electrode arrays. Therefore, predicting behaviour from neural population recordings has been shown to be most effective when using latent variable models. Over time however, the activity of single neurons can drift, and different neurons will be recorded due to movement of implanted neural probes. This means that a decoder trained to predict behaviour on one day performs worse when tested on a different day. On the other hand, evidence suggests that the latent dynamics underlying behaviour may be stable even over months and years. Based on this idea, we introduce a model capable of inferring behaviourally relevant latent dynamics from previously unseen data recorded from the same animal, without any need for decoder recalibration. We show that unsupervised domain adaptation combined with a sequential variational autoencoder, trained on several sessions, can achieve good generalisation to unseen data and correctly predict behaviour where conventional methods fail. Our results further support the hypothesis that behaviour-related neural dynamics are low-dimensional and stable over time, and will enable more effective and flexible use of brain computer interface technologies. |
1911.11900 | Samuel Nkrumah | Samuel Nkrumah | Protein Folding from the Perspective of Chaperone Action | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Predicting the three-dimensional (3D) functional structures of proteins
remains an important computational milestone in molecular biology to be
achieved. This feat is hinged on a clear understanding of the mechanism which
proteins use to fold into their native structures. Since Levinthal's paradox,
there has been a lot of progress in understanding this mechanism. Most of the
earlier attempts were caught between assigning either hydrophobic interactions
or hydrogen bonding as the dominant folding force. However, a consensus now
seems to be emerging about hydrogen bonding being a stronger force.
Interestingly, a view from chaperone action may further throw some light on the
nature of the folding mechanism. Thus the very mechanisms which prevent protein
aggregation and misfolding could help us have a better understanding of the
folding mechanism itself.
| [
{
"created": "Wed, 27 Nov 2019 00:57:53 GMT",
"version": "v1"
}
] | 2019-11-28 | [
[
"Nkrumah",
"Samuel",
""
]
] | Predicting the three-dimensional (3D) functional structures of proteins remains an important computational milestone in molecular biology to be achieved. This feat is hinged on a clear understanding of the mechanism which proteins use to fold into their native structures. Since Levinthal's paradox, there has been a lot of progress in understanding this mechanism. Most of the earlier attempts were caught between assigning either hydrophobic interactions or hydrogen bonding as the dominant folding force. However, a consensus now seems to be emerging about hydrogen bonding being a stronger force. Interestingly, a view from chaperone action may further throw some light on the nature of the folding mechanism. Thus the very mechanisms which prevent protein aggregation and misfolding could help us have a better understanding of the folding mechanism itself. |
1310.1853 | Arne Traulsen | Benedikt Bauer, Reiner Siebert, and Arne Traulsen | Cancer initiation with epistatic interactions between driver and
passenger mutations | null | Journal of Theoretical Biology 358, 52-60 (2014) | 10.1016/j.jtbi.2014.05.018 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the dynamics of cancer initiation in a mathematical model with
one driver mutation and several passenger mutations. Our analysis is based on a
multi type branching process: We model individual cells which can either divide
or undergo apoptosis. In case of a cell division, the two daughter cells can
mutate, which potentially confers a change in fitness to the cell. In contrast
to previous models, the change in fitness induced by the driver mutation
depends on the genetic context of the cell, in our case on the number of
passenger mutations. The passenger mutations themselves have no or only a very
small impact on the cell's fitness. While our model is not designed as a
specific model for a particular cancer, the underlying idea is motivated by
clinical and experimental observations in Burkitt Lymphoma. In this tumor, the
hallmark mutation leads to deregulation of the MYC oncogene which increases the
rate of apoptosis, but also the proliferation rate of cells. This increase in
the rate of apoptosis hence needs to be overcome by mutations affecting
apoptotic pathways, naturally leading to an epistatic fitness landscape. This
model shows a very interesting dynamical behavior which is distinct from the
dynamics of cancer initiation in the absence of epistasis. Since the driver
mutation is deleterious to a cell with only a few passenger mutations, there is
a period of stasis in the number of cells until a clone of cells with enough
passenger mutations emerges. Only when the driver mutation occurs in one of
those cells, the cell population starts to grow rapidly.
| [
{
"created": "Fri, 4 Oct 2013 06:45:30 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Apr 2015 07:21:35 GMT",
"version": "v2"
}
] | 2015-04-14 | [
[
"Bauer",
"Benedikt",
""
],
[
"Siebert",
"Reiner",
""
],
[
"Traulsen",
"Arne",
""
]
] | We investigate the dynamics of cancer initiation in a mathematical model with one driver mutation and several passenger mutations. Our analysis is based on a multi type branching process: We model individual cells which can either divide or undergo apoptosis. In case of a cell division, the two daughter cells can mutate, which potentially confers a change in fitness to the cell. In contrast to previous models, the change in fitness induced by the driver mutation depends on the genetic context of the cell, in our case on the number of passenger mutations. The passenger mutations themselves have no or only a very small impact on the cell's fitness. While our model is not designed as a specific model for a particular cancer, the underlying idea is motivated by clinical and experimental observations in Burkitt Lymphoma. In this tumor, the hallmark mutation leads to deregulation of the MYC oncogene which increases the rate of apoptosis, but also the proliferation rate of cells. This increase in the rate of apoptosis hence needs to be overcome by mutations affecting apoptotic pathways, naturally leading to an epistatic fitness landscape. This model shows a very interesting dynamical behavior which is distinct from the dynamics of cancer initiation in the absence of epistasis. Since the driver mutation is deleterious to a cell with only a few passenger mutations, there is a period of stasis in the number of cells until a clone of cells with enough passenger mutations emerges. Only when the driver mutation occurs in one of those cells, the cell population starts to grow rapidly. |
2004.04903 | Takuro Shimaya | Takuro Shimaya, Reiko Okura, Yuichi Wakamoto and Kazumasa A. Takeuchi | Scale invariance of cell size fluctuations in starving bacteria | 15+23 pages, 5+11 figures and 2 tables | Communications Physics 4, 238 (2021) | 10.1038/s42005-021-00739-5 | null | q-bio.CB cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In stable environments, cell size fluctuations are thought to be governed by
simple physical principles, as suggested by recent findings of scaling
properties. Here, by developing a microfluidic device and using E. coli, we
investigate the response of cell size fluctuations against starvation. By
abruptly switching to non-nutritious medium, we find that the cell size
distribution changes but satisfies scale invariance: the rescaled distribution
is kept unchanged and determined by the growth condition before starvation.
These findings are underpinned by a model based on cell growth and cell cycle.
Further, we numerically determine the range of validity of the scale invariance
over various characteristic times of the starvation process, and find the
violation of the scale invariance for slow starvation. Our results, combined
with theoretical arguments, suggest the relevance of the multifork replication,
which helps retain information about cell cycle states and may thus result in
the scale invariance.
| [
{
"created": "Fri, 10 Apr 2020 04:22:50 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jun 2020 10:00:45 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Sep 2020 02:21:15 GMT",
"version": "v3"
},
{
"created": "Thu, 24 Jun 2021 07:43:59 GMT",
"version": "v4"
},
{
"c... | 2021-11-12 | [
[
"Shimaya",
"Takuro",
""
],
[
"Okura",
"Reiko",
""
],
[
"Wakamoto",
"Yuichi",
""
],
[
"Takeuchi",
"Kazumasa A.",
""
]
] | In stable environments, cell size fluctuations are thought to be governed by simple physical principles, as suggested by recent findings of scaling properties. Here, by developing a microfluidic device and using E. coli, we investigate the response of cell size fluctuations against starvation. By abruptly switching to non-nutritious medium, we find that the cell size distribution changes but satisfies scale invariance: the rescaled distribution is kept unchanged and determined by the growth condition before starvation. These findings are underpinned by a model based on cell growth and cell cycle. Further, we numerically determine the range of validity of the scale invariance over various characteristic times of the starvation process, and find the violation of the scale invariance for slow starvation. Our results, combined with theoretical arguments, suggest the relevance of the multifork replication, which helps retain information about cell cycle states and may thus result in the scale invariance. |
1801.01399 | Ricardo Martinez-Garcia | Ricardo Martinez-Garcia, Cristobal Lopez | From scale-dependent feedbacks to long-range competition alone: a short
review on pattern-forming mechanisms in arid ecosystems | 17 pages, 4 figures | null | null | null | q-bio.PE nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vegetation patterns are abundant in arid and semiarid ecosystems, but how
they form remains unclear. One of the most widespread theories lies in the
existence of scale-dependent feedbacks (SDF) in plant-to-plant and plant-water
interactions. Short distances are dominated by facilitative interactions,
whereas competitive interactions dominate at larger scales. These feedbacks
shape spatially inhomogeneous distributions of water that ultimately drive the
emergence of patterns of vegetation. Even though the presence of facilitative
and competitive interactions is clear, they are often hard to disentangle in
the field, and therefore their relevance in vegetation pattern formation is
still disputable. Here, we review the biological processes that have been
proposed to explain pattern formation in arid ecosystems and how they have been
implemented in mathematical models. We conclude by discussing the existence of
similar structures in different biological and physical systems.
| [
{
"created": "Thu, 4 Jan 2018 15:20:26 GMT",
"version": "v1"
}
] | 2018-01-08 | [
[
"Martinez-Garcia",
"Ricardo",
""
],
[
"Lopez",
"Cristobal",
""
]
] | Vegetation patterns are abundant in arid and semiarid ecosystems, but how they form remains unclear. One of the most widespread theories lies in the existence of scale-dependent feedbacks (SDF) in plant-to-plant and plant-water interactions. Short distances are dominated by facilitative interactions, whereas competitive interactions dominate at larger scales. These feedbacks shape spatially inhomogeneous distributions of water that ultimately drive the emergence of patterns of vegetation. Even though the presence of facilitative and competitive interactions is clear, they are often hard to disentangle in the field, and therefore their relevance in vegetation pattern formation is still disputable. Here, we review the biological processes that have been proposed to explain pattern formation in arid ecosystems and how they have been implemented in mathematical models. We conclude by discussing the existence of similar structures in different biological and physical systems. |
1612.03483 | Cengiz Pehlevan | Reza Abbasi-Asl, Cengiz Pehlevan, Bin Yu, and Dmitri B. Chklovskii | Do retinal ganglion cells project natural scenes to their principal
subspace and whiten them? | 2016 Asilomar Conference on Signals, Systems and Computers | null | 10.1109/ACSSC.2016.7869658 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several theories of early sensory processing suggest that it whitens sensory
stimuli. Here, we test three key predictions of the whitening theory using
recordings from 152 ganglion cells in salamander retina responding to natural
movies. We confirm the previous finding that firing rates of ganglion cells are
less correlated compared to natural scenes, although significant correlations
remain. We show that while the power spectrum of ganglion cells decays less
steeply than that of natural scenes, it is not completely flattened. Finally,
we find evidence that only the top principal components of the visual stimulus
are transmitted.
| [
{
"created": "Sun, 11 Dec 2016 21:28:23 GMT",
"version": "v1"
}
] | 2017-03-21 | [
[
"Abbasi-Asl",
"Reza",
""
],
[
"Pehlevan",
"Cengiz",
""
],
[
"Yu",
"Bin",
""
],
[
"Chklovskii",
"Dmitri B.",
""
]
] | Several theories of early sensory processing suggest that it whitens sensory stimuli. Here, we test three key predictions of the whitening theory using recordings from 152 ganglion cells in salamander retina responding to natural movies. We confirm the previous finding that firing rates of ganglion cells are less correlated compared to natural scenes, although significant correlations remain. We show that while the power spectrum of ganglion cells decays less steeply than that of natural scenes, it is not completely flattened. Finally, we find evidence that only the top principal components of the visual stimulus are transmitted. |
1008.1386 | Jun-Sok Huhh | Jun-Sok Huhh | The Role of Opportunistic Punishment in the Evolution of Cooperation: An
application of stochastic dynamics to public good game | 30 pages, 9 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses the role of an opportunistic punisher who may act
selfishly to free-ride cooperators or not to be exploited by defectors. To
consider an opportunistic punisher, we make a change to the sequence of the
one-shot public good game; instead of putting action choice first before
punishment, the commitment of punishment is declared first before choosing the
action of each participant. In this commitment-first setting, the punisher may
use information about her team, and may defect to increase her fitness in the
team. Reversing the sequence of the public good game can induce different
behavior of the punisher, which cannot be considered in the standard setting
where the punisher always chooses cooperation. Based on stochastic dynamics
developed by evolutionary economists and biologists, we show that an
opportunistic punisher can make cooperation evolve where a cooperative punisher
fails. This alternative route for the evolution of
cooperation relies paradoxically on the players' selfishness to profit from
others' unconditional cooperation and defection.
| [
{
"created": "Sun, 8 Aug 2010 06:03:18 GMT",
"version": "v1"
}
] | 2010-08-10 | [
[
"Huhh",
"Jun-Sok",
""
]
] | This paper discusses the role of an opportunistic punisher who may act selfishly to free-ride cooperators or not to be exploited by defectors. To consider an opportunistic punisher, we make a change to the sequence of the one-shot public good game; instead of putting action choice first before punishment, the commitment of punishment is declared first before choosing the action of each participant. In this commitment-first setting, the punisher may use information about her team, and may defect to increase her fitness in the team. Reversing the sequence of the public good game can induce different behavior of the punisher, which cannot be considered in the standard setting where the punisher always chooses cooperation. Based on stochastic dynamics developed by evolutionary economists and biologists, we show that an opportunistic punisher can make cooperation evolve where a cooperative punisher fails. This alternative route for the evolution of cooperation relies paradoxically on the players' selfishness to profit from others' unconditional cooperation and defection. |
1407.7412 | Jarad Mellard | Jarad P. Mellard, Ford Ballantyne IV | Conflict between dynamical and evolutionary stability in simple
ecosystems | null | null | 10.1007/s12080-014-0217-9 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here, we address the essential question of whether, in the context of
evolving populations, ecosystems attain properties that enable persistence of
the ecosystem itself. We use a simple ecosystem model describing resource,
producer, and consumer dynamics to analyze how evolution affects dynamical
stability properties of the ecosystem. In particular, we compare resilience of
the entire system after allowing the producer and consumer populations to
evolve to their evolutionarily stable strategy (ESS), to the maximum attainable
resilience. We find a substantial reduction in ecosystem resilience when
producers and consumers are allowed to evolve compared to the maximal
attainable resilience. This study illustrates the inherent difference and
possible conflict between maximizing individual-level fitness and maximizing
resilience of entire ecosystems.
| [
{
"created": "Mon, 28 Jul 2014 13:32:50 GMT",
"version": "v1"
}
] | 2014-07-29 | [
[
"Mellard",
"Jarad P.",
""
],
[
"Ballantyne",
"Ford",
"IV"
]
] | Here, we address the essential question of whether, in the context of evolving populations, ecosystems attain properties that enable persistence of the ecosystem itself. We use a simple ecosystem model describing resource, producer, and consumer dynamics to analyze how evolution affects dynamical stability properties of the ecosystem. In particular, we compare resilience of the entire system after allowing the producer and consumer populations to evolve to their evolutionarily stable strategy (ESS), to the maximum attainable resilience. We find a substantial reduction in ecosystem resilience when producers and consumers are allowed to evolve compared to the maximal attainable resilience. This study illustrates the inherent difference and possible conflict between maximizing individual-level fitness and maximizing resilience of entire ecosystems. |
1403.2292 | Samir Suweis Dr. | Samir Suweis and Paolo D'Odorico | Early warning signs in social-ecological networks | 14 pages, 4 figures. Supplementary Information available upon request | null | 10.1371/journal.pone.0101851 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social ecological systems are often difficult to investigate and manage
because of their inherent complexity [1]. Small variations in external drivers
can lead to abrupt changes associated with instabilities and bifurcations in
the underlying dynamics [2-4]. Anticipating critical transitions and divergence
from the present state of the system is particularly crucial to the prevention
or mitigation of the effects of unwanted and irreversible changes [5-10].
Recent research in ecology has focused on leading indicators of regime shift in
ecosystems characterized by one state variable [5,7,11,12]. The case of systems
with several mutually interacting components, however, has remained poorly
investigated [13], while the connection between network stability and research
on indicators for loss of resilience has been elusive [14]. Here we develop a
theoretical framework to analyze early warning signs of instability and regime
shift in social ecological networks. We provide analytical expressions for a
set of precursors of instability in social ecological systems with additive
noise for a variety of network structures. In particular, we show that the
covariance matrix of the dynamics can effectively anticipate the emergence of
instability. We also compare signals of early warning based on the dynamics of
suitably selected nodes, to indicators based on the integrated behavior of the
whole network. We find that the performances of these indicators are affected
by the network structure and the type of interaction among nodes. These results
provide new advances in multidimensional early warning analysis and offer a
framework to evaluate the resilience of social ecological networks.
| [
{
"created": "Fri, 7 Mar 2014 09:22:07 GMT",
"version": "v1"
}
] | 2017-02-08 | [
[
"Suweis",
"Samir",
""
],
[
"D'Odorico",
"Paolo",
""
]
] | Social ecological systems are often difficult to investigate and manage because of their inherent complexity [1]. Small variations in external drivers can lead to abrupt changes associated with instabilities and bifurcations in the underlying dynamics [2-4]. Anticipating critical transitions and divergence from the present state of the system is particularly crucial to the prevention or mitigation of the effects of unwanted and irreversible changes [5-10]. Recent research in ecology has focused on leading indicators of regime shift in ecosystems characterized by one state variable [5,7,11,12]. The case of systems with several mutually interacting components, however, has remained poorly investigated [13], while the connection between network stability and research on indicators for loss of resilience has been elusive [14]. Here we develop a theoretical framework to analyze early warning signs of instability and regime shift in social ecological networks. We provide analytical expressions for a set of precursors of instability in social ecological systems with additive noise for a variety of network structures. In particular, we show that the covariance matrix of the dynamics can effectively anticipate the emergence of instability. We also compare signals of early warning based on the dynamics of suitably selected nodes, to indicators based on the integrated behavior of the whole network. We find that the performances of these indicators are affected by the network structure and the type of interaction among nodes. These results provide new advances in multidimensional early warning analysis and offer a framework to evaluate the resilience of social ecological networks. |
2103.12495 | Mar\'ia Vallet-Regi | M.V. Cabanas, D. Lozano, A. Torres-Pardo, C. Sobrino, J.
Gonzalez-Calbet, D. Arcos, M. Vallet-Regi | Features of aminopropyl modified mesoporous silica nanoparticles.
Implications on the active targeting capability | null | Mater. Chem. Phys. 220, 260-269 (2018) | 10.1016/j.matchemphys.2018.09.005 | null | q-bio.TO physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Aminopropyl modified mesoporous SiO2 nanoparticles, MCM-41 type, have been
synthesized by the co-condensation method from tetraethylorthosilicate (TEOS)
and aminopropyltriethoxysilane (APTES). By modifying the TEOS/APTES ratio,
we have carried out an in-depth characterization of the nanoparticles as a
function of APTES content. Surface charge and nanoparticle morphology were
strongly influenced by the amount of APTES, and particles changed from
hexagonal to bean-like morphology as the APTES content increased. In addition,
the porous structure
was also affected, showing a contraction of the lattice parameter and pore
size, while increasing the wall thickness. These results provide new insights
into nanoparticle formation during the co-condensation process. The model
proposed herein considers that the different interactions established between
TEOS and APTES and the structure-directing agent have consequences for
pore size, wall thickness and particle morphology. Finally, APTES is an
excellent linker to covalently attach active targeting agents such as folate
groups. We have hypothesized that APTES could also play a role in the
biological behavior of the nanoparticles. Accordingly, the internalization efficiency of
the nanoparticles has been tested with cancerous LNCaP and non-cancerous
preosteoblast-like MC3T3-E1 cells. The results indicate a cooperative effect
between aminopropylsilane presence and folic acid, only for the cancerous LNCaP
cell line.
| [
{
"created": "Tue, 23 Mar 2021 12:43:49 GMT",
"version": "v1"
}
] | 2021-03-24 | [
[
"Cabanas",
"M. V.",
""
],
[
"Lozano",
"D.",
""
],
[
"Torres-Pardo",
"A.",
""
],
[
"Sobrino",
"C.",
""
],
[
"Gonzalez-Calbet",
"J.",
""
],
[
"Arcos",
"D.",
""
],
[
"Vallet-Regi",
"M.",
""
]
] | Aminopropyl modified mesoporous SiO2 nanoparticles, MCM-41 type, have been synthesized by the co-condensation method from tetraethylorthosilicate (TEOS) and aminopropyltriethoxysilane (APTES). By modifying the TEOS/APTES ratio, we have carried out an in-depth characterization of the nanoparticles as a function of APTES content. Surface charge and nanoparticle morphology were strongly influenced by the amount of APTES, and particles changed from hexagonal to bean-like morphology as the APTES content increased. In addition, the porous structure was also affected, showing a contraction of the lattice parameter and pore size, while increasing the wall thickness. These results provide new insights into nanoparticle formation during the co-condensation process. The model proposed herein considers that the different interactions established between TEOS and APTES and the structure-directing agent have consequences for pore size, wall thickness and particle morphology. Finally, APTES is an excellent linker to covalently attach active targeting agents such as folate groups. We have hypothesized that APTES could also play a role in the biological behavior of the nanoparticles. Accordingly, the internalization efficiency of the nanoparticles has been tested with cancerous LNCaP and non-cancerous preosteoblast-like MC3T3-E1 cells. The results indicate a cooperative effect between aminopropylsilane presence and folic acid, only for the cancerous LNCaP cell line. |
2009.08672 | Jean-S\'ebastien Guez | J.S Guez, S. Chenikher, J.Ph Cassar, P. Jacques | Setting up and modelling of overflowing fed-batch cultures of Bacillus
subtilis for the production and continuous removal of lipopeptides | null | Journal of Biotechnology, Elsevier, 2007, 131, pp.67 - 75 | 10.1016/j.jbiotec.2007.05.025 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work is related to the setup of overflowing exponential fed-batch
cultures (O-EFBC) derived from carbon-limited EFBC dedicated to the production
of mycosubtilin, an antifungal lipopeptide belonging to the iturin family.
O-EFBC permits the continuous removal of the product from the bioreactor
achieving a complete extraction of mycosubtilin. This paper also provides a
dynamical Monod-based growth model of this process that is accurate enough to
simulate the evolution of the specific growth rate and to correlate it to the
mycosubtilin specific productivity. Two particular and dependent phenomena
related to the foam overflow are taken into account by the model: the outgoing
flow rate of a broth volume and the loss of biomass. Interestingly, the biomass
concentration in the foam was found to be lower than the biomass concentration
in the bioreactor relating this process to a recycling one. Parameters of this
model are the growth yield on substrate and the maximal specific growth rate
estimated from experiments performed at feed rates of 0.062, 0.071 and 0.086
h^-1. The model was extrapolated to five additional experiments carried out at
feed rates of 0.008, 0.022, 0.040, 0.042 and 0.062 h^-1, enabling the
correlation of the mean specific growth rates with productivity results.
Finally, a feed rate of 0.086 h^-1, corresponding to a mean specific growth
rate of 0.070 h^-1, allowed a specific productivity of 1.27 mg of mycosubtilin
g^-1 of dried biomass h^-1.
| [
{
"created": "Fri, 18 Sep 2020 07:48:24 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Aug 2023 15:09:54 GMT",
"version": "v2"
}
] | 2023-08-24 | [
[
"Guez",
"J. S",
""
],
[
"Chenikher",
"S.",
""
],
[
"Cassar",
"J. Ph",
""
],
[
"Jacques",
"P.",
""
]
] | This work is related to the setup of overflowing exponential fed-batch cultures (O-EFBC) derived from carbon-limited EFBC dedicated to the production of mycosubtilin, an antifungal lipopeptide belonging to the iturin family. O-EFBC permits the continuous removal of the product from the bioreactor achieving a complete extraction of mycosubtilin. This paper also provides a dynamical Monod-based growth model of this process that is accurate enough to simulate the evolution of the specific growth rate and to correlate it to the mycosubtilin specific productivity. Two particular and dependent phenomena related to the foam overflow are taken into account by the model: the outgoing flow rate of a broth volume and the loss of biomass. Interestingly, the biomass concentration in the foam was found to be lower than the biomass concentration in the bioreactor relating this process to a recycling one. Parameters of this model are the growth yield on substrate and the maximal specific growth rate estimated from experiments performed at feed rates of 0.062, 0.071 and 0.086 h^-1. The model was extrapolated to five additional experiments carried out at feed rates of 0.008, 0.022, 0.040, 0.042 and 0.062 h^-1, enabling the correlation of the mean specific growth rates with productivity results. Finally, a feed rate of 0.086 h^-1, corresponding to a mean specific growth rate of 0.070 h^-1, allowed a specific productivity of 1.27 mg of mycosubtilin g^-1 of dried biomass h^-1. |
1506.02085 | Min Xu | Min Xu, Rudy Setiono | Gene selection for cancer classification using a hybrid of univariate
and multivariate feature selection methods | null | Applied Genomics and Proteomics. 2003:2(2)79-91 | null | null | q-bio.QM cs.CE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various approaches to gene selection for cancer classification based on
microarray data can be found in the literature and they may be grouped into two
categories: univariate methods and multivariate methods. Univariate methods
look at each gene in the data in isolation from others. They measure the
contribution of a particular gene to the classification without considering the
presence of the other genes. In contrast, multivariate methods measure the
relative contribution of a gene to the classification by taking the other genes
in the data into consideration. Multivariate methods select fewer genes in
general. However, the selection process of multivariate methods may be
sensitive to the presence of irrelevant genes, noise in the expression values and
outliers in the training data. At the same time, the computational cost of
multivariate methods is high. To overcome the disadvantages of the two types of
approaches, we propose a hybrid method to obtain gene sets that are small and
highly discriminative.
We devise our hybrid method from the univariate Maximum Likelihood method
(LIK) and the multivariate Recursive Feature Elimination method (RFE). We
analyze the properties of these methods and systematically test the
effectiveness of our proposed method on two cancer microarray datasets. Our
experiments on a leukemia dataset and a small, round blue cell tumors dataset
demonstrate the effectiveness of our hybrid method. It is able to discover sets
consisting of fewer genes than those reported in the literature and at the same
time achieve the same or better prediction accuracy.
| [
{
"created": "Fri, 5 Jun 2015 23:29:06 GMT",
"version": "v1"
}
] | 2015-06-18 | [
[
"Xu",
"Min",
""
],
[
"Setiono",
"Rudy",
""
]
] | Various approaches to gene selection for cancer classification based on microarray data can be found in the literature and they may be grouped into two categories: univariate methods and multivariate methods. Univariate methods look at each gene in the data in isolation from others. They measure the contribution of a particular gene to the classification without considering the presence of the other genes. In contrast, multivariate methods measure the relative contribution of a gene to the classification by taking the other genes in the data into consideration. Multivariate methods select fewer genes in general. However, the selection process of multivariate methods may be sensitive to the presence of irrelevant genes, noises in the expression and outliers in the training data. At the same time, the computational cost of multivariate methods is high. To overcome the disadvantages of the two types of approaches, we propose a hybrid method to obtain gene sets that are small and highly discriminative. We devise our hybrid method from the univariate Maximum Likelihood method (LIK) and the multivariate Recursive Feature Elimination method (RFE). We analyze the properties of these methods and systematically test the effectiveness of our proposed method on two cancer microarray datasets. Our experiments on a leukemia dataset and a small, round blue cell tumors dataset demonstrate the effectiveness of our hybrid method. It is able to discover sets consisting of fewer genes than those reported in the literature and at the same time achieve the same or better prediction accuracy. |
1506.06572 | Weini Huang | Weini Huang, Christoph Hauert, Arne Traulsen | Stochastic evolutionary games in dynamic populations | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frequency dependent selection and demographic fluctuations play important
roles in evolutionary and ecological processes. Under frequency dependent
selection, the average fitness of the population may increase or decrease based
on interactions between individuals within the population. This should be
reflected in fluctuations of the population size even in constant environments.
Here, we propose a stochastic model, which naturally combines these two
evolutionary ingredients by assuming frequency dependent competition between
different types in an individual-based model. In contrast to previous game
theoretic models, the carrying capacity of the population and thus the
population size is determined by pairwise competition of individuals mediated
by evolutionary games and demographic stochasticity. In the limit of infinite
population size, the averaged stochastic dynamics is captured by the
deterministic competitive Lotka-Volterra equations. In small populations,
demographic stochasticity may instead lead to the extinction of the entire
population. As the population size is driven by the fitness in evolutionary
games, a population of cooperators is less prone to go extinct than a
population of defectors, whereas in the usual systems of fixed size, the
population would thrive regardless of its average payoff.
| [
{
"created": "Mon, 22 Jun 2015 12:46:32 GMT",
"version": "v1"
}
] | 2015-06-23 | [
[
"Huang",
"Weini",
""
],
[
"Hauert",
"Christoph",
""
],
[
"Traulsen",
"Arne",
""
]
] | Frequency dependent selection and demographic fluctuations play important roles in evolutionary and ecological processes. Under frequency dependent selection, the average fitness of the population may increase or decrease based on interactions between individuals within the population. This should be reflected in fluctuations of the population size even in constant environments. Here, we propose a stochastic model, which naturally combines these two evolutionary ingredients by assuming frequency dependent competition between different types in an individual-based model. In contrast to previous game theoretic models, the carrying capacity of the population and thus the population size is determined by pairwise competition of individuals mediated by evolutionary games and demographic stochasticity. In the limit of infinite population size, the averaged stochastic dynamics is captured by the deterministic competitive Lotka-Volterra equations. In small populations, demographic stochasticity may instead lead to the extinction of the entire population. As the population size is driven by the fitness in evolutionary games, a population of cooperators is less prone to go extinct than a population of defectors, whereas in the usual systems of fixed size, the population would thrive regardless of its average payoff. |
1106.5778 | Simon DeDeo | Simon DeDeo | Effective Theories for Circuits and Automata | 11 pages, 9 figures | Chaos 21, 037106 (2011) | 10.1063/1.3640747 | SFI Working Paper #11-09-47 | q-bio.QM cs.FL nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abstracting an effective theory from a complicated process is central to the
study of complexity. Even when the underlying mechanisms are understood, or at
least measurable, the presence of dissipation and irreversibility in
biological, computational and social systems makes the problem harder. Here we
demonstrate the construction of effective theories in the presence of both
irreversibility and noise, in a dynamical model with underlying feedback. We
use the Krohn-Rhodes theorem to show how the composition of underlying
mechanisms can lead to innovations in the emergent effective theory. We show
how dissipation and irreversibility fundamentally limit the lifetimes of these
emergent structures, even though, on short timescales, the group properties may
be enriched compared to their noiseless counterparts.
| [
{
"created": "Tue, 28 Jun 2011 19:46:42 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Sep 2011 19:45:24 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Oct 2011 17:20:52 GMT",
"version": "v3"
},
{
"created": "Mon, 20 Feb 2012 17:19:05 GMT",
"version": "v4"
}
] | 2012-02-21 | [
[
"DeDeo",
"Simon",
""
]
] | Abstracting an effective theory from a complicated process is central to the study of complexity. Even when the underlying mechanisms are understood, or at least measurable, the presence of dissipation and irreversibility in biological, computational and social systems makes the problem harder. Here we demonstrate the construction of effective theories in the presence of both irreversibility and noise, in a dynamical model with underlying feedback. We use the Krohn-Rhodes theorem to show how the composition of underlying mechanisms can lead to innovations in the emergent effective theory. We show how dissipation and irreversibility fundamentally limit the lifetimes of these emergent structures, even though, on short timescales, the group properties may be enriched compared to their noiseless counterparts. |
2201.11659 | Samuel Okyere | Samuel Okyere, Joseph Ackora-Prah, Ebenezer Bonyah and Mary Osei Fokuo | A mathematical model of COVID-19 with an underlying health condition
using fraction order derivative | 40 pages, 15 figures. arXiv admin note: text overlap with
arXiv:2201.08224, arXiv:2201.08689 | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Studies have shown that some people with underlying conditions such as
cancer, heart failure, diabetes and hypertension are more likely to get
COVID-19 and have worse outcomes. In this paper, a fractional-order derivative
is proposed to study the transmission dynamics of COVID-19, taking into
consideration a population having an underlying condition. The fractional
derivative is defined in the Atangana-Baleanu-Caputo (ABC) sense. For the
proposed model, we find the basic reproductive number and the equilibrium
points, and determine the stability of these equilibrium points. The existence
and uniqueness of the solution are established, along with Hyers-Ulam
stability. A numerical scheme for the operator was implemented to obtain
simulations supporting the analytical results. COVID-19 cases from March to
June 2020 in Ghana were used to validate the model. The numerical simulation
revealed a decline in infections as the fractional order was increased from
0.6 within the 120 days. Time-dependent optimal control was incorporated into
the model. The numerical simulation of the optimal control revealed that
vaccination reduces the number of individuals susceptible to COVID-19, those
exposed to COVID-19, and COVID-19 patients with and without an underlying
health condition.
| [
{
"created": "Wed, 26 Jan 2022 14:02:51 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Mar 2022 22:55:43 GMT",
"version": "v2"
}
] | 2022-03-29 | [
[
"Okyere",
"Samuel",
""
],
[
"Ackora-Prah",
"Joseph",
""
],
[
"Bonyah",
"Ebenezer",
""
],
[
"Fokuo",
"Mary Osei",
""
]
] | Studies have shown that some people with underlying conditions such as cancer, heart failure, diabetes and hypertension are more likely to get COVID-19 and have worse outcomes. In this paper, a fractional-order derivative is proposed to study the transmission dynamics of COVID-19, taking into consideration a population having an underlying condition. The fractional derivative is defined in the Atangana-Baleanu-Caputo (ABC) sense. For the proposed model, we find the basic reproductive number and the equilibrium points, and determine the stability of these equilibrium points. The existence and uniqueness of the solution are established, along with Hyers-Ulam stability. A numerical scheme for the operator was implemented to obtain simulations supporting the analytical results. COVID-19 cases from March to June 2020 in Ghana were used to validate the model. The numerical simulation revealed a decline in infections as the fractional order was increased from 0.6 within the 120 days. Time-dependent optimal control was incorporated into the model. The numerical simulation of the optimal control revealed that vaccination reduces the number of individuals susceptible to COVID-19, those exposed to COVID-19, and COVID-19 patients with and without an underlying health condition. |
2106.08995 | Birgitta Dresp-Langley | Birgitta Dresp-Langley, Rongrong Liu, John M. Wandeto | Surgical task expertise detected by a self-organizing neural network map | Conference on Automation in Medical Engineering AUTOMED21, University
Hospital Basel, Switzerland, 2021, June 8-9 | null | null | null | q-bio.NC cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individual grip force profiling of bimanual simulator task performance of
experts and novices, using a robotic control device designed for endoscopic
surgery, permits defining benchmark criteria that distinguish true expert task
skills from the skills of novices or trainee surgeons. Grip force variability
in a true expert and a complete novice executing a robot-assisted surgical
simulator task reveals statistically significant differences as a function of
task expertise. Here we show that the skill-specific differences in local grip
forces are predicted by the output metric of a Self-Organizing neural network
Map (SOM) with a bio-inspired functional architecture that maps the functional
connectivity of somatosensory neural networks in the primate brain.
| [
{
"created": "Thu, 3 Jun 2021 10:48:10 GMT",
"version": "v1"
}
] | 2021-06-17 | [
[
"Dresp-Langley",
"Birgitta",
""
],
[
"Liu",
"Rongrong",
""
],
[
"Wandeto",
"John M.",
""
]
] | Individual grip force profiling of bimanual simulator task performance of experts and novices, using a robotic control device designed for endoscopic surgery, permits defining benchmark criteria that distinguish true expert task skills from the skills of novices or trainee surgeons. Grip force variability in a true expert and a complete novice executing a robot-assisted surgical simulator task reveals statistically significant differences as a function of task expertise. Here we show that the skill-specific differences in local grip forces are predicted by the output metric of a Self-Organizing neural network Map (SOM) with a bio-inspired functional architecture that maps the functional connectivity of somatosensory neural networks in the primate brain. |
1105.4599 | Robert Hilborn | Robert C. Hilborn, Benjamin Brookshire, Jenna Mattingly, Anusha
Purushotham, and Anuraag Sharma | The transition between stochastic and deterministic behavior in an
excitable gene circuit | PLoS ONE: Research Article, published 11 Apr 2012 | null | 10.1371/journal.pone.0034536 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the connection between a stochastic simulation model and an
ordinary differential equations (ODEs) model of the dynamics of an excitable
gene circuit that exhibits noise-induced oscillations. Near a bifurcation point
in the ODE model, the stochastic simulation model yields behavior dramatically
different from that predicted by the ODE model. We analyze how that behavior
depends on the gene copy number and find very slow convergence to the large
number limit near the bifurcation point. The implications for understanding the
dynamics of gene circuits and other birth-death dynamical systems with small
numbers of constituents are discussed.
| [
{
"created": "Mon, 23 May 2011 19:57:51 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Apr 2012 21:14:51 GMT",
"version": "v2"
}
] | 2012-04-26 | [
[
"Hilborn",
"Robert C.",
""
],
[
"Brookshire",
"Benjamin",
""
],
[
"Mattingly",
"Jenna",
""
],
[
"Purushotham",
"Anusha",
""
],
[
"Sharma",
"Anuraag",
""
]
] | We explore the connection between a stochastic simulation model and an ordinary differential equations (ODEs) model of the dynamics of an excitable gene circuit that exhibits noise-induced oscillations. Near a bifurcation point in the ODE model, the stochastic simulation model yields behavior dramatically different from that predicted by the ODE model. We analyze how that behavior depends on the gene copy number and find very slow convergence to the large number limit near the bifurcation point. The implications for understanding the dynamics of gene circuits and other birth-death dynamical systems with small numbers of constituents are discussed. |
1909.05802 | Laura Sidhom | Laura Sidhom and Tobias Galla | Ecological communities from random generalised Lotka-Volterra dynamics
with non-linear feedback | 48 pages, 11 figures | Phys. Rev. E 101, 032101 (2020) | 10.1103/PhysRevE.101.032101 | null | q-bio.PE cond-mat.dis-nn cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the outcome of generalised Lotka-Volterra dynamics of
ecological communities with random interaction coefficients and non-linear
feedback. We show in simulations that the saturation of non-linear feedback
stabilises the dynamics. This is confirmed in an analytical
generating-functional approach to generalised Lotka-Volterra equations with
piecewise linear saturating response. For such systems we are able to derive
self-consistent relations governing the stable fixed-point phase, and to carry
out a linear stability analysis to predict the onset of unstable behaviour. We
investigate in detail the combined effects of the mean, variance and
co-variance of the random interaction coefficients, and the saturation value of
the non-linear response. We find that stability and diversity increase with
the introduction of non-linear feedback, where decreasing the saturation value
has a similar effect to decreasing the co-variance. We also find co-operation
to no longer have a detrimental effect on stability with non-linear feedback,
and the order parameters mean abundance and diversity to be less dependent on
the symmetry of interactions with stronger saturation.
| [
{
"created": "Thu, 12 Sep 2019 17:00:39 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Mar 2020 18:46:19 GMT",
"version": "v2"
}
] | 2020-03-27 | [
[
"Sidhom",
"Laura",
""
],
[
"Galla",
"Tobias",
""
]
] | We investigate the outcome of generalised Lotka-Volterra dynamics of ecological communities with random interaction coefficients and non-linear feedback. We show in simulations that the saturation of non-linear feedback stabilises the dynamics. This is confirmed in an analytical generating-functional approach to generalised Lotka-Volterra equations with piecewise linear saturating response. For such systems we are able to derive self-consistent relations governing the stable fixed-point phase, and to carry out a linear stability analysis to predict the onset of unstable behaviour. We investigate in detail the combined effects of the mean, variance and co-variance of the random interaction coefficients, and the saturation value of the non-linear response. We find that stability and diversity increases with the introduction of non-linear feedback, where decreasing the saturation value has a similar effect to decreasing the co-variance. We also find co-operation to no longer have a detrimental effect on stability with non-linear feedback, and the order parameters mean abundance and diversity to be less dependent on the symmetry of interactions with stronger saturation. |
2406.11906 | Jingbo Zhou | Jingbo Zhou, Shaorong Chen, Jun Xia, Sizhe Liu, Tianze Ling, Wenjie
Du, Yue Liu, Jianwei Yin, Stan Z. Li | NovoBench: Benchmarking Deep Learning-based De Novo Peptide Sequencing
Methods in Proteomics | null | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by/4.0/ | Tandem mass spectrometry has played a pivotal role in advancing proteomics,
enabling the high-throughput analysis of protein composition in biological
tissues. Many deep learning methods have been developed for the \emph{de novo}
peptide sequencing task, i.e., predicting the peptide sequence for an observed
mass spectrum. However, two key challenges seriously hinder the further
advancement of this important task. Firstly, since there is no consensus for
the evaluation datasets, the empirical results in different research papers are
often not comparable, leading to unfair comparison. Secondly, the current
methods are usually limited to amino acid-level or peptide-level precision and
recall metrics. In this work, we present the first unified benchmark NovoBench
for \emph{de novo} peptide sequencing, which comprises diverse mass spectrum
data, integrated models, and comprehensive evaluation metrics. Recent
impressive methods, including DeepNovo, PointNovo, Casanovo, InstaNovo, AdaNovo
and $\pi$-HelixNovo are integrated into our framework. In addition to amino
acid-level and peptide-level precision and recall, we evaluate the models'
performance in terms of identifying post-translational modifications (PTMs),
efficiency, and robustness to peptide length, noise peaks and missing-fragment
ratio, which are important influencing factors yet are seldom considered.
Leveraging this benchmark, we conduct a large-scale study of current methods
and report many insightful findings that open up new possibilities for future
development. The benchmark will be open-sourced to facilitate future research
and application.
| [
{
"created": "Sun, 16 Jun 2024 08:23:21 GMT",
"version": "v1"
}
] | 2024-06-19 | [
[
"Zhou",
"Jingbo",
""
],
[
"Chen",
"Shaorong",
""
],
[
"Xia",
"Jun",
""
],
[
"Liu",
"Sizhe",
""
],
[
"Ling",
"Tianze",
""
],
[
"Du",
"Wenjie",
""
],
[
"Liu",
"Yue",
""
],
[
"Yin",
"Jianwei",
... | Tandem mass spectrometry has played a pivotal role in advancing proteomics, enabling the high-throughput analysis of protein composition in biological tissues. Many deep learning methods have been developed for the \emph{de novo} peptide sequencing task, i.e., predicting the peptide sequence for an observed mass spectrum. However, two key challenges seriously hinder the further advancement of this important task. Firstly, since there is no consensus for the evaluation datasets, the empirical results in different research papers are often not comparable, leading to unfair comparison. Secondly, the current methods are usually limited to amino acid-level or peptide-level precision and recall metrics. In this work, we present the first unified benchmark NovoBench for \emph{de novo} peptide sequencing, which comprises diverse mass spectrum data, integrated models, and comprehensive evaluation metrics. Recent impressive methods, including DeepNovo, PointNovo, Casanovo, InstaNovo, AdaNovo and $\pi$-HelixNovo are integrated into our framework. In addition to amino acid-level and peptide-level precision and recall, we evaluate the models' performance in terms of identifying post-translational modifications (PTMs), efficiency, and robustness to peptide length, noise peaks and missing-fragment ratio, which are important influencing factors yet are seldom considered. Leveraging this benchmark, we conduct a large-scale study of current methods and report many insightful findings that open up new possibilities for future development. The benchmark will be open-sourced to facilitate future research and application. |
1806.04704 | Gerard Rinkus | Gerard Rinkus | Sparse distributed representation, hierarchy, critical periods,
metaplasticity: the keys to lifelong fixed-time learning and best-match
retrieval | 6 pages, 4 figs. Accepted for talk at Biological Distributed
Algorithms 2018. July 23, 2018. London | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the more important hallmarks of human intelligence, which any
artificial general intelligence (AGI) should have, are the following. 1. It
must be capable of on-line learning, including with single/few trials. 2.
Memories/knowledge must be permanent over lifelong durations, safe from
catastrophic forgetting. Some confabulation, i.e., semantically plausible
retrieval errors, may gradually accumulate over time. 3. The time to both: a)
learn a new item, and b) retrieve the best-matching / most relevant item(s),
i.e., do similarity-based retrieval, must remain constant throughout the
lifetime. 4. The system should never become full: it must remain able to store
new information, i.e., make new permanent memories, throughout very long
lifetimes. No artificial computational system has been shown to have all these
properties. Here, we describe a neuromorphic associative memory model, Sparsey,
which does, in principle, possess them all. We cite prior results supporting
possession of hallmarks 1 and 3 and sketch an argument, hinging on strongly
recursive, hierarchical, part-whole compositional structure of natural data,
that Sparsey also possesses hallmarks 2 and 4.
| [
{
"created": "Sat, 2 Jun 2018 17:16:10 GMT",
"version": "v1"
}
] | 2018-06-14 | [
[
"Rinkus",
"Gerard",
""
]
] | Among the more important hallmarks of human intelligence, which any artificial general intelligence (AGI) should have, are the following. 1. It must be capable of on-line learning, including with single/few trials. 2. Memories/knowledge must be permanent over lifelong durations, safe from catastrophic forgetting. Some confabulation, i.e., semantically plausible retrieval errors, may gradually accumulate over time. 3. The time to both: a) learn a new item, and b) retrieve the best-matching / most relevant item(s), i.e., do similarity-based retrieval, must remain constant throughout the lifetime. 4. The system should never become full: it must remain able to store new information, i.e., make new permanent memories, throughout very long lifetimes. No artificial computational system has been shown to have all these properties. Here, we describe a neuromorphic associative memory model, Sparsey, which does, in principle, possess them all. We cite prior results supporting possession of hallmarks 1 and 3 and sketch an argument, hinging on strongly recursive, hierarchical, part-whole compositional structure of natural data, that Sparsey also possesses hallmarks 2 and 4. |
2408.03951 | Mariia Sorokina | Mariia Sorokina | Harmonic fractal transformation, 4R-regeneration and noise shaping for
ultra wide-band reception in FitzHugh-Nagumo neuronal model | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human hearing range significantly surpasses the typical neuronal spiking
frequency. Yet, neurons with their modest frequency range not only efficiently
receive and process multiple orders higher frequency signals, but also
demonstrate remarkable stability and adaptability to frequency variations in
brain functional connectivity. Ability to process signals beyond the
limitations of the receiver temporal or frequency (bandwidth) resolution is
highly desirable yet requires complex design architectures. Using the
FitzHugh-Nagumo model we reveal the harmonic fractal transformation of
frequency and bandwidth, which enables the Nyquist rate integer (for low
frequencies) and sub-integer (for high frequencies) multiplication. We also
demonstrate for the first time that noise shaping can be achieved in a simple
RLC-circuit without a requirement of a delay line. The discovered effect
presents a novel regeneration type - 4R: re-amplifying, re-shaping, re-timing,
and re-modulating and due to the fractal nature of transformation offers a
remarkable regenerative efficiency. The effect is a generalization of phase
locking to non-periodic encoded signals. The discovered physical mechanism
explains how using neuronal functionality one can receive and process signals
over an ultra-wide band (below or above the spiking neuronal range by multiple
orders) and below the noise floor.
| [
{
"created": "Mon, 22 Jul 2024 17:56:07 GMT",
"version": "v1"
}
] | 2024-08-09 | [
[
"Sorokina",
"Mariia",
""
]
] | Human hearing range significantly surpasses the typical neuronal spiking frequency. Yet, neurons with their modest frequency range not only efficiently receive and process multiple orders higher frequency signals, but also demonstrate remarkable stability and adaptability to frequency variations in brain functional connectivity. Ability to process signals beyond the limitations of the receiver temporal or frequency (bandwidth) resolution is highly desirable yet requires complex design architectures. Using the FitzHugh-Nagumo model we reveal the harmonic fractal transformation of frequency and bandwidth, which enables the Nyquist rate integer (for low frequencies) and sub-integer (for high frequencies) multiplication. We also demonstrate for the first time that noise shaping can be achieved in a simple RLC-circuit without a requirement of a delay line. The discovered effect presents a novel regeneration type - 4R: re-amplifying, re-shaping, re-timing, and re-modulating and due to the fractal nature of transformation offers a remarkable regenerative efficiency. The effect is a generalization of phase locking to non-periodic encoded signals. The discovered physical mechanism explains how using neuronal functionality one can receive and process signals over an ultra-wide band (below or above the spiking neuronal range by multiple orders) and below the noise floor. |
0912.2108 | Casey Schneider-Mizell | Casey M. Schneider-Mizell, Jack M. Parent, Eshel Ben-Jacob, Michal
Zochowski, Leonard M. Sander | Network structure determines patterns of network reorganization during
adult neurogenesis | 28 pages, 10 figures | null | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New cells are generated throughout life and integrate into the hippocampus
via the process of adult neurogenesis. Epileptogenic brain injury induces many
structural changes in the hippocampus, including the death of interneurons and
altered connectivity patterns. The pathological neurogenic niche is associated
with aberrant neurogenesis, though the role of the network-level changes in
development of epilepsy is not well understood. In this paper, we use
computational simulations to investigate the effect of network environment on
structural and functional outcomes of neurogenesis. We find that small-world
networks with external stimulus are able to be augmented by activity-seeking
neurons in a manner that enhances activity at the stimulated sites without
altering the network as a whole. However, when inhibition is decreased or
connectivity patterns are changed, new cells are both less responsive to
stimulus and the new cells are more likely to drive the network into bursting
dynamics. Our results suggest that network-level changes caused by
epileptogenic injury can create an environment where neurogenic reorganization
can induce or intensify epileptic dynamics and abnormal integration of new
cells.
| [
{
"created": "Thu, 10 Dec 2009 22:07:37 GMT",
"version": "v1"
}
] | 2009-12-14 | [
[
"Schneider-Mizell",
"Casey M.",
""
],
[
"Parent",
"Jack M.",
""
],
[
"Ben-Jacob",
"Eshel",
""
],
[
"Zochowski",
"Michal",
""
],
[
"Sander",
"Leonard M.",
""
]
] | New cells are generated throughout life and integrate into the hippocampus via the process of adult neurogenesis. Epileptogenic brain injury induces many structural changes in the hippocampus, including the death of interneurons and altered connectivity patterns. The pathological neurogenic niche is associated with aberrant neurogenesis, though the role of the network-level changes in development of epilepsy is not well understood. In this paper, we use computational simulations to investigate the effect of network environment on structural and functional outcomes of neurogenesis. We find that small-world networks with external stimulus are able to be augmented by activity-seeking neurons in a manner that enhances activity at the stimulated sites without altering the network as a whole. However, when inhibition is decreased or connectivity patterns are changed, new cells are both less responsive to stimulus and the new cells are more likely to drive the network into bursting dynamics. Our results suggest that network-level changes caused by epileptogenic injury can create an environment where neurogenic reorganization can induce or intensify epileptic dynamics and abnormal integration of new cells. |
1305.0782 | Benjamin M. Friedrich | Veikko Geyer, Frank J\"ulicher, Jonathon Howard, Benjamin M Friedrich | Cell body rocking is a dominant mechanism for flagellar synchronization
in a swimming alga | 40 pages, 15 color figures | VF Geyer, F Julicher, J Howard, BM Friedrich: Proc. Natl. Acad.
Sci. U.S.A., 110(45), p. 18058(6), 2013 | 10.1073/pnas.1300895110 | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The unicellular green alga Chlamydomonas swims with two flagella, which can
synchronize their beat. Synchronized beating is required to swim both fast and
straight. A long-standing hypothesis proposes that synchronization of flagella
results from hydrodynamic coupling, but the details are not understood. Here,
we present realistic hydrodynamic computations and high-speed tracking
experiments of swimming cells that show how a perturbation from the
synchronized state causes rotational motion of the cell body. This rotation
feeds back on the flagellar dynamics via hydrodynamic friction forces and
rapidly restores the synchronized state in our theory. We calculate that this
`cell body rocking' provides the dominant contribution to synchronization in
swimming cells, whereas direct hydrodynamic interactions between the flagella
contribute negligibly. We experimentally confirmed the coupling between
flagellar beating and cell body rocking predicted by our theory. This work
appeared also in the Proceedings of the National Academy of Sciences of the
U.S.A. as: Geyer et al., PNAS 110(45), p. 18058(6), 2013.
| [
{
"created": "Fri, 3 May 2013 17:11:18 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Nov 2013 22:14:37 GMT",
"version": "v2"
}
] | 2013-11-26 | [
[
"Geyer",
"Veikko",
""
],
[
"Jülicher",
"Frank",
""
],
[
"Howard",
"Jonathon",
""
],
[
"Friedrich",
"Benjamin M",
""
]
] | The unicellular green alga Chlamydomonas swims with two flagella, which can synchronize their beat. Synchronized beating is required to swim both fast and straight. A long-standing hypothesis proposes that synchronization of flagella results from hydrodynamic coupling, but the details are not understood. Here, we present realistic hydrodynamic computations and high-speed tracking experiments of swimming cells that show how a perturbation from the synchronized state causes rotational motion of the cell body. This rotation feeds back on the flagellar dynamics via hydrodynamic friction forces and rapidly restores the synchronized state in our theory. We calculate that this `cell body rocking' provides the dominant contribution to synchronization in swimming cells, whereas direct hydrodynamic interactions between the flagella contribute negligibly. We experimentally confirmed the coupling between flagellar beating and cell body rocking predicted by our theory. This work appeared also in the Proceedings of the National Academy of Sciences of the U.S.A. as: Geyer et al., PNAS 110(45), p. 18058(6), 2013. |
1310.6980 | Carla Sofia Carvalho | C. Sofia Carvalho, Dimitrios Vlachakis, Georgia Tsiliki, Vasileios
Megalooikonomou and Sophia Kossida | Protein signatures using electrostatic molecular surfaces in harmonic
space | 9 pages, 10 figures Published in PeerJ (2013),
https://peerj.com/articles/185/ | null | 10.7717/peerj.185 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We developed a novel method based on the Fourier analysis of protein
molecular surfaces to speed up the analysis of the vast structural data
generated in the post-genomic era. This method computes the power spectrum of
surfaces of the molecular electrostatic potential, whose three-dimensional
coordinates have been either experimentally or theoretically determined. Thus
we achieve a reduction of the initial three-dimensional information on the
molecular surface to the one-dimensional information on pairs of points at a
fixed scale apart. Consequently, the similarity search in our method is
computationally less demanding and significantly faster than shape comparison
methods. As proof of principle, we applied our method to a training set of
viral proteins that are involved in major diseases such as Hepatitis C, Dengue
fever, Yellow fever, Bovine viral diarrhea and West Nile fever. The training
set contains proteins of four different protein families, as well as a
mammalian representative enzyme. We found that the power spectrum successfully
assigns a unique signature to each protein included in our training set, thus
providing a direct probe of functional similarity among proteins. The results
agree with established biological data from conventional structural
biochemistry analyses.
| [
{
"created": "Fri, 25 Oct 2013 17:04:15 GMT",
"version": "v1"
}
] | 2015-09-18 | [
[
"Carvalho",
"C. Sofia",
""
],
[
"Vlachakis",
"Dimitrios",
""
],
[
"Tsiliki",
"Georgia",
""
],
[
"Megalooikonomou",
"Vasileios",
""
],
[
"Kossida",
"Sophia",
""
]
] | We developed a novel method based on the Fourier analysis of protein molecular surfaces to speed up the analysis of the vast structural data generated in the post-genomic era. This method computes the power spectrum of surfaces of the molecular electrostatic potential, whose three-dimensional coordinates have been either experimentally or theoretically determined. Thus we achieve a reduction of the initial three-dimensional information on the molecular surface to the one-dimensional information on pairs of points at a fixed scale apart. Consequently, the similarity search in our method is computationally less demanding and significantly faster than shape comparison methods. As proof of principle, we applied our method to a training set of viral proteins that are involved in major diseases such as Hepatitis C, Dengue fever, Yellow fever, Bovine viral diarrhea and West Nile fever. The training set contains proteins of four different protein families, as well as a mammalian representative enzyme. We found that the power spectrum successfully assigns a unique signature to each protein included in our training set, thus providing a direct probe of functional similarity among proteins. The results agree with established biological data from conventional structural biochemistry analyses. |
2005.13150 | Siawoosh Mohammadi | Siawoosh Mohammadi and Martina F. Callaghan | Towards in vivo g-ratio mapping using MRI: unifying myelin and diffusion
imaging | Will be published as a review article in Journal of Neuroscience
Methods as part of the Special Issue with Hu Cheng and Vince Calhoun as Guest
Editors | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The g-ratio, quantifying the comparative thickness of the myelin sheath
encasing an axon, is a geometrical invariant that has high functional relevance
because of its importance in determining neuronal conduction velocity. Advances
in MRI data acquisition and signal modelling have put in vivo mapping of the
g-ratio, across the entire white matter, within our reach. This capacity would
greatly increase our knowledge of the nervous system: how it functions, and how
it is impacted by disease. This is the second review on the topic of g-ratio
mapping using MRI. As such, it summarizes the most recent developments in the
field, while also providing methodological background pertinent to aggregate
g-ratio weighted mapping, and discussing pitfalls associated with these
approaches. Using simulations based on recently published data, this review
demonstrates the relevance of the calibration step for three myelin-markers
(macromolecular tissue volume, myelin water fraction, and bound pool fraction).
It highlights the need to estimate both the slope and offset of the
relationship between these MRI-based markers and the true myelin volume
fraction if we are really to achieve the goal of precise, high sensitivity
g-ratio mapping in vivo. Other challenges discussed in this review further
evidence the need for gold standard measurements of human brain tissue from ex
vivo histology. We conclude that the quest to find the most appropriate MRI
biomarkers to enable in vivo g-ratio mapping is ongoing, with the potential of
many novel techniques yet to be investigated.
| [
{
"created": "Wed, 27 May 2020 04:25:50 GMT",
"version": "v1"
}
] | 2020-05-28 | [
[
"Mohammadi",
"Siawoosh",
""
],
[
"Callaghan",
"Martina F.",
""
]
] | The g-ratio, quantifying the comparative thickness of the myelin sheath encasing an axon, is a geometrical invariant that has high functional relevance because of its importance in determining neuronal conduction velocity. Advances in MRI data acquisition and signal modelling have put in vivo mapping of the g-ratio, across the entire white matter, within our reach. This capacity would greatly increase our knowledge of the nervous system: how it functions, and how it is impacted by disease. This is the second review on the topic of g-ratio mapping using MRI. As such, it summarizes the most recent developments in the field, while also providing methodological background pertinent to aggregate g-ratio weighted mapping, and discussing pitfalls associated with these approaches. Using simulations based on recently published data, this review demonstrates the relevance of the calibration step for three myelin-markers (macromolecular tissue volume, myelin water fraction, and bound pool fraction). It highlights the need to estimate both the slope and offset of the relationship between these MRI-based markers and the true myelin volume fraction if we are really to achieve the goal of precise, high sensitivity g-ratio mapping in vivo. Other challenges discussed in this review further evidence the need for gold standard measurements of human brain tissue from ex vivo histology. We conclude that the quest to find the most appropriate MRI biomarkers to enable in vivo g-ratio mapping is ongoing, with the potential of many novel techniques yet to be investigated. |
1704.03009 | Jay Newby | Jay M. Newby, Alison M. Schaefer, Phoebe T. Lee, M. Gregory Forest,
and Samuel K. Lai | Convolutional neural networks automate detection for tracking of
submicron scale particles in 2D and 3D | null | null | 10.1073/pnas.1804420115 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Particle tracking is a powerful biophysical tool that requires conversion of
large video files into position time series, i.e. traces of the species of
interest for data analysis. Current tracking methods, based on a limited set of
input parameters to identify bright objects, are ill-equipped to handle the
spectrum of spatiotemporal heterogeneity and poor signal-to-noise ratios
typically presented by submicron species in complex biological environments.
Extensive user involvement is frequently necessary to optimize and execute
tracking methods, which is not only inefficient but introduces user bias. To
develop a fully automated tracking method, we developed a convolutional neural
network for particle localization from image data, comprised of over 6,000
parameters, and employed machine learning techniques to train the network on a
diverse portfolio of video conditions. The neural network tracker provides
unprecedented automation and accuracy, with exceptionally low false positive
and false negative rates on both 2D and 3D simulated videos and 2D experimental
videos of difficult-to-track species.
| [
{
"created": "Mon, 10 Apr 2017 18:39:46 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Oct 2018 18:32:09 GMT",
"version": "v2"
}
] | 2018-10-09 | [
[
"Newby",
"Jay M.",
""
],
[
"Schaefer",
"Alison M.",
""
],
[
"Lee",
"Phoebe T.",
""
],
[
"Forest",
"M. Gregory",
""
],
[
"Lai",
"Samuel K.",
""
]
] | Particle tracking is a powerful biophysical tool that requires conversion of large video files into position time series, i.e. traces of the species of interest for data analysis. Current tracking methods, based on a limited set of input parameters to identify bright objects, are ill-equipped to handle the spectrum of spatiotemporal heterogeneity and poor signal-to-noise ratios typically presented by submicron species in complex biological environments. Extensive user involvement is frequently necessary to optimize and execute tracking methods, which is not only inefficient but introduces user bias. To develop a fully automated tracking method, we developed a convolutional neural network for particle localization from image data, comprised of over 6,000 parameters, and employed machine learning techniques to train the network on a diverse portfolio of video conditions. The neural network tracker provides unprecedented automation and accuracy, with exceptionally low false positive and false negative rates on both 2D and 3D simulated videos and 2D experimental videos of difficult-to-track species. |
2301.05057 | Louis Fabrice Tshimanga | Louis Fabrice Tshimanga and Manfredo Atzori and Federico Del Pup and
Maurizio Corbetta | An overview of open source Deep Learning-based libraries for
Neuroscience | null | null | null | null | q-bio.QM cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, deep learning revolutionized machine learning and its
applications, producing results comparable to human experts in several domains,
including neuroscience. Each year, hundreds of scientific publications present
applications of deep neural networks for biomedical data analysis. Due to the
fast growth of the domain, it could be a complicated and extremely
time-consuming task for worldwide researchers to have a clear perspective of
the most recent and advanced software libraries. This work contributes to
clarify the current situation in the domain, outlining the most useful
libraries that implement and facilitate deep learning application to
neuroscience, allowing scientists to identify the most suitable options for
their research or clinical projects. This paper summarizes the main
developments in Deep Learning and their relevance to Neuroscience; it then
reviews neuroinformatic toolboxes and libraries, collected from the literature
and from specific hubs of software projects oriented to neuroscience research.
The selected tools are presented in tables detailing key features grouped by
domain of application (e.g. data type, neuroscience area, task), model
engineering (e.g. programming language, model customization) and technological
aspect (e.g. interface, code source). The results show that, among a high
number of available software tools, several libraries are standing out in terms
of functionalities for neuroscience applications. The aggregation and
discussion of this information can help the neuroscience community to develop
their research projects more efficiently and quickly, both by means of readily
available tools, and by knowing which modules may be improved, connected or
added.
| [
{
"created": "Mon, 19 Dec 2022 09:09:40 GMT",
"version": "v1"
}
] | 2023-01-13 | [
[
"Tshimanga",
"Louis Fabrice",
""
],
[
"Atzori",
"Manfredo",
""
],
[
"Del Pup",
"Federico",
""
],
[
"Corbetta",
"Maurizio",
""
]
] | In recent years, deep learning revolutionized machine learning and its applications, producing results comparable to human experts in several domains, including neuroscience. Each year, hundreds of scientific publications present applications of deep neural networks for biomedical data analysis. Due to the fast growth of the domain, it could be a complicated and extremely time-consuming task for worldwide researchers to have a clear perspective of the most recent and advanced software libraries. This work contributes to clarify the current situation in the domain, outlining the most useful libraries that implement and facilitate deep learning application to neuroscience, allowing scientists to identify the most suitable options for their research or clinical projects. This paper summarizes the main developments in Deep Learning and their relevance to Neuroscience; it then reviews neuroinformatic toolboxes and libraries, collected from the literature and from specific hubs of software projects oriented to neuroscience research. The selected tools are presented in tables detailing key features grouped by domain of application (e.g. data type, neuroscience area, task), model engineering (e.g. programming language, model customization) and technological aspect (e.g. interface, code source). The results show that, among a high number of available software tools, several libraries are standing out in terms of functionalities for neuroscience applications. The aggregation and discussion of this information can help the neuroscience community to develop their research projects more efficiently and quickly, both by means of readily available tools, and by knowing which modules may be improved, connected or added. |
1908.02479 | Tom\'as Revilla | Vlastimil K\v{r}ivan and Tom\'as A. Revilla | Plant coexistence mediated by adaptive foraging preferences of
exploiters or mutualists | 33 pages, 9 figures | null | 10.1016/j.jtbi.2019.08.003 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coexistence of plants depends on their competition for common resources and
indirect interactions mediated by shared exploiters or mutualists. These
interactions are driven either by changes in animal abundance (density-mediated
interactions, e.g., apparent competition), or by changes in animal preferences
for plants (behaviorally-mediated interactions). This article studies effects
of behaviorally-mediated interactions on two plant population dynamics and
animal preference dynamics when animal densities are fixed. Animals can be
either adaptive exploiters or adaptive mutualists (e.g., herbivores or
pollinators) that maximize their fitness. Analysis of the model shows that
adaptive animal preferences for plants can lead to multiple outcomes of plant
coexistence with different levels of specialization or generalism for the
mediator animal species. In particular, exploiter generalism promotes plant
coexistence even when inter-specific competition is too strong to make plant
coexistence possible without exploiters, and mutualist specialization promotes
plant coexistence at alternative stable states when plant inter-specific
competition is weak. Introducing a new concept of generalized isoclines allows
us to fully analyze the model with respect to the strength of competitive
interactions between plants (weak or strong), and the type of interaction
between plants and animals (exploitation or mutualism).
Keywords: behaviorally-mediated interactions, competition for preference,
differential inclusion, generalized isocline, switching, sliding and repelling
regimes.
| [
{
"created": "Wed, 7 Aug 2019 08:05:40 GMT",
"version": "v1"
}
] | 2019-08-12 | [
[
"Křivan",
"Vlastimil",
""
],
[
"Revilla",
"Tomás A.",
""
]
] | Coexistence of plants depends on their competition for common resources and indirect interactions mediated by shared exploiters or mutualists. These interactions are driven either by changes in animal abundance (density-mediated interactions, e.g., apparent competition), or by changes in animal preferences for plants (behaviorally-mediated interactions). This article studies effects of behaviorally-mediated interactions on two plant population dynamics and animal preference dynamics when animal densities are fixed. Animals can be either adaptive exploiters or adaptive mutualists (e.g., herbivores or pollinators) that maximize their fitness. Analysis of the model shows that adaptive animal preferences for plants can lead to multiple outcomes of plant coexistence with different levels of specialization or generalism for the mediator animal species. In particular, exploiter generalism promotes plant coexistence even when inter-specific competition is too strong to make plant coexistence possible without exploiters, and mutualist specialization promotes plant coexistence at alternative stable states when plant inter-specific competition is weak. Introducing a new concept of generalized isoclines allows us to fully analyze the model with respect to the strength of competitive interactions between plants (weak or strong), and the type of interaction between plants and animals (exploitation or mutualism). Keywords: behaviorally-mediated interactions, competition for preference, differential inclusion, generalized isocline, switching, sliding and repelling regimes. |
1602.08113 | Vicente M. Reyes Ph.D. | Srujana Cheguri and Vicente M. Reyes | Representing Rod-Shaped Protein 3D Structures in Cylindrical Coordinates | 40 pages, 14 figures, 1 table | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Based on overall 3D structure, proteins may be grouped into two broad
categories, namely, globular proteins (spheroproteins), and elongated or
rod-shaped proteins (RSP). The former comprises a significant majority of
proteins. This work concerns the second category. Unlike a spheroprotein, an
RSP possesses a conspicuous axis along its longest dimension. To take advantage
of this symmetry element, we decided to represent RSPs using cylindrical
coordinates, (rho, theta, z), with the z-axis as the main axis, and one tip of
the protein at the origin. A "tip" is one of two extreme points in the protein
lying along the protein axis along its longest dimension. We first identify the
two tips, T1 and T2, of the RSP using a protein graphics software, then
determine their (Cartesian) coordinates, (h, k, l) and (m, n, o), respectively.
Arbitrarily selecting T1 as the tip at the origin, we translate the protein by
subtracting (h, k, l) from all structural coordinates. We then find the angle
alpha between vector T1-T2 and the positive z-axis by computing the scalar
product of vectors T1- T2 and OP where P is an arbitrary point along the
positive z-axis. We typically use (0, 0, p) where p is a suitable positive
number. Then we compute the cross product of the two vectors to determine the
axis about which we should rotate vector T1-T2 to make it coincide with the
positive z-axis. We use a matrix form of Rodrigues' formula to do the rotation.
We then apply the Cartesian to cylindrical coordinate transformation equations
to the system. We have applied the above transformation to 15 RSPs: 1QCE, 2JJ7,
2KPE, 3K2A, 3LHP, 2LOE, 2L3H, 2L1P, 1KSG, 1KSJ, 1KSH, 2KOL, 2KZG, 2KPF and
3MQC. We have also created a web server that can take the PDB coordinate file
of an RSP and output its cylindrical coordinates. The URL of our web server
will be announced publicly in due course.
| [
{
"created": "Tue, 15 Dec 2015 04:22:48 GMT",
"version": "v1"
}
] | 2016-02-29 | [
[
"Cheguri",
"Srujana",
""
],
[
"Reyes",
"Vicente M.",
""
]
] | Based on overall 3D structure, proteins may be grouped into two broad categories, namely, globular proteins (spheroproteins), and elongated or rod-shaped proteins (RSP). The former comprises a significant majority of proteins. This work concerns the second category. Unlike a spheroprotein, an RSP possesses a conspicuous axis along its longest dimension. To take advantage of this symmetry element, we decided to represent RSPs using cylindrical coordinates, (rho, theta, z), with the z-axis as the main axis, and one tip of the protein at the origin. A "tip" is one of two extreme points in the protein lying along the protein axis along its longest dimension. We first identify the two tips, T1 and T2, of the RSP using a protein graphics software, then determine their (Cartesian) coordinates, (h, k, l) and (m, n, o), respectively. Arbitrarily selecting T1 as the tip at the origin, we translate the protein by subtracting (h, k, l) from all structural coordinates. We then find the angle alpha between vector T1-T2 and the positive z-axis by computing the scalar product of vectors T1- T2 and OP where P is an arbitrary point along the positive z-axis. We typically use (0, 0, p) where p is a suitable positive number. Then we compute the cross product of the two vectors to determine the axis about which we should rotate vector T1-T2 to make it coincide with the positive z-axis. We use a matrix form of Rodrigues' formula to do the rotation. We then apply the Cartesian to cylindrical coordinate transformation equations to the system. We have applied the above transformation to 15 RSPs: 1QCE, 2JJ7, 2KPE, 3K2A, 3LHP, 2LOE, 2L3H, 2L1P, 1KSG, 1KSJ, 1KSH, 2KOL, 2KZG, 2KPF and 3MQC. We have also created a web server that can take the PDB coordinate file of an RSP and output its cylindrical coordinates. The URL of our web server will be announced publicly in due course. |
2212.05059 | Aurelien Pelissier | Aurelien Pelissier, Miroslav Phan, Niko Beerenwinkel and Maria
Rodriguez Martinez | Practical and scalable simulations of non-Markovian stochastic processes | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Discrete stochastic processes are widespread in natural systems with many
applications across physics, biochemistry, epidemiology, sociology, and
finance. While analytic solutions often cannot be derived, existing simulation
frameworks can generate stochastic trajectories compatible with the dynamical
laws underlying the random phenomena. However, most simulation algorithms
assume the system dynamics are memoryless (Markovian assumption), under which
assumption, future occurrences only depend on the present state of the system.
Mathematically, the Markovian assumption models inter-event times as
exponentially distributed variables, which enables the exact simulation of
stochastic trajectories using the seminal Gillespie algorithm. Unfortunately,
the majority of stochastic systems exhibit properties of memory, an inherently
non-Markovian attribute. Non-Markovian systems are notoriously difficult to
investigate analytically, and existing numerical methods are computationally
costly or only applicable under strong simplifying assumptions, often not
compatible with empirical observations. To address these challenges, we have
developed the Rejection-based Gillespie algorithm for non-Markovian Reactions
(REGIR), a general and scalable framework to simulate non-Markovian stochastic
systems with arbitrary inter-event time distributions. REGIR can achieve
arbitrary user-defined accuracy while maintaining the same asymptotic
computational complexity as the Gillespie algorithm. We illustrate REGIR's
modeling capabilities in three important biochemical systems, namely microbial
growth dynamics, stem cell differentiation, and RNA transcription. In all three
cases, REGIR efficiently models the underlying stochastic processes and
demonstrates its utility to accurately investigate complex non-Markovian
systems. The algorithm is implemented as a python library REGIR.
| [
{
"created": "Fri, 9 Dec 2022 03:55:19 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Dec 2022 03:00:08 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Jan 2023 16:07:11 GMT",
"version": "v3"
}
] | 2023-01-13 | [
[
"Pelissier",
"Aurelien",
""
],
[
"Phan",
"Miroslav",
""
],
[
"Beerenwinkel",
"Niko",
""
],
[
"Martinez",
"Maria Rodriguez",
""
]
] | Discrete stochastic processes are widespread in natural systems with many applications across physics, biochemistry, epidemiology, sociology, and finance. While analytic solutions often cannot be derived, existing simulation frameworks can generate stochastic trajectories compatible with the dynamical laws underlying the random phenomena. However, most simulation algorithms assume the system dynamics are memoryless (Markovian assumption), under which assumption, future occurrences only depend on the present state of the system. Mathematically, the Markovian assumption models inter-event times as exponentially distributed variables, which enables the exact simulation of stochastic trajectories using the seminal Gillespie algorithm. Unfortunately, the majority of stochastic systems exhibit properties of memory, an inherently non-Markovian attribute. Non-Markovian systems are notoriously difficult to investigate analytically, and existing numerical methods are computationally costly or only applicable under strong simplifying assumptions, often not compatible with empirical observations. To address these challenges, we have developed the Rejection-based Gillespie algorithm for non-Markovian Reactions (REGIR), a general and scalable framework to simulate non-Markovian stochastic systems with arbitrary inter-event time distributions. REGIR can achieve arbitrary user-defined accuracy while maintaining the same asymptotic computational complexity as the Gillespie algorithm. We illustrate REGIR's modeling capabilities in three important biochemical systems, namely microbial growth dynamics, stem cell differentiation, and RNA transcription. In all three cases, REGIR efficiently models the underlying stochastic processes and demonstrates its utility to accurately investigate complex non-Markovian systems. The algorithm is implemented as a python library REGIR. |
2402.16185 | Heng Li | Heng Li, Maximillian Marin, Maha Reda Farhat | Exploring gene content with pangene graphs | 9 pages, 7 figures and 2 tables | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Motivation: The gene content regulates the biology of an organism. It varies
between species and between individuals of the same species. Although tools
have been developed to identify gene content changes in bacterial genomes, none
is applicable to collections of large eukaryotic genomes such as the human
pangenome.
Results: We developed pangene, a computational tool to identify gene
orientation, gene order and gene copy-number changes in a collection of
genomes. Pangene aligns a set of input protein sequences to the genomes,
resolves redundancies between protein sequences and constructs a gene graph
with each genome represented as a walk in the graph. It additionally finds
subgraphs, which we call bibubbles, that capture gene content changes. Applied
to the human pangenome, pangene identifies known gene-level variations and
reveals complex haplotypes that are not well studied before. Pangene also works
with high-quality bacterial pangenome and reports similar numbers of core and
accessory genes in comparison to existing tools.
Availability and implementation: Source code at
https://github.com/lh3/pangene; pre-built pangene graphs can be downloaded from
https://zenodo.org/records/8118576 and visualized at
https://pangene.bioinweb.org
| [
{
"created": "Sun, 25 Feb 2024 20:17:18 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2024 18:35:42 GMT",
"version": "v2"
},
{
"created": "Wed, 29 May 2024 00:39:57 GMT",
"version": "v3"
}
] | 2024-05-30 | [
[
"Li",
"Heng",
""
],
[
"Marin",
"Maximillian",
""
],
[
"Farhat",
"Maha Reda",
""
]
] | Motivation: The gene content regulates the biology of an organism. It varies between species and between individuals of the same species. Although tools have been developed to identify gene content changes in bacterial genomes, none is applicable to collections of large eukaryotic genomes such as the human pangenome. Results: We developed pangene, a computational tool to identify gene orientation, gene order and gene copy-number changes in a collection of genomes. Pangene aligns a set of input protein sequences to the genomes, resolves redundancies between protein sequences and constructs a gene graph with each genome represented as a walk in the graph. It additionally finds subgraphs, which we call bibubbles, that capture gene content changes. Applied to the human pangenome, pangene identifies known gene-level variations and reveals complex haplotypes that are not well studied before. Pangene also works with high-quality bacterial pangenome and reports similar numbers of core and accessory genes in comparison to existing tools. Availability and implementation: Source code at https://github.com/lh3/pangene; pre-built pangene graphs can be downloaded from https://zenodo.org/records/8118576 and visualized at https://pangene.bioinweb.org |
q-bio/0505039 | Georgy Karev | Georgy P. Karev | Dynamics of inhomogeneous populations and global demography models | 25 pages, 7 figures; submitted to Journal of Biological Systems | null | null | null | q-bio.PE | null | The dynamic theory of inhomogeneous populations developed during the last
decade predicts several essential new dynamic regimes applicable even to the
well-known, simple population models. We show that, in an inhomogeneous
population with a distributed reproduction coefficient, the entire initial
distribution of the coefficient should be used to investigate real population
dynamics. In the general case, neither the average rate of growth nor the
variance or any finite number of moments of the initial distribution is
sufficient to predict the overall population growth. We developed methods for
solving the heterogeneous models and explored the dynamics of the total
population size together with the reproduction coefficient distribution. We
show that, typically, there exists a phase of hyper-exponential growth that
precedes the well-known exponential phase of population growth in a free
regime. The developed formalism is applied to models of global demography and
the problem of population explosion predicted by the known hyperbolic formula
of world population growth. We prove here that the hyperbolic formula presents
an exact solution to the Malthus model with an exponentially distributed
reproduction coefficient and that population explosion is a corollary of
certain implicit unrealistic assumptions. Alternative models of world
population growth are derived; they show a notable phenomenon, a transition
from protracted hyperbolical growth (the phase of hyper-exponential
development) to the brief transitional phase of exponential growth and,
subsequently, to stabilization. The model solutions are consistent with real
data and produce relatively accurate forecasts.
| [
{
"created": "Fri, 20 May 2005 17:18:14 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Karev",
"Georgy P.",
""
]
] | The dynamic theory of inhomogeneous populations developed during the last decade predicts several essential new dynamic regimes applicable even to the well-known, simple population models. We show that, in an inhomogeneous population with a distributed reproduction coefficient, the entire initial distribution of the coefficient should be used to investigate real population dynamics. In the general case, neither the average rate of growth nor the variance or any finite number of moments of the initial distribution is sufficient to predict the overall population growth. We developed methods for solving the heterogeneous models and explored the dynamics of the total population size together with the reproduction coefficient distribution. We show that, typically, there exists a phase of hyper-exponential growth that precedes the well-known exponential phase of population growth in a free regime. The developed formalism is applied to models of global demography and the problem of population explosion predicted by the known hyperbolic formula of world population growth. We prove here that the hyperbolic formula presents an exact solution to the Malthus model with an exponentially distributed reproduction coefficient and that population explosion is a corollary of certain implicit unrealistic assumptions. Alternative models of world population growth are derived; they show a notable phenomenon, a transition from protracted hyperbolical growth (the phase of hyper-exponential development) to the brief transitional phase of exponential growth and, subsequently, to stabilization. The model solutions are consistent with real data and produce relatively accurate forecasts. |
2008.13118 | Islem Rekik | Mert Lostar and Islem Rekik | Deep Hypergraph U-Net for Brain Graph Embedding and Classification | null | null | null | null | q-bio.NC cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | -Background. Network neuroscience examines the brain as a complex system
represented by a network (or connectome), providing deeper insights into the
brain morphology and function, allowing the identification of atypical brain
connectivity alterations, which can be used as diagnostic markers of
neurological disorders. -Existing Methods. Graph embedding methods which map
data samples (e.g., brain networks) into a low dimensional space have been
widely used to explore the relationship between samples for classification or
prediction tasks. However, the majority of these works are based on modeling
the pair-wise relationships between samples, failing to capture their
higher-order relationships. -New Method. In this paper, inspired by the nascent
field of geometric deep learning, we propose Hypergraph U-Net (HUNet), a novel
data embedding framework leveraging the hypergraph structure to learn
low-dimensional embeddings of data samples while capturing their high-order
relationships. Specifically, we generalize the U-Net architecture, naturally
operating on graphs, to hypergraphs by improving local feature aggregation and
preserving the high-order relationships present in the data. -Results. We
tested our method on small-scale and large-scale heterogeneous brain
connectomic datasets including morphological and functional brain networks of
autistic and demented patients, respectively. -Conclusion. Our HUNet
outperformed state-of-the-art geometric graph and hypergraph data embedding
techniques with a gain of 4-14% in classification accuracy, demonstrating both
scalability and generalizability. HUNet code is available at
https://github.com/basiralab/HUNet.
| [
{
"created": "Sun, 30 Aug 2020 08:15:18 GMT",
"version": "v1"
}
] | 2020-09-01 | [
[
"Lostar",
"Mert",
""
],
[
"Rekik",
"Islem",
""
]
] | -Background. Network neuroscience examines the brain as a complex system represented by a network (or connectome), providing deeper insights into the brain morphology and function, allowing the identification of atypical brain connectivity alterations, which can be used as diagnostic markers of neurological disorders. -Existing Methods. Graph embedding methods which map data samples (e.g., brain networks) into a low dimensional space have been widely used to explore the relationship between samples for classification or prediction tasks. However, the majority of these works are based on modeling the pair-wise relationships between samples, failing to capture their higher-order relationships. -New Method. In this paper, inspired by the nascent field of geometric deep learning, we propose Hypergraph U-Net (HUNet), a novel data embedding framework leveraging the hypergraph structure to learn low-dimensional embeddings of data samples while capturing their high-order relationships. Specifically, we generalize the U-Net architecture, naturally operating on graphs, to hypergraphs by improving local feature aggregation and preserving the high-order relationships present in the data. -Results. We tested our method on small-scale and large-scale heterogeneous brain connectomic datasets including morphological and functional brain networks of autistic and demented patients, respectively. -Conclusion. Our HUNet outperformed state-of-the-art geometric graph and hypergraph data embedding techniques with a gain of 4-14% in classification accuracy, demonstrating both scalability and generalizability. HUNet code is available at https://github.com/basiralab/HUNet. |
2305.09790 | Cuong Nguyen | Cuong Q. Nguyen, Dante Pertusi, Kim M. Branson | Molecule-Morphology Contrastive Pretraining for Transferable Molecular
Representation | ICML 2023 Workshop on Computational Biology | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Image-based profiling techniques have become increasingly popular over the
past decade for their applications in target identification,
mechanism-of-action inference, and assay development. These techniques have
generated large datasets of cellular morphologies, which are typically used to
investigate the effects of small molecule perturbagens. In this work, we extend
the impact of such dataset to improving quantitative structure-activity
relationship (QSAR) models by introducing Molecule-Morphology Contrastive
Pretraining (MoCoP), a framework for learning multi-modal representation of
molecular graphs and cellular morphologies. We scale MoCoP to approximately
100K molecules and 600K morphological profiles using data from the JUMP-CP
Consortium and show that MoCoP consistently improves performances of graph
neural networks (GNNs) on molecular property prediction tasks in ChEMBL20
across all dataset sizes. The pretrained GNNs are also evaluated on internal
GSK pharmacokinetic data and show an average improvement of 2.6% and 6.3% in
AUPRC for full and low data regimes, respectively. Our findings suggest that
integrating cellular morphologies with molecular graphs using MoCoP can
significantly improve the performance of QSAR models, ultimately expanding the
deep learning toolbox available for QSAR applications.
| [
{
"created": "Thu, 27 Apr 2023 02:01:41 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jun 2023 02:24:18 GMT",
"version": "v2"
}
] | 2023-06-28 | [
[
"Nguyen",
"Cuong Q.",
""
],
[
"Pertusi",
"Dante",
""
],
[
"Branson",
"Kim M.",
""
]
] | Image-based profiling techniques have become increasingly popular over the past decade for their applications in target identification, mechanism-of-action inference, and assay development. These techniques have generated large datasets of cellular morphologies, which are typically used to investigate the effects of small molecule perturbagens. In this work, we extend the impact of such dataset to improving quantitative structure-activity relationship (QSAR) models by introducing Molecule-Morphology Contrastive Pretraining (MoCoP), a framework for learning multi-modal representation of molecular graphs and cellular morphologies. We scale MoCoP to approximately 100K molecules and 600K morphological profiles using data from the JUMP-CP Consortium and show that MoCoP consistently improves performances of graph neural networks (GNNs) on molecular property prediction tasks in ChEMBL20 across all dataset sizes. The pretrained GNNs are also evaluated on internal GSK pharmacokinetic data and show an average improvement of 2.6% and 6.3% in AUPRC for full and low data regimes, respectively. Our findings suggest that integrating cellular morphologies with molecular graphs using MoCoP can significantly improve the performance of QSAR models, ultimately expanding the deep learning toolbox available for QSAR applications. |
2101.08385 | Ethan Moyer | Ethan Jacob Moyer and Anup Das | Motif Identification using CNN-based Pairwise Subsequence Alignment
Score Prediction | 7 pages, 4 figures, submitted to the 2021 International Joint
Conference on Neural Networks | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | A common problem in bioinformatics is related to identifying gene regulatory
regions marked by relatively high frequencies of motifs, or deoxyribonucleic
acid sequences that often code for transcription and enhancer proteins.
Predicting alignment scores between subsequence k-mers and a given motif
enables the identification of candidate regulatory regions in a gene, which
correspond to the transcription of these proteins. We propose a one-dimensional
(1-D) Convolution Neural Network trained on k-mer formatted sequences
interspaced with the given motif pattern to predict pairwise alignment scores
between the consensus motif and subsequence k-mers. Our model consists of
fifteen layers with three rounds of a one-dimensional convolution layer, a
batch normalization layer, a dense layer, and a 1-D maximum pooling layer. We
train the model using mean squared error loss on four different data sets each
with a different motif pattern randomly inserted in DNA sequences: the first
three data sets have zero, one, and two mutations applied on each inserted
motif, and the fourth data set represents the inserted motif as a
position-specific probability matrix. We use a novel proposed metric in order
to evaluate the model's performance, $S_{\alpha}$, which is based on the
Jaccard Index. We use 10-fold cross validation to evaluate our model. Using
$S_{\alpha}$, we measure the accuracy of the model by identifying the 15
highest-scoring 15-mer indices of the predicted scores that agree with that of
the actual scores within a selected $\alpha$ region. For the best performing
data set, our results indicate on average 99.3% of the top 15 motifs were
identified correctly within a one base pair stride ($\alpha = 1$) in the out of
sample data. To the best of our knowledge, this is a novel approach that
illustrates how data formatted in an intelligent way can be extrapolated using
machine learning.
| [
{
"created": "Thu, 21 Jan 2021 01:27:42 GMT",
"version": "v1"
}
] | 2021-01-22 | [
[
"Moyer",
"Ethan Jacob",
""
],
[
"Das",
"Anup",
""
]
] | A common problem in bioinformatics is related to identifying gene regulatory regions marked by relatively high frequencies of motifs, or deoxyribonucleic acid sequences that often code for transcription and enhancer proteins. Predicting alignment scores between subsequence k-mers and a given motif enables the identification of candidate regulatory regions in a gene, which correspond to the transcription of these proteins. We propose a one-dimensional (1-D) Convolution Neural Network trained on k-mer formatted sequences interspaced with the given motif pattern to predict pairwise alignment scores between the consensus motif and subsequence k-mers. Our model consists of fifteen layers with three rounds of a one-dimensional convolution layer, a batch normalization layer, a dense layer, and a 1-D maximum pooling layer. We train the model using mean squared error loss on four different data sets each with a different motif pattern randomly inserted in DNA sequences: the first three data sets have zero, one, and two mutations applied on each inserted motif, and the fourth data set represents the inserted motif as a position-specific probability matrix. We use a novel proposed metric in order to evaluate the model's performance, $S_{\alpha}$, which is based on the Jaccard Index. We use 10-fold cross validation to evaluate our model. Using $S_{\alpha}$, we measure the accuracy of the model by identifying the 15 highest-scoring 15-mer indices of the predicted scores that agree with that of the actual scores within a selected $\alpha$ region. For the best performing data set, our results indicate on average 99.3% of the top 15 motifs were identified correctly within a one base pair stride ($\alpha = 1$) in the out of sample data. To the best of our knowledge, this is a novel approach that illustrates how data formatted in an intelligent way can be extrapolated using machine learning. |
1511.09347 | Guy Jacobs | Guy S. Jacobs, Tim J. Sluckin | Long-range dispersal, stochasticity and the broken accelerating wave of
advance | Preprint version (October 2014) of TPB article accepted for
publication December 2014 | Theoretical Population Biology (2015) Vol. 100 Pages 39-55 | 10.1016/j.tpb.2014.12.003 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rare long distance dispersal events are thought to have a disproportionate
impact on the spread of invasive species. Modelling using integrodifference
equations suggests that, when long distance contacts are represented by a
fat-tailed dispersal kernel, an accelerating wave of advance can ensue.
Invasions spreading in this manner could have particularly dramatic effects.
Recently, various authors have suggested that demographic stochasticity
disrupts wave acceleration. Integrodifference models have been widely used in
movement ecology, and as such a clearer understanding of stochastic effects is
needed. Here, we present a stochastic non-linear one-dimensional lattice model
in which demographic stochasticity and the dispersal regime can be
systematically varied. Extensive simulations show that stochasticity has a
profound effect on model behaviour, and usually breaks acceleration for
fat-tailed kernels. Exceptions are seen for some power law kernels, $K(l)
\propto |l|^{-\beta}$ with $\beta < 3$, for which acceleration persists despite
stochasticity. Such kernels lack a second moment and are important in
`accelerating' phenomena such as L\'{e}vy flights. Furthermore, for long-range
kernels the approach to the continuum limit behaviour as stochasticity is
reduced is generally slow. Given that real-world populations are finite,
stochastic models may give better predictive power when long-range dispersal is
important. Insights from mean-field models such as integrodifference equations
should be applied with caution in such circumstances.
| [
{
"created": "Mon, 30 Nov 2015 15:28:27 GMT",
"version": "v1"
}
] | 2015-12-01 | [
[
"Jacobs",
"Guy S.",
""
],
[
"Sluckin",
"Tim J.",
""
]
] | Rare long distance dispersal events are thought to have a disproportionate impact on the spread of invasive species. Modelling using integrodifference equations suggests that, when long distance contacts are represented by a fat-tailed dispersal kernel, an accelerating wave of advance can ensue. Invasions spreading in this manner could have particularly dramatic effects. Recently, various authors have suggested that demographic stochasticity disrupts wave acceleration. Integrodifference models have been widely used in movement ecology, and as such a clearer understanding of stochastic effects is needed. Here, we present a stochastic non-linear one-dimensional lattice model in which demographic stochasticity and the dispersal regime can be systematically varied. Extensive simulations show that stochasticity has a profound effect on model behaviour, and usually breaks acceleration for fat-tailed kernels. Exceptions are seen for some power law kernels, $K(l) \propto |l|^{-\beta}$ with $\beta < 3$, for which acceleration persists despite stochasticity. Such kernels lack a second moment and are important in `accelerating' phenomena such as L\'{e}vy flights. Furthermore, for long-range kernels the approach to the continuum limit behaviour as stochasticity is reduced is generally slow. Given that real-world populations are finite, stochastic models may give better predictive power when long-range dispersal is important. Insights from mean-field models such as integrodifference equations should be applied with caution in such circumstances. |
1803.04915 | Seunghyeon Kim | Seunghyeon Kim, Michael F. Fenech, Pan-Jun Kim | Nutritionally recommended food for semi- to strict vegetarian diets
based on large-scale nutrient composition data | Supplementary material is available at the journal website. Source
codes are available at http://panjunkim.com | Scientific Reports 8:4344 (2018) | 10.1038/s41598-018-22691-1 | null | q-bio.OT cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diet design for vegetarian health is challenging due to the limited food
repertoire of vegetarians. This challenge can be partially overcome by
quantitative, data-driven approaches that utilise massive nutritional
information collected for many different foods. Based on large-scale data of
foods' nutrient compositions, the recent concept of nutritional fitness helps
quantify a nutrient balance within each food with regard to satisfying daily
nutritional requirements. Nutritional fitness offers prioritisation of
recommended foods using the foods' occurrence in nutritionally adequate food
combinations. Here, we systematically identify nutritionally recommendable
foods for semi- to strict vegetarian diets through the computation of
nutritional fitness. Along with commonly recommendable foods across different
diets, our analysis reveals favourable foods specific to each diet, such as
immature lima beans for a vegan diet as an amino acid and choline source, and
mushrooms for ovo-lacto vegetarian and vegan diets as a vitamin D source.
Furthermore, we find that selenium and other essential micronutrients can be
subject to deficiency in plant-based diets, and suggest nutritionally-desirable
dietary patterns. We extend our analysis to two hypothetical scenarios of
highly personalised, plant-based methionine-restricted diets. Our
nutrient-profiling approach may provide a useful guide for designing different
types of personalised vegetarian diets.
| [
{
"created": "Mon, 12 Mar 2018 11:19:57 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Sep 2018 03:50:01 GMT",
"version": "v2"
}
] | 2018-09-07 | [
[
"Kim",
"Seunghyeon",
""
],
[
"Fenech",
"Michael F.",
""
],
[
"Kim",
"Pan-Jun",
""
]
] | Diet design for vegetarian health is challenging due to the limited food repertoire of vegetarians. This challenge can be partially overcome by quantitative, data-driven approaches that utilise massive nutritional information collected for many different foods. Based on large-scale data of foods' nutrient compositions, the recent concept of nutritional fitness helps quantify a nutrient balance within each food with regard to satisfying daily nutritional requirements. Nutritional fitness offers prioritisation of recommended foods using the foods' occurrence in nutritionally adequate food combinations. Here, we systematically identify nutritionally recommendable foods for semi- to strict vegetarian diets through the computation of nutritional fitness. Along with commonly recommendable foods across different diets, our analysis reveals favourable foods specific to each diet, such as immature lima beans for a vegan diet as an amino acid and choline source, and mushrooms for ovo-lacto vegetarian and vegan diets as a vitamin D source. Furthermore, we find that selenium and other essential micronutrients can be subject to deficiency in plant-based diets, and suggest nutritionally-desirable dietary patterns. We extend our analysis to two hypothetical scenarios of highly personalised, plant-based methionine-restricted diets. Our nutrient-profiling approach may provide a useful guide for designing different types of personalised vegetarian diets. |
q-bio/0602022 | Pablo Echenique | Pablo Echenique, J. L. Alonso and Ivan Calvo | Effects of constraints in general branched molecules: A quantitative ab
initio study in HCO-L-Ala-NH2 | 8 pages, 1 figure, LaTeX, aipproc style (included) | in the BIFI 2006 II International Congress Proceedings, edited by
J. Clemente-Gallardo et al., vol. 851, pp. 108-116, AIP Conference
Proceedings, Melville, New York, 2006 | 10.1063/1.2345627 | null | q-bio.BM cond-mat.soft | null | A general approach to the design of accurate classical potentials for protein
folding is described. It includes the introduction of a meaningful statistical
measure of the differences between approximations of the same potential energy,
the definition of a set of Systematic and Approximately Separable and Modular
Internal Coordinates (SASMIC), much convenient for the simulation of general
branched molecules, and the imposition of constraints on the most rapidly
oscillating degrees of freedom. All these tools are used to study the effects
of constraints in the Conformational Equilibrium Distribution (CED) of the
model dipeptide HCO-L-Ala-NH2. We use ab initio Quantum Mechanics calculations
including electron correlation at the MP2 level to describe the system, and we
measure the conformational dependence of the correcting terms to the naive CED
based in the Potential Energy Surface (PES) without any simplifying assumption.
These terms are related to mass-metric tensors determinants and also occur in
the Fixman's compensating potential. We show that some of the corrections are
non-negligible if one is interested in the whole Ramachandran space. On the
other hand, if only the energetically lower region, containing the principal
secondary structure elements, is assumed to be relevant, then, all correcting
terms may be neglected up to peptides of considerable length. This is the first
time, as far as we know, that the analysis of the conformational dependence of
these correcting terms is performed in a relevant biomolecule with a realistic
potential energy function.
| [
{
"created": "Wed, 22 Feb 2006 17:05:10 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Dec 2006 17:34:50 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Echenique",
"Pablo",
""
],
[
"Alonso",
"J. L.",
""
],
[
"Calvo",
"Ivan",
""
]
] | A general approach to the design of accurate classical potentials for protein folding is described. It includes the introduction of a meaningful statistical measure of the differences between approximations of the same potential energy, the definition of a set of Systematic and Approximately Separable and Modular Internal Coordinates (SASMIC), much convenient for the simulation of general branched molecules, and the imposition of constraints on the most rapidly oscillating degrees of freedom. All these tools are used to study the effects of constraints in the Conformational Equilibrium Distribution (CED) of the model dipeptide HCO-L-Ala-NH2. We use ab initio Quantum Mechanics calculations including electron correlation at the MP2 level to describe the system, and we measure the conformational dependence of the correcting terms to the naive CED based in the Potential Energy Surface (PES) without any simplifying assumption. These terms are related to mass-metric tensors determinants and also occur in the Fixman's compensating potential. We show that some of the corrections are non-negligible if one is interested in the whole Ramachandran space. On the other hand, if only the energetically lower region, containing the principal secondary structure elements, is assumed to be relevant, then, all correcting terms may be neglected up to peptides of considerable length. This is the first time, as far as we know, that the analysis of the conformational dependence of these correcting terms is performed in a relevant biomolecule with a realistic potential energy function. |
1807.01243 | Sitabhra Sinha | Tanmay Mitra, Shakti N. Menon and Sitabhra Sinha | Non-associative learning in intra-cellular signaling networks | 6 pages, 2 figures + 5 pages supplementary information | null | null | null | q-bio.SC nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonlinear systems driven by recurrent signals are known to exhibit complex
dynamical responses which, in the physiological context, can have important
functional consequences. One of the simplest biological systems that is exposed
to such repeated stimuli is the intra-cellular signaling network. In this paper
we investigate the periodic activation of an evolutionarily conserved motif of
this network, viz., the mitogen-activated protein kinase (MAPK) signaling
cascade, with a train of pulses. The resulting response of the cascade, which
shows integrative capability over several successive pulses, is characterized
by complex adaptive behavior. These include aspects of non-associative
learning, in particular, habituation and sensitization, which are observed in
response to high- and low-frequency stimulation, respectively. In addition, the
existence of a response threshold of the cascade, an apparent refractory
behavior following stimulation with short inter-pulse interval, and an
alternans-like response under certain conditions suggest an analogy with
excitable media.
| [
{
"created": "Tue, 3 Jul 2018 15:41:45 GMT",
"version": "v1"
}
] | 2018-07-04 | [
[
"Mitra",
"Tanmay",
""
],
[
"Menon",
"Shakti N.",
""
],
[
"Sinha",
"Sitabhra",
""
]
] | Nonlinear systems driven by recurrent signals are known to exhibit complex dynamical responses which, in the physiological context, can have important functional consequences. One of the simplest biological systems that is exposed to such repeated stimuli is the intra-cellular signaling network. In this paper we investigate the periodic activation of an evolutionarily conserved motif of this network, viz., the mitogen-activated protein kinase (MAPK) signaling cascade, with a train of pulses. The resulting response of the cascade, which shows integrative capability over several successive pulses, is characterized by complex adaptive behavior. These include aspects of non-associative learning, in particular, habituation and sensitization, which are observed in response to high- and low-frequency stimulation, respectively. In addition, the existence of a response threshold of the cascade, an apparent refractory behavior following stimulation with short inter-pulse interval, and an alternans-like response under certain conditions suggest an analogy with excitable media. |
1706.00382 | Cengiz Pehlevan | Cengiz Pehlevan, Sreyas Mohan, Dmitri B. Chklovskii | Blind nonnegative source separation using biological neural networks | Accepted for publication in Neural Computation | null | 10.1162/neco_a_01007 | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blind source separation, i.e. extraction of independent sources from a
mixture, is an important problem for both artificial and natural signal
processing. Here, we address a special case of this problem when sources (but
not the mixing matrix) are known to be nonnegative, for example, due to the
physical nature of the sources. We search for the solution to this problem that
can be implemented using biologically plausible neural networks. Specifically,
we consider the online setting where the dataset is streamed to a neural
network. The novelty of our approach is that we formulate blind nonnegative
source separation as a similarity matching problem and derive neural networks
from the similarity matching objective. Importantly, synaptic weights in our
networks are updated according to biologically plausible local learning rules.
| [
{
"created": "Thu, 1 Jun 2017 16:50:09 GMT",
"version": "v1"
}
] | 2017-10-20 | [
[
"Pehlevan",
"Cengiz",
""
],
[
"Mohan",
"Sreyas",
""
],
[
"Chklovskii",
"Dmitri B.",
""
]
] | Blind source separation, i.e. extraction of independent sources from a mixture, is an important problem for both artificial and natural signal processing. Here, we address a special case of this problem when sources (but not the mixing matrix) are known to be nonnegative, for example, due to the physical nature of the sources. We search for the solution to this problem that can be implemented using biologically plausible neural networks. Specifically, we consider the online setting where the dataset is streamed to a neural network. The novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. Importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules. |
1811.09718 | Zheng Zhao | Zheng Zhao and Philip E. Bourne | Overview of Current Type I/II Kinase Inhibitors | 26 pages; 7 figures; 1 table | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Research on kinase-targeting drugs has made great strides over the last 30
years and is attracting greater attention for the treatment of yet more
kinase-related diseases. Currently, 42 kinase drugs have been approved by the
FDA, most of which (39) are Type I/II inhibitors. Notwithstanding these
advances, it is desirable to target additional kinases for drug development as
more than 200 diseases, particularly cancers, are directly associated with
aberrant kinase regulation and signaling. Here, we review the extant Type I/II
drugs systematically to obtain insights into the binding pocket
characteristics, the associated features of Type I/II drugs, and the mechanism
of action to facilitate future kinase drug design and discovery. We conclude by
summarizing the main successes and limitations of targeting kinase for the
development of drugs.
| [
{
"created": "Fri, 23 Nov 2018 22:15:45 GMT",
"version": "v1"
}
] | 2018-11-27 | [
[
"Zhao",
"Zheng",
""
],
[
"Bourne",
"Philip E.",
""
]
] | Research on kinase-targeting drugs has made great strides over the last 30 years and is attracting greater attention for the treatment of yet more kinase-related diseases. Currently, 42 kinase drugs have been approved by the FDA, most of which (39) are Type I/II inhibitors. Notwithstanding these advances, it is desirable to target additional kinases for drug development as more than 200 diseases, particularly cancers, are directly associated with aberrant kinase regulation and signaling. Here, we review the extant Type I/II drugs systematically to obtain insights into the binding pocket characteristics, the associated features of Type I/II drugs, and the mechanism of action to facilitate future kinase drug design and discovery. We conclude by summarizing the main successes and limitations of targeting kinase for the development of drugs. |
2108.08975 | Karin Knudson | Karin C. Knudson, Anoopum S. Gupta | Assessing Cerebellar Disorders With Wearable Inertial Sensor Data Using
Time-Frequency and Autoregressive Hidden Markov Model Approaches | 12 pages, 7 figures | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use autoregressive hidden Markov models and a time-frequency approach to
create meaningful quantitative descriptions of behavioral characteristics of
cerebellar ataxias from wearable inertial sensor data gathered during movement.
Wearable sensor data is relatively easily collected and provides direct
measurements of movement that can be used to develop useful behavioral
biomarkers. Sensitive and specific behavioral biomarkers for neurodegenerative
diseases are critical to supporting early detection, drug development efforts,
and targeted treatments. We create a flexible and descriptive set of features
derived from accelerometer and gyroscope data collected from wearable sensors
while participants perform clinical assessment tasks, and with them estimate
disease status and severity. A short period of data collection ($<$ 5 minutes)
yields enough information to effectively separate patients with ataxia from
healthy controls with very high accuracy, to separate ataxia from other
neurodegenerative diseases such as Parkinson's disease, and to give estimates
of disease severity.
| [
{
"created": "Fri, 20 Aug 2021 02:45:14 GMT",
"version": "v1"
}
] | 2021-08-23 | [
[
"Knudson",
"Karin C.",
""
],
[
"Gupta",
"Anoopum S.",
""
]
] | We use autoregressive hidden Markov models and a time-frequency approach to create meaningful quantitative descriptions of behavioral characteristics of cerebellar ataxias from wearable inertial sensor data gathered during movement. Wearable sensor data is relatively easily collected and provides direct measurements of movement that can be used to develop useful behavioral biomarkers. Sensitive and specific behavioral biomarkers for neurodegenerative diseases are critical to supporting early detection, drug development efforts, and targeted treatments. We create a flexible and descriptive set of features derived from accelerometer and gyroscope data collected from wearable sensors while participants perform clinical assessment tasks, and with them estimate disease status and severity. A short period of data collection ($<$ 5 minutes) yields enough information to effectively separate patients with ataxia from healthy controls with very high accuracy, to separate ataxia from other neurodegenerative diseases such as Parkinson's disease, and to give estimates of disease severity. |