id
stringlengths 9
13
| submitter
stringlengths 4
48
| authors
stringlengths 4
9.62k
| title
stringlengths 4
343
| comments
stringlengths 2
480
⌀ | journal-ref
stringlengths 9
309
⌀ | doi
stringlengths 12
138
⌀ | report-no
stringclasses 277
values | categories
stringlengths 8
87
| license
stringclasses 9
values | orig_abstract
stringlengths 27
3.76k
| versions
listlengths 1
15
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
147
| abstract
stringlengths 24
3.75k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2401.15478
|
Anthony Gitter
|
Daniel McNeela, Frederic Sala, Anthony Gitter
|
Product Manifold Representations for Learning on Biological Pathways
|
28 pages, 19 figures
| null | null | null |
q-bio.QM cs.LG q-bio.MN
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning models that embed graphs in non-Euclidean spaces have shown
substantial benefits in a variety of contexts, but their application has not
been studied extensively in the biological domain, particularly with respect to
biological pathway graphs. Such graphs exhibit a variety of complex network
structures, presenting challenges to existing embedding approaches. Learning
high-quality embeddings for biological pathway graphs is important for
researchers looking to understand the underpinnings of disease and train
high-quality predictive models on these networks. In this work, we investigate
the effects of embedding pathway graphs in non-Euclidean mixed-curvature spaces
and compare against traditional Euclidean graph representation learning models.
We then train a supervised model using the learned node embeddings to predict
missing protein-protein interactions in pathway graphs. We find large
reductions in distortion and boosts on in-distribution edge prediction
performance as a result of using mixed-curvature embeddings and their
corresponding graph neural network models. However, we find that
mixed-curvature representations underperform existing baselines on
out-of-distribution edge prediction performance suggesting that these
representations may overfit to the training graph topology. We provide our
mixed-curvature product GCN code at
https://github.com/mcneela/Mixed-Curvature-GCN and our pathway analysis code at
https://github.com/mcneela/Mixed-Curvature-Pathways.
|
[
{
"created": "Sat, 27 Jan 2024 18:46:19 GMT",
"version": "v1"
}
] |
2024-01-30
|
[
[
"McNeela",
"Daniel",
""
],
[
"Sala",
"Frederic",
""
],
[
"Gitter",
"Anthony",
""
]
] |
Machine learning models that embed graphs in non-Euclidean spaces have shown substantial benefits in a variety of contexts, but their application has not been studied extensively in the biological domain, particularly with respect to biological pathway graphs. Such graphs exhibit a variety of complex network structures, presenting challenges to existing embedding approaches. Learning high-quality embeddings for biological pathway graphs is important for researchers looking to understand the underpinnings of disease and train high-quality predictive models on these networks. In this work, we investigate the effects of embedding pathway graphs in non-Euclidean mixed-curvature spaces and compare against traditional Euclidean graph representation learning models. We then train a supervised model using the learned node embeddings to predict missing protein-protein interactions in pathway graphs. We find large reductions in distortion and boosts on in-distribution edge prediction performance as a result of using mixed-curvature embeddings and their corresponding graph neural network models. However, we find that mixed-curvature representations underperform existing baselines on out-of-distribution edge prediction performance suggesting that these representations may overfit to the training graph topology. We provide our mixed-curvature product GCN code at https://github.com/mcneela/Mixed-Curvature-GCN and our pathway analysis code at https://github.com/mcneela/Mixed-Curvature-Pathways.
|
1712.02289
|
Massimo Cencini Dr.
|
Massimo Cencini and Simone Pigolotti
|
Energetic funnel facilitates facilitated diffusion
|
10 pages, Nucleic Acids Research in press. Supplementary information
available from the Journal (at
https://academic.oup.com/nar/advance-article-abstract/doi/10.1093/nar/gkx1220/4690584)
| null |
10.1093/nar/gkx1220
| null |
q-bio.BM physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transcription factors are able to associate to their binding sites on DNA
faster than the physical limit posed by diffusion. Such high association rates
can be achieved by alternating between three-dimensional diffusion and
one-dimensional sliding along the DNA chain, a mechanism dubbed Facilitated
Diffusion. By studying a collection of transcription factor binding sites of
Escherichia coli from the RegulonDB database and of Bacillus subtilis from
DBTBS, we reveal a funnel in the binding energy landscape around the target
sequences. We show that such a funnel is linked to the presence of gradients of
AT in the base composition in the DNA region around the binding sites. An
extensive computational study of the stochastic sliding process along the
energetic landscapes obtained from the database shows that the funnel can
significantly enhance the probability of transcription factors to find their
target sequences when sliding in their proximity. We demonstrate that this
enhancement leads to a speed-up of the association process.
|
[
{
"created": "Wed, 6 Dec 2017 17:13:01 GMT",
"version": "v1"
}
] |
2017-12-07
|
[
[
"Cencini",
"Massimo",
""
],
[
"Pigolotti",
"Simone",
""
]
] |
Transcription factors are able to associate to their binding sites on DNA faster than the physical limit posed by diffusion. Such high association rates can be achieved by alternating between three-dimensional diffusion and one-dimensional sliding along the DNA chain, a mechanism dubbed Facilitated Diffusion. By studying a collection of transcription factor binding sites of Escherichia coli from the RegulonDB database and of Bacillus subtilis from DBTBS, we reveal a funnel in the binding energy landscape around the target sequences. We show that such a funnel is linked to the presence of gradients of AT in the base composition in the DNA region around the binding sites. An extensive computational study of the stochastic sliding process along the energetic landscapes obtained from the database shows that the funnel can significantly enhance the probability of transcription factors to find their target sequences when sliding in their proximity. We demonstrate that this enhancement leads to a speed-up of the association process.
|
2203.02493
|
Michael Beyeler
|
Ashley Bruce and Michael Beyeler
|
Greedy Optimization of Electrode Arrangement for Epiretinal Prostheses
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Visual neuroprostheses are the only FDA-approved technology for the treatment
of retinal degenerative blindness. Although recent work has demonstrated a
systematic relationship between electrode location and the shape of the
elicited visual percept, this knowledge has yet to be incorporated into retinal
prosthesis design, where electrodes are typically arranged on either a
rectangular or hexagonal grid. Here we optimize the intraocular placement of
epiretinal electrodes using dictionary learning. Importantly, the optimization
process is informed by a previously established and psychophysically validated
model of simulated prosthetic vision. We systematically evaluate three
different electrode placement strategies across a wide range of possible
phosphene shapes and recommend electrode arrangements that maximize visual
subfield coverage. In the near future, our work may guide the prototyping of
next-generation neuroprostheses.
|
[
{
"created": "Fri, 4 Mar 2022 18:45:43 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jun 2022 23:10:23 GMT",
"version": "v2"
}
] |
2022-07-04
|
[
[
"Bruce",
"Ashley",
""
],
[
"Beyeler",
"Michael",
""
]
] |
Visual neuroprostheses are the only FDA-approved technology for the treatment of retinal degenerative blindness. Although recent work has demonstrated a systematic relationship between electrode location and the shape of the elicited visual percept, this knowledge has yet to be incorporated into retinal prosthesis design, where electrodes are typically arranged on either a rectangular or hexagonal grid. Here we optimize the intraocular placement of epiretinal electrodes using dictionary learning. Importantly, the optimization process is informed by a previously established and psychophysically validated model of simulated prosthetic vision. We systematically evaluate three different electrode placement strategies across a wide range of possible phosphene shapes and recommend electrode arrangements that maximize visual subfield coverage. In the near future, our work may guide the prototyping of next-generation neuroprostheses.
|
1803.03189
|
Gary Wilk
|
Gary Wilk, Rosemary Braun
|
Single nucleotide polymorphisms that modulate microRNA regulation of
gene expression in tumors
| null | null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genome-wide association studies (GWAS) have identified single nucleotide
polymorphisms (SNPs) associated with trait diversity and disease
susceptibility, yet the functional properties of many genetic variants and
their molecular interactions remains unclear. It has been hypothesized that
SNPs in microRNA binding sites may disrupt gene regulation by microRNAs
(miRNAs), short non-coding RNAs that bind to mRNA and downregulate the target
gene. While a number of studies have been conducted to predict the location of
SNPs in miRNA binding sites, to date there has been no comprehensive analysis
of how SNP variants may impact miRNA regulation of genes. Here we investigate
the functional properties of genetic variants and their effects on miRNA
regulation of gene expression in cancer. Our analysis is motivated by the
hypothesis that distinct alleles may cause differential binding (from miRNAs to
mRNAs or from transcription factors to DNA) and change the expression of genes.
We previously identified pathways--systems of genes conferring specific cell
functions--that are dysregulated by miRNAs in cancer, by comparing
miRNA-pathway associations between healthy and tumor tissue. We draw on these
results as a starting point to assess whether SNPs in genes on dysregulated
pathways are responsible for miRNA dysregulation of individual genes in tumors.
Using an integrative analysis that incorporates miRNA expression, mRNA
expression, and SNP genotype data, we identify SNPs that appear to influence
the association between miRNAs and genes, which we term "regulatory QTLs
(regQTLs)": loci whose alleles impact the regulation of genes by miRNAs. We
describe the method, apply it to analyze four cancer types (breast, liver,
lung, prostate) using data from The Cancer Genome Atlas (TCGA), and provide a
tool to explore the findings.
|
[
{
"created": "Thu, 8 Mar 2018 16:31:51 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Mar 2018 14:13:43 GMT",
"version": "v2"
}
] |
2018-03-21
|
[
[
"Wilk",
"Gary",
""
],
[
"Braun",
"Rosemary",
""
]
] |
Genome-wide association studies (GWAS) have identified single nucleotide polymorphisms (SNPs) associated with trait diversity and disease susceptibility, yet the functional properties of many genetic variants and their molecular interactions remains unclear. It has been hypothesized that SNPs in microRNA binding sites may disrupt gene regulation by microRNAs (miRNAs), short non-coding RNAs that bind to mRNA and downregulate the target gene. While a number of studies have been conducted to predict the location of SNPs in miRNA binding sites, to date there has been no comprehensive analysis of how SNP variants may impact miRNA regulation of genes. Here we investigate the functional properties of genetic variants and their effects on miRNA regulation of gene expression in cancer. Our analysis is motivated by the hypothesis that distinct alleles may cause differential binding (from miRNAs to mRNAs or from transcription factors to DNA) and change the expression of genes. We previously identified pathways--systems of genes conferring specific cell functions--that are dysregulated by miRNAs in cancer, by comparing miRNA-pathway associations between healthy and tumor tissue. We draw on these results as a starting point to assess whether SNPs in genes on dysregulated pathways are responsible for miRNA dysregulation of individual genes in tumors. Using an integrative analysis that incorporates miRNA expression, mRNA expression, and SNP genotype data, we identify SNPs that appear to influence the association between miRNAs and genes, which we term "regulatory QTLs (regQTLs)": loci whose alleles impact the regulation of genes by miRNAs. We describe the method, apply it to analyze four cancer types (breast, liver, lung, prostate) using data from The Cancer Genome Atlas (TCGA), and provide a tool to explore the findings.
|
2310.20031
|
Nicol\`o Cogno
|
Nicol\`o Cogno, Cristian Axenie, Roman Bauer and Vasileios Vavourakis
|
Recipes for calibration and validation of agent-based models in cancer
biomedicine
| null | null | null | null |
q-bio.TO cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Computational models and simulations are not just appealing because of their
intrinsic characteristics across spatiotemporal scales, scalability, and
predictive power, but also because the set of problems in cancer biomedicine
that can be addressed computationally exceeds the set of those amenable to
analytical solutions. Agent-based models and simulations are especially
interesting candidates among computational modelling strategies in cancer
research due to their capabilities to replicate realistic local and global
interaction dynamics at a convenient and relevant scale. Yet, the absence of
methods to validate the consistency of the results across scales can hinder
adoption by turning fine-tuned models into black boxes. This review compiles
relevant literature to explore strategies to leverage high-fidelity simulations
of multi-scale, or multi-level, cancer models with a focus on validation
approached as simulation calibration. We argue that simulation calibration goes
beyond parameter optimization by embedding informative priors to generate
plausible parameter configurations across multiple dimensions.
|
[
{
"created": "Mon, 30 Oct 2023 21:29:54 GMT",
"version": "v1"
}
] |
2023-11-01
|
[
[
"Cogno",
"Nicolò",
""
],
[
"Axenie",
"Cristian",
""
],
[
"Bauer",
"Roman",
""
],
[
"Vavourakis",
"Vasileios",
""
]
] |
Computational models and simulations are not just appealing because of their intrinsic characteristics across spatiotemporal scales, scalability, and predictive power, but also because the set of problems in cancer biomedicine that can be addressed computationally exceeds the set of those amenable to analytical solutions. Agent-based models and simulations are especially interesting candidates among computational modelling strategies in cancer research due to their capabilities to replicate realistic local and global interaction dynamics at a convenient and relevant scale. Yet, the absence of methods to validate the consistency of the results across scales can hinder adoption by turning fine-tuned models into black boxes. This review compiles relevant literature to explore strategies to leverage high-fidelity simulations of multi-scale, or multi-level, cancer models with a focus on validation approached as simulation calibration. We argue that simulation calibration goes beyond parameter optimization by embedding informative priors to generate plausible parameter configurations across multiple dimensions.
|
2007.14941
|
Sitabhra Sinha
|
Anand Pathak, Shakti N. Menon and Sitabhra Sinha
|
Mesoscopic architecture enhances communication across the Macaque
connectome revealing structure-function correspondence in the brain
|
13 pages, 3 figures + 27 pages Supplementary Information
|
Phys. Rev. E 106, 054304 (2022)
|
10.1103/PhysRevE.106.054304
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analyzing the brain in terms of organizational structures at intermediate
scales provides an approach to negotiate the complexity arising from
interactions between its large number of components. Focusing on a wiring
diagram that spans the cortex, basal ganglia and thalamus of the Macaque brain,
we provide a mesoscopic-level description of the topological architecture of
one of the most well-studied mammalian connectomes. The robust modules we
identify each comprise densely inter-connected cortical and sub-cortical areas
that play complementary roles in executing specific cognitive functions. We
find that physical proximity between areas is insufficient to explain the
modular organization, as similar mesoscopic structures can be obtained even
after factoring out the effect of distance constraints on the connectivity. We
observe that the distribution profile of brain areas, classified in terms of
their intra- and inter-modular connectivity, is conserved across the principal
cortical subdivisions, as well as, sub-cortical structures. In particular
provincial hubs, which have significantly higher number of connections with
members of their module, but relatively less well-connected to other modules,
are the only class that exhibits homophily, i.e., a discernible preference to
connect to each other. By considering a process of diffusive propagation we
demonstrate that this architecture, instead of localizing the activity,
facilitates rapid communication across the connectome. By supplementing the
topological information about the Macaque connectome with physical locations,
volumes and functions of the constituent areas and analyzing this augmented
dataset, we reveal a counter-intuitive role played by the modular architecture
of the brain in promoting global interaction.
|
[
{
"created": "Wed, 29 Jul 2020 16:34:47 GMT",
"version": "v1"
}
] |
2023-01-10
|
[
[
"Pathak",
"Anand",
""
],
[
"Menon",
"Shakti N.",
""
],
[
"Sinha",
"Sitabhra",
""
]
] |
Analyzing the brain in terms of organizational structures at intermediate scales provides an approach to negotiate the complexity arising from interactions between its large number of components. Focusing on a wiring diagram that spans the cortex, basal ganglia and thalamus of the Macaque brain, we provide a mesoscopic-level description of the topological architecture of one of the most well-studied mammalian connectomes. The robust modules we identify each comprise densely inter-connected cortical and sub-cortical areas that play complementary roles in executing specific cognitive functions. We find that physical proximity between areas is insufficient to explain the modular organization, as similar mesoscopic structures can be obtained even after factoring out the effect of distance constraints on the connectivity. We observe that the distribution profile of brain areas, classified in terms of their intra- and inter-modular connectivity, is conserved across the principal cortical subdivisions, as well as, sub-cortical structures. In particular provincial hubs, which have significantly higher number of connections with members of their module, but relatively less well-connected to other modules, are the only class that exhibits homophily, i.e., a discernible preference to connect to each other. By considering a process of diffusive propagation we demonstrate that this architecture, instead of localizing the activity, facilitates rapid communication across the connectome. By supplementing the topological information about the Macaque connectome with physical locations, volumes and functions of the constituent areas and analyzing this augmented dataset, we reveal a counter-intuitive role played by the modular architecture of the brain in promoting global interaction.
|
2012.12486
|
Valentin Slepukhin
|
Valentin M. Slepukhin (1), Sufyan Ashhad (2), Jack L. Feldman (2),
Alex J. Levine (1,3 and 4) ((1) Department of Physics and Astronomy, UCLA,
(2) Systems Neurobiology Laboratory, Department of Neurobiology, David Geffen
School of Medicine, UCLA, (3) Department of Chemistry and Biochemistry, UCLA,
(4), Department of Biomathematics, UCLA)
|
Microcircuit synchronization and heavy tailed synaptic weight
distribution in preB\"otzinger Complex contribute to generation of breathing
rhythm
|
47 pages, 10 figures
| null | null | null |
q-bio.NC cond-mat.dis-nn nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The preB\"otzinger Complex, the mammalian inspiratory rhythm generator,
encodes inspiratory time as motor pattern. Spike synchronization throughout
this sparsely connected network generates inspiratory bursts albeit with
variable latencies after preinspiratory activity onset in each breathing cycle.
Using preB\"otC rhythmogenic microcircuit minimal models, we examined the
variability in probability and latency to burst, mimicking experiments. Among
various physiologically plausible graphs of 1000 point neurons with
experimentally determined neuronal and synaptic parameters, directed
Erd\H{o}s-R\'enyi graphs best captured the experimentally observed dynamics.
Mechanistically, preB\"otC (de)synchronization and oscillatory dynamics are
regulated by the efferent connectivity of spiking neurons that gates the
amplification of modest preinspiratory activity through input convergence.
Furthermore, to replicate experiments, a lognormal distribution of synaptic
weights was necessary to augment the efficacy of convergent coincident inputs.
These mechanisms enable exceptionally robust yet flexible preB\"otC attractor
dynamics that, we postulate, represent universal temporal-processing and
decision-making computational motifs throughout the brain.
|
[
{
"created": "Wed, 23 Dec 2020 05:01:06 GMT",
"version": "v1"
}
] |
2020-12-24
|
[
[
"Slepukhin",
"Valentin M.",
"",
"1,3 and 4"
],
[
"Ashhad",
"Sufyan",
"",
"1,3 and 4"
],
[
"Feldman",
"Jack L.",
"",
"1,3 and 4"
],
[
"Levine",
"Alex J.",
"",
"1,3 and 4"
]
] |
The preB\"otzinger Complex, the mammalian inspiratory rhythm generator, encodes inspiratory time as motor pattern. Spike synchronization throughout this sparsely connected network generates inspiratory bursts albeit with variable latencies after preinspiratory activity onset in each breathing cycle. Using preB\"otC rhythmogenic microcircuit minimal models, we examined the variability in probability and latency to burst, mimicking experiments. Among various physiologically plausible graphs of 1000 point neurons with experimentally determined neuronal and synaptic parameters, directed Erd\H{o}s-R\'enyi graphs best captured the experimentally observed dynamics. Mechanistically, preB\"otC (de)synchronization and oscillatory dynamics are regulated by the efferent connectivity of spiking neurons that gates the amplification of modest preinspiratory activity through input convergence. Furthermore, to replicate experiments, a lognormal distribution of synaptic weights was necessary to augment the efficacy of convergent coincident inputs. These mechanisms enable exceptionally robust yet flexible preB\"otC attractor dynamics that, we postulate, represent universal temporal-processing and decision-making computational motifs throughout the brain.
|
1602.00444
|
Erik Henningsson
|
Stefan Diehl, Erik Henningsson, Anders Heyden
|
Efficient simulations of tubulin-driven axonal growth
|
Authors' accepted version, (post refereeing). The final publication
(in Journal of Computational Neuroscience) is available at Springer via
http://dx.doi.org/10.1007/s10827-016-0604-x
|
J. Comput. Neurosci. 41(1) (2016) 45--63
|
10.1007/s10827-016-0604-x
| null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work concerns efficient and reliable numerical simulations of the
dynamic behaviour of a moving-boundary model for tubulin-driven axonal growth.
The model is nonlinear and consists of a coupled set of a partial differential
equation (PDE) and two ordinary differential equations. The PDE is defined on a
computational domain with a moving boundary, which is part of the solution.
Numerical simulations based on standard explicit time-stepping methods are too
time consuming due to the small time steps required for numerical stability. On
the other hand standard implicit schemes are too complex due to the nonlinear
equations that needs to be solved in each step. Instead, we propose to use the
Peaceman--Rachford splitting scheme combined with temporal and spatial scalings
of the model. Simulations based on this scheme have shown to be efficient,
accurate, and reliable which makes it possible to evaluate the model, e.g.\ its
dependency on biological and physical model parameters. These evaluations show
among other things that the initial axon growth is very fast, that the active
transport is the dominant reason over diffusion for the growth velocity, and
that the polymerization rate in the growth cone does not affect the final axon
length.
|
[
{
"created": "Mon, 1 Feb 2016 09:41:22 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Apr 2016 13:01:33 GMT",
"version": "v2"
}
] |
2016-08-03
|
[
[
"Diehl",
"Stefan",
""
],
[
"Henningsson",
"Erik",
""
],
[
"Heyden",
"Anders",
""
]
] |
This work concerns efficient and reliable numerical simulations of the dynamic behaviour of a moving-boundary model for tubulin-driven axonal growth. The model is nonlinear and consists of a coupled set of a partial differential equation (PDE) and two ordinary differential equations. The PDE is defined on a computational domain with a moving boundary, which is part of the solution. Numerical simulations based on standard explicit time-stepping methods are too time consuming due to the small time steps required for numerical stability. On the other hand standard implicit schemes are too complex due to the nonlinear equations that needs to be solved in each step. Instead, we propose to use the Peaceman--Rachford splitting scheme combined with temporal and spatial scalings of the model. Simulations based on this scheme have shown to be efficient, accurate, and reliable which makes it possible to evaluate the model, e.g.\ its dependency on biological and physical model parameters. These evaluations show among other things that the initial axon growth is very fast, that the active transport is the dominant reason over diffusion for the growth velocity, and that the polymerization rate in the growth cone does not affect the final axon length.
|
1212.1641
|
Pravin Madhavan
|
Don Praveen Amarasinghe, Andrew Aylwin, Pravin Madhavan and Chris
Pettitt
|
Biomembranes report
| null | null | null | null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report, we analyse and simulate a chemotaxis model given by a system
of stochastic reaction-diffusion equations posed on an evolving surface.
|
[
{
"created": "Wed, 5 Dec 2012 17:07:01 GMT",
"version": "v1"
}
] |
2012-12-10
|
[
[
"Amarasinghe",
"Don Praveen",
""
],
[
"Aylwin",
"Andrew",
""
],
[
"Madhavan",
"Pravin",
""
],
[
"Pettitt",
"Chris",
""
]
] |
In this report, we analyse and simulate a chemotaxis model given by a system of stochastic reaction-diffusion equations posed on an evolving surface.
|
1505.05616
|
Marc-Andr\'e Delsuc
|
J.M.P. Vi\'eville and S. Barluenga and N. Winssinger and M-A. Delsuc
|
Duplex formation and secondary structure of {\gamma}-PNA observed by NMR
and CD
|
13 pages, 6 figures, plus 30 pages of Supp. Mat
| null |
10.1016/j.bpc.2015.09.002
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Peptide Nucleic Acids (PNA) are non-natural oligonucleotides mimics.
{\gamma}-PNA backbone are formed by standard nucleic acids nucleobases
connected by a neutral N-(2-aminoethyl)glycine backbone linked by a peptide
bond. In this study, we use Nuclear Magnetic Resonance (NMR) and Circular
Dichroism (CD) to explore the properties of the supramolecular duplexes formed
by these species. We show that standard Watson-Crick base pair as well as
non-standard ones are formed in solution. The duplexes thus formed present
marked melting transition temperatures substantially higher than their nucleic
acid homologs. Moreover, the presence of a chiral group on the
{\gamma}-peptidic backbone increases further more this transition temperature,
leading to very stable duplexes.
|
[
{
"created": "Thu, 21 May 2015 06:48:27 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Oct 2015 21:33:21 GMT",
"version": "v2"
}
] |
2015-10-13
|
[
[
"Viéville",
"J. M. P.",
""
],
[
"Barluenga",
"S.",
""
],
[
"Winssinger",
"N.",
""
],
[
"Delsuc",
"M-A.",
""
]
] |
Peptide Nucleic Acids (PNA) are non-natural oligonucleotides mimics. {\gamma}-PNA backbone are formed by standard nucleic acids nucleobases connected by a neutral N-(2-aminoethyl)glycine backbone linked by a peptide bond. In this study, we use Nuclear Magnetic Resonance (NMR) and Circular Dichroism (CD) to explore the properties of the supramolecular duplexes formed by these species. We show that standard Watson-Crick base pair as well as non-standard ones are formed in solution. The duplexes thus formed present marked melting transition temperatures substantially higher than their nucleic acid homologs. Moreover, the presence of a chiral group on the {\gamma}-peptidic backbone increases further more this transition temperature, leading to very stable duplexes.
|
2206.03454
|
Bruno Travi Dr.
|
Bruno L. Travi
|
Current status of antihistamines repurposing for infectious diseases
|
This review article compiles information on antihistamine drugs that
have shown activity against multiple pathogens, including parasites,
bacteria, fungi, and viruses. submitted
| null | null | null |
q-bio.TO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Objectives. This review gathers information on the potential role of
antihistamines as anti-infective agents and identifies gaps in research that
have impaired its applicability in human health. Methods. The literature search
encompassed MEDLINE, PubMed and Google Scholar from 1990 to 2022. Results. The
literature search identified 12 antihistamines with activity against different
pathogens. Eight molecules were second-generation antihistamines with
intrinsically lower tendency to cross the blood brain barrier thereby with
reduced side effects. Only five antihistamines had in vivo evaluations in
rodents while one study utilized a wax moth model to determine astemizole
anti-Cryptococcus sp. activity combined with fluconazole. In vitro studies
showed that clemastine was active against Plasmodium, Leishmania, and
Trypanosoma, while terfenadine suppressed Candida spp. and Staphylococcus
aureus growth. In vitro assays found that SARS-coV-2 was inhibited by doxepin,
azelastine, desloratadine, and clemastine. Different antihistamines inhibited
Ebola virus (diphenhydramine, chlorcyclizine), Hepatitis C virus
(chlorcyclizine), and Influenza virus (carbinoxamine, chlorpheniramine).
Generally, in vitro activity (IC50) of antihistamines was in the low to
sub-microM range, except for Staphylococcus epidermidis (loratadine MIC=50
microM) and SARS-coV-2 (desloratadine 70% inhibition at 20 microM). Conclusion.
Many antihistamine drugs showed potential to progress to clinical trials based
on in vitro data and availability of toxicological and pharmacological data.
However, the overall lack of systematic preclinical trials has hampered the
advance of repurposed antihistamines for off label evaluation. The low interest
of pharmaceutical companies has to be counterbalanced through collaborations
between research groups, granting agencies and government to support the needed
clinical trials.
|
[
{
"created": "Tue, 7 Jun 2022 17:18:52 GMT",
"version": "v1"
}
] |
2022-06-08
|
[
[
"Travi",
"Bruno L.",
""
]
] |
Objectives. This review gathers information on the potential role of antihistamines as anti-infective agents and identifies gaps in research that have impaired its applicability in human health. Methods. The literature search encompassed MEDLINE, PubMed and Google Scholar from 1990 to 2022. Results. The literature search identified 12 antihistamines with activity against different pathogens. Eight molecules were second-generation antihistamines with intrinsically lower tendency to cross the blood brain barrier thereby with reduced side effects. Only five antihistamines had in vivo evaluations in rodents while one study utilized a wax moth model to determine astemizole anti-Cryptococcus sp. activity combined with fluconazole. In vitro studies showed that clemastine was active against Plasmodium, Leishmania, and Trypanosoma, while terfenadine suppressed Candida spp. and Staphylococcus aureus growth. In vitro assays found that SARS-coV-2 was inhibited by doxepin, azelastine, desloratadine, and clemastine. Different antihistamines inhibited Ebola virus (diphenhydramine, chlorcyclizine), Hepatitis C virus (chlorcyclizine), and Influenza virus (carbinoxamine, chlorpheniramine). Generally, in vitro activity (IC50) of antihistamines was in the low to sub-microM range, except for Staphylococcus epidermidis (loratadine MIC=50 microM) and SARS-coV-2 (desloratadine 70% inhibition at 20 microM). Conclusion. Many antihistamine drugs showed potential to progress to clinical trials based on in vitro data and availability of toxicological and pharmacological data. However, the overall lack of systematic preclinical trials has hampered the advance of repurposed antihistamines for off label evaluation. The low interest of pharmaceutical companies has to be counterbalanced through collaborations between research groups, granting agencies and government to support the needed clinical trials.
|
2106.15428
|
Richard Betzel
|
Farnaz Zamani Esfahlani, Youngheun Jo, Maria Grazia Puxeddu, Haily
Merritt, Jacob C. Tanner, Sarah Greenwell, Riya Patel, Joshua Faskowitz,
Richard F. Betzel
|
Modularity maximization as a flexible and generic framework for brain
network exploratory analysis
|
18 pages, 4 figures
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
The modular structure of brain networks supports specialized information
processing, complex dynamics, and cost-efficient spatial embedding.
Inter-individual variation in modular structure has been linked to differences
in performance, disease, and development. There exist many data-driven methods
for detecting and comparing modular structure, the most popular of which is
modularity maximization. Although modularity maximization is a general
framework that can be modified and reparamaterized to address domain-specific
research questions, its application to neuroscientific datasets has, thus far,
been narrow. Here, we highlight several strategies in which the
``out-of-the-box'' version of modularity maximization can be extended to
address questions specific to neuroscience. First, we present approaches for
detecting ``space-independent'' modules and for applying modularity
maximization to signed matrices. Next, we show that the modularity maximization
frame is well-suited for detecting task- and condition-specific modules.
Finally, we highlight the role of multi-layer models in detecting and tracking
modules across time, tasks, subjects, and modalities. In summary, modularity
maximization is a flexible and general framework that can be adapted to detect
modular structure resulting from a wide range of hypotheses. This article
highlights opens multiple frontiers for future research and applications.
|
[
{
"created": "Tue, 29 Jun 2021 13:56:21 GMT",
"version": "v1"
}
] |
2021-06-30
|
[
[
"Esfahlani",
"Farnaz Zamani",
""
],
[
"Jo",
"Youngheun",
""
],
[
"Puxeddu",
"Maria Grazia",
""
],
[
"Merritt",
"Haily",
""
],
[
"Tanner",
"Jacob C.",
""
],
[
"Greenwell",
"Sarah",
""
],
[
"Patel",
"Riya",
""
],
[
"Faskowitz",
"Joshua",
""
],
[
"Betzel",
"Richard F.",
""
]
] |
The modular structure of brain networks supports specialized information processing, complex dynamics, and cost-efficient spatial embedding. Inter-individual variation in modular structure has been linked to differences in performance, disease, and development. There exist many data-driven methods for detecting and comparing modular structure, the most popular of which is modularity maximization. Although modularity maximization is a general framework that can be modified and reparamaterized to address domain-specific research questions, its application to neuroscientific datasets has, thus far, been narrow. Here, we highlight several strategies in which the ``out-of-the-box'' version of modularity maximization can be extended to address questions specific to neuroscience. First, we present approaches for detecting ``space-independent'' modules and for applying modularity maximization to signed matrices. Next, we show that the modularity maximization frame is well-suited for detecting task- and condition-specific modules. Finally, we highlight the role of multi-layer models in detecting and tracking modules across time, tasks, subjects, and modalities. In summary, modularity maximization is a flexible and general framework that can be adapted to detect modular structure resulting from a wide range of hypotheses. This article highlights opens multiple frontiers for future research and applications.
|
2308.14495
|
Thomas Michelitsch
|
Teo Granger, Thomas Michelitsch, Bernard Collet, Michael Bestehorn,
Alejandro Riascos
|
Compartment model with retarded transition rates
|
Conference proceedings (Conference: From the nonlinear dynamical
systems theory to observational chaos, October 9-11, 2023 - Toulouse), 6
Pages, 3 Figures
| null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our study is devoted to a four-compartment epidemic model of a constant
population of independent random walkers. Each walker is in one of four
compartments (S-susceptible, C-infected but not infectious (period of
incubation), I-infected and infectious, R-recovered and immune) characterizing
the states of health. The walkers navigate independently on a periodic 2D
lattice. Infections occur by collisions of susceptible and infectious walkers.
Once infected, a walker undergoes the delayed cyclic transition pathway S $\to$
C $\to$ I $\to$ R $\to$ S. The random delay times between the transitions
(sojourn times in the compartments) are drawn from independent probability
density functions (PDFs). We analyze the existence of the endemic equilibrium
and stability of the globally healthy state and derive a condition for the
spread of the epidemics which we connect with the basic reproduction number
$R_0>1$. We give quantitative numerical evidence that a simple approach based
on random walkers offers an appropriate microscopic picture of the dynamics for
this class of epidemics.
|
[
{
"created": "Mon, 28 Aug 2023 11:13:27 GMT",
"version": "v1"
}
] |
2023-08-29
|
[
[
"Granger",
"Teo",
""
],
[
"Michelitsch",
"Thomas",
""
],
[
"Collet",
"Bernard",
""
],
[
"Bestehorn",
"Michael",
""
],
[
"Riascos",
"Alejandro",
""
]
] |
Our study is devoted to a four-compartment epidemic model of a constant population of independent random walkers. Each walker is in one of four compartments (S-susceptible, C-infected but not infectious (period of incubation), I-infected and infectious, R-recovered and immune) characterizing the states of health. The walkers navigate independently on a periodic 2D lattice. Infections occur by collisions of susceptible and infectious walkers. Once infected, a walker undergoes the delayed cyclic transition pathway S $\to$ C $\to$ I $\to$ R $\to$ S. The random delay times between the transitions (sojourn times in the compartments) are drawn from independent probability density functions (PDFs). We analyze the existence of the endemic equilibrium and stability of the globally healthy state and derive a condition for the spread of the epidemics which we connect with the basic reproduction number $R_0>1$. We give quantitative numerical evidence that a simple approach based on random walkers offers an appropriate microscopic picture of the dynamics for this class of epidemics.
|
q-bio/0505027
|
Richard Kerner
|
Richard Kerner
|
Combinatorial rules of icosahedral capsid growth
|
New version with figures included
| null | null | null |
q-bio.QM
| null |
A model of growth of icosahedral viral capsids is proposed. It takes into
account the diversity of hexamers' compositions, leading to definite capsid
size. We show that the observed yield of capsid production implies a very high
level of self-organization of elementary building blocks. The exact number of
different protein dimers composing hexamers is related to the size of a given
capsid, labeled by its T-number. Simple rules determining these numbers for
each value of T are deduced and certain consequences are discussed.
|
[
{
"created": "Sun, 15 May 2005 06:04:04 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jun 2005 07:07:28 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Kerner",
"Richard",
""
]
] |
A model of growth of icosahedral viral capsids is proposed. It takes into account the diversity of hexamers' compositions, leading to definite capsid size. We show that the observed yield of capsid production implies a very high level of self-organization of elementary building blocks. The exact number of different protein dimers composing hexamers is related to the size of a given capsid, labeled by its T-number. Simple rules determining these numbers for each value of T are deduced and certain consequences are discussed.
|
2007.01363
|
Armita Nourmohammad
|
Oskar H Schnaack, Armita Nourmohammad
|
Optimal evolutionary decision-making to store immune memory
| null | null | null | null |
q-bio.PE physics.bio-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adaptive immune system provides a diverse set of molecules that can mount
specific responses against a multitude of pathogens. Memory is a key feature of
adaptive immunity, which allows organisms to respond more readily upon
re-infections. However, differentiation of memory cells is still one of the
least understood cell fate decisions. Here, we introduce a mathematical
framework to characterize optimal strategies to store memory to maximize the
utility of immune response over an organism's lifetime. We show that memory
production should be actively regulated to balance between affinity and
cross-reactivity of immune receptors for an effective protection against
evolving pathogens. Moreover, we predict that specificity of memory should
depend on the organism's lifespan, and shorter-lived organisms with fewer
pathogenic encounters should store more cross-reactive memory. Our framework
provides a baseline to gauge the efficacy of immune memory in light of an
organism's coevolutionary history with pathogens.
|
[
{
"created": "Thu, 2 Jul 2020 20:03:11 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 23:14:52 GMT",
"version": "v2"
}
] |
2021-04-14
|
[
[
"Schnaack",
"Oskar H",
""
],
[
"Nourmohammad",
"Armita",
""
]
] |
The adaptive immune system provides a diverse set of molecules that can mount specific responses against a multitude of pathogens. Memory is a key feature of adaptive immunity, which allows organisms to respond more readily upon re-infections. However, differentiation of memory cells is still one of the least understood cell fate decisions. Here, we introduce a mathematical framework to characterize optimal strategies to store memory to maximize the utility of immune response over an organism's lifetime. We show that memory production should be actively regulated to balance between affinity and cross-reactivity of immune receptors for an effective protection against evolving pathogens. Moreover, we predict that specificity of memory should depend on the organism's lifespan, and shorter-lived organisms with fewer pathogenic encounters should store more cross-reactive memory. Our framework provides a baseline to gauge the efficacy of immune memory in light of an organism's coevolutionary history with pathogens.
|
2205.13088
|
Qi Li
|
Qi Li, Khalique Newaz, and Tijana Milenkovi\'c
|
Towards future directions in data-integrative supervised prediction of
human aging-related genes
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identification of human genes involved in the aging process is critical due
to the incidence of many diseases with age. A state-of-the-art approach for
this purpose infers a weighted dynamic aging-specific subnetwork by mapping
gene expression (GE) levels at different ages onto the protein-protein
interaction network (PPIN). Then, it analyzes this subnetwork in a supervised
manner by training a predictive model to learn how network topologies of known
aging- vs. non-aging-related genes change across ages. Finally, it uses the
trained model to predict novel aging-related genes. However, the best current
subnetwork resulting from this approach still yields suboptimal prediction
accuracy. This could be because it was inferred using outdated GE and PPIN
data. Here, we evaluate whether analyzing a weighted dynamic aging-specific
subnetwork inferred from newer GE and PPIN data improves prediction accuracy
upon analyzing the best current subnetwork inferred from outdated data.
Unexpectedly, we find that not to be the case. To understand this, we perform
aging-related pathway and Gene Ontology (GO) term enrichment analyses. We find
that the suboptimal prediction accuracy, regardless of which GE or PPIN data is
used, may be caused by the current knowledge about which genes are
aging-related being incomplete, or by the current methods for inferring or
analyzing an aging-specific subnetwork being unable to capture all of the
aging-related knowledge. These findings can potentially guide future directions
towards improving supervised prediction of aging-related genes via -omics data
integration.
|
[
{
"created": "Thu, 26 May 2022 00:07:06 GMT",
"version": "v1"
}
] |
2022-05-27
|
[
[
"Li",
"Qi",
""
],
[
"Newaz",
"Khalique",
""
],
[
"Milenković",
"Tijana",
""
]
] |
Identification of human genes involved in the aging process is critical due to the incidence of many diseases with age. A state-of-the-art approach for this purpose infers a weighted dynamic aging-specific subnetwork by mapping gene expression (GE) levels at different ages onto the protein-protein interaction network (PPIN). Then, it analyzes this subnetwork in a supervised manner by training a predictive model to learn how network topologies of known aging- vs. non-aging-related genes change across ages. Finally, it uses the trained model to predict novel aging-related genes. However, the best current subnetwork resulting from this approach still yields suboptimal prediction accuracy. This could be because it was inferred using outdated GE and PPIN data. Here, we evaluate whether analyzing a weighted dynamic aging-specific subnetwork inferred from newer GE and PPIN data improves prediction accuracy upon analyzing the best current subnetwork inferred from outdated data. Unexpectedly, we find that not to be the case. To understand this, we perform aging-related pathway and Gene Ontology (GO) term enrichment analyses. We find that the suboptimal prediction accuracy, regardless of which GE or PPIN data is used, may be caused by the current knowledge about which genes are aging-related being incomplete, or by the current methods for inferring or analyzing an aging-specific subnetwork being unable to capture all of the aging-related knowledge. These findings can potentially guide future directions towards improving supervised prediction of aging-related genes via -omics data integration.
|
1308.1388
|
Mark Howison
|
Mark Howison, Felipe Zapata, Erika J. Edwards, Casey W. Dunn
|
Bayesian genome assembly and assessment by Markov Chain Monte Carlo
sampling
|
17 pages, 5 figures
|
PLoS ONE 9(6): e99497 (2014)
|
10.1371/journal.pone.0099497
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most genome assemblers construct point estimates, choosing a genome sequence
from among many alternative hypotheses that are supported by the data. We
present a Markov Chain Monte Carlo approach to sequence assembly that instead
generates distributions of assembly hypotheses with posterior probabilities,
providing an explicit statistical framework for evaluating alternative
hypotheses and assessing assembly uncertainty. We implement this approach in a
prototype assembler and illustrate its application to the bacteriophage
PhiX174.
|
[
{
"created": "Tue, 6 Aug 2013 19:40:08 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Oct 2013 14:57:40 GMT",
"version": "v2"
}
] |
2014-06-30
|
[
[
"Howison",
"Mark",
""
],
[
"Zapata",
"Felipe",
""
],
[
"Edwards",
"Erika J.",
""
],
[
"Dunn",
"Casey W.",
""
]
] |
Most genome assemblers construct point estimates, choosing a genome sequence from among many alternative hypotheses that are supported by the data. We present a Markov Chain Monte Carlo approach to sequence assembly that instead generates distributions of assembly hypotheses with posterior probabilities, providing an explicit statistical framework for evaluating alternative hypotheses and assessing assembly uncertainty. We implement this approach in a prototype assembler and illustrate its application to the bacteriophage PhiX174.
|
2403.14801
|
Junyoung Kim
|
Junyoung Kim (1), Jingye Yang (2 and 4), Kai Wang (2 and 3), Chunhua
Weng (1) and Cong Liu (1) ((1) Department of Biomedical Informatics, Columbia
University, New York, NY, USA, (2) Raymond G. Perelman Center for Cellular
and Molecular Therapeutics, Children's Hospital of Philadelphia,
Philadelphia, USA, (3) Department of Pathology and Laboratory Medicine,
University of Pennsylvania, Philadelphia, USA, (4) Department of Mathematics,
University of Pennsylvania, Philadelphia, USA)
|
Assessing the Utility of Large Language Models for Phenotype-Driven Gene
Prioritization in Rare Genetic Disorder Diagnosis
|
56 pages, 6 figures, 6 tables, 2 supplementary tables
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phenotype-driven gene prioritization is a critical process in the diagnosis
of rare genetic disorders for identifying and ranking potential disease-causing
genes based on observed physical traits or phenotypes. While traditional
approaches rely on curated knowledge graphs with phenotype-gene relations,
recent advancements in large language models have opened doors to the potential
of AI predictions through extensive training on diverse corpora and complex
models. This study conducted a comprehensive evaluation of five large language
models, including two Generative Pre-trained Transformers series, and three
Llama2 series, assessing their performance across three key metrics: task
completeness, gene prediction accuracy, and adherence to required output
structures. Various experiments explored combinations of models, prompts, input
types, and task difficulty levels. Our findings reveal that even the
best-performing LLM, GPT-4, achieved an accuracy of 16.0%, which still lags
behind traditional bioinformatics tools. Prediction accuracy increased with the
parameter/model size. A similar increasing trend was observed for the task
completion rate, with complicated prompts more likely to increase task
completeness in models smaller than GPT-4. However, complicated prompts are
more likely to decrease the structure compliance rate, but no prompt effects on
GPT-4. Compared to HPO term-based input, LLM was also able to achieve better
than random prediction accuracy by taking free-text input, but slightly lower
than with the HPO input. Bias analysis showed that certain genes, such as
MECP2, CDKL5, and SCN1A, are more likely to be top-ranked, potentially
explaining the variances observed across different datasets. This study
provides valuable insights into the integration of LLMs within genomic
analysis, contributing to the ongoing discussion on the utilization of advanced
LLMs in clinical workflows.
|
[
{
"created": "Thu, 21 Mar 2024 19:29:44 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 20:55:34 GMT",
"version": "v2"
}
] |
2024-04-04
|
[
[
"Kim",
"Junyoung",
"",
"2 and 4"
],
[
"Yang",
"Jingye",
"",
"2 and 4"
],
[
"Wang",
"Kai",
"",
"2 and 3"
],
[
"Weng",
"Chunhua",
""
],
[
"Liu",
"Cong",
""
]
] |
Phenotype-driven gene prioritization is a critical process in the diagnosis of rare genetic disorders for identifying and ranking potential disease-causing genes based on observed physical traits or phenotypes. While traditional approaches rely on curated knowledge graphs with phenotype-gene relations, recent advancements in large language models have opened doors to the potential of AI predictions through extensive training on diverse corpora and complex models. This study conducted a comprehensive evaluation of five large language models, including two Generative Pre-trained Transformers series, and three Llama2 series, assessing their performance across three key metrics: task completeness, gene prediction accuracy, and adherence to required output structures. Various experiments explored combinations of models, prompts, input types, and task difficulty levels. Our findings reveal that even the best-performing LLM, GPT-4, achieved an accuracy of 16.0%, which still lags behind traditional bioinformatics tools. Prediction accuracy increased with the parameter/model size. A similar increasing trend was observed for the task completion rate, with complicated prompts more likely to increase task completeness in models smaller than GPT-4. However, complicated prompts are more likely to decrease the structure compliance rate, but no prompt effects on GPT-4. Compared to HPO term-based input, LLM was also able to achieve better than random prediction accuracy by taking free-text input, but slightly lower than with the HPO input. Bias analysis showed that certain genes, such as MECP2, CDKL5, and SCN1A, are more likely to be top-ranked, potentially explaining the variances observed across different datasets. This study provides valuable insights into the integration of LLMs within genomic analysis, contributing to the ongoing discussion on the utilization of advanced LLMs in clinical workflows.
|
1707.08381
|
Junqiu Wu
|
Ke Liu (1), Xiangyan Sun (3), Jun Ma (3), Zhenyu Zhou (3), Qilin Dong
(4), Shengwen Peng (3), Junqiu Wu (3), Suocheng Tan (3), G\"unter Blobel (2),
and Jie Fan (1) ((1) Accutar Biotechnology, (2) Laboratory of Cell Biology,
Howard Hughes Medical Institute, The Rockefeller University (3) Accutar
Biotechnology (Shanghai), (4) Fudan University)
|
Prediction of amino acid side chain conformation using a deep neural
network
| null | null | null | null |
q-bio.BM cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A deep neural network based architecture was constructed to predict amino
acid side chain conformation with unprecedented accuracy. Amino acid side chain
conformation prediction is essential for protein homology modeling and protein
design. Current widely-adopted methods use physics-based energy functions to
evaluate side chain conformation. Here, using a deep neural network
architecture without physics-based assumptions, we have demonstrated that side
chain conformation prediction accuracy can be improved by more than 25%,
especially for aromatic residues compared with current standard methods. More
strikingly, the prediction method presented here is robust enough to identify
individual conformational outliers from high resolution structures in a protein
data bank without providing its structural factors. We envisage that our amino
acid side chain predictor could be used as a quality check step for future
protein structure model validation and many other potential applications such
as side chain assignment in Cryo-electron microscopy, crystallography model
auto-building, protein folding and small molecule ligand docking.
|
[
{
"created": "Wed, 26 Jul 2017 11:22:57 GMT",
"version": "v1"
}
] |
2017-07-27
|
[
[
"Liu",
"Ke",
""
],
[
"Sun",
"Xiangyan",
""
],
[
"Ma",
"Jun",
""
],
[
"Zhou",
"Zhenyu",
""
],
[
"Dong",
"Qilin",
""
],
[
"Peng",
"Shengwen",
""
],
[
"Wu",
"Junqiu",
""
],
[
"Tan",
"Suocheng",
""
],
[
"Blobel",
"Günter",
""
],
[
"Fan",
"Jie",
""
]
] |
A deep neural network based architecture was constructed to predict amino acid side chain conformation with unprecedented accuracy. Amino acid side chain conformation prediction is essential for protein homology modeling and protein design. Current widely-adopted methods use physics-based energy functions to evaluate side chain conformation. Here, using a deep neural network architecture without physics-based assumptions, we have demonstrated that side chain conformation prediction accuracy can be improved by more than 25%, especially for aromatic residues compared with current standard methods. More strikingly, the prediction method presented here is robust enough to identify individual conformational outliers from high resolution structures in a protein data bank without providing its structural factors. We envisage that our amino acid side chain predictor could be used as a quality check step for future protein structure model validation and many other potential applications such as side chain assignment in Cryo-electron microscopy, crystallography model auto-building, protein folding and small molecule ligand docking.
|
2209.04016
|
Hue Sun Chan
|
Jonas Wess\'en, Suman Das, Tanmoy Pal, Hue Sun Chan
|
Analytical Formulation and Field-Theoretic Simulation of
Sequence-Specific Phase Separation of Proteinlike Heteropolymers with Short-
and Long-Spatial-Range Interactions
|
54 pages, 13 figures, 168 references, with typographical errors in
previous versions corrected and clarifications added. Accepted for
publication in the Journal of Physical Chemistry B
|
The Journal of Physical Chemistry B 126, 9222-9245 (2022)
|
10.1021/acs.jpcb.2c06181
| null |
q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
A theory for sequence dependent liquid-liquid phase separation (LLPS) of
intrinsically disordered proteins (IDPs) in the study of biomolecular
condensates is formulated by extending the random phase approximation (RPA) and
field-theoretic simulation (FTS) of heteropolymers with spatially long-range
Coulomb interactions to include the fundamental effects of short-range,
hydrophobic-like interactions between amino acid residues. To this end,
short-range effects are modeled by Yukawa interactions between multiple
nonelectrostatic charges derived from an eigenvalue decomposition of pairwise
residue-residue contact energies. Chain excluded volume is afforded by
incompressibility constraints. A mean-field approximation leads to an effective
Flory $\chi$ parameter, which, in conjunction with RPA, accounts for the
contact-interaction effects of amino acid composition and the sequence-pattern
effects of long-range electrostatics in IDP LLPS, whereas FTS based on the
formulation provides full sequence dependence for both short- and long-range
interactions. This general approach is illustrated here by applications to
variants of a natural IDP in the context of several different amino-acid
interaction schemes as well as a set of different model hydrophobic-polar
sequences sharing the same composition. Effectiveness of the methodology is
verified by coarse-grained explicit-chain molecular dynamics simulations.
|
[
{
"created": "Thu, 8 Sep 2022 19:56:35 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Oct 2022 21:55:44 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Nov 2022 23:44:41 GMT",
"version": "v3"
}
] |
2022-11-28
|
[
[
"Wessén",
"Jonas",
""
],
[
"Das",
"Suman",
""
],
[
"Pal",
"Tanmoy",
""
],
[
"Chan",
"Hue Sun",
""
]
] |
A theory for sequence dependent liquid-liquid phase separation (LLPS) of intrinsically disordered proteins (IDPs) in the study of biomolecular condensates is formulated by extending the random phase approximation (RPA) and field-theoretic simulation (FTS) of heteropolymers with spatially long-range Coulomb interactions to include the fundamental effects of short-range, hydrophobic-like interactions between amino acid residues. To this end, short-range effects are modeled by Yukawa interactions between multiple nonelectrostatic charges derived from an eigenvalue decomposition of pairwise residue-residue contact energies. Chain excluded volume is afforded by incompressibility constraints. A mean-field approximation leads to an effective Flory $\chi$ parameter, which, in conjunction with RPA, accounts for the contact-interaction effects of amino acid composition and the sequence-pattern effects of long-range electrostatics in IDP LLPS, whereas FTS based on the formulation provides full sequence dependence for both short- and long-range interactions. This general approach is illustrated here by applications to variants of a natural IDP in the context of several different amino-acid interaction schemes as well as a set of different model hydrophobic-polar sequences sharing the same composition. Effectiveness of the methodology is verified by coarse-grained explicit-chain molecular dynamics simulations.
|
2402.05886
|
Esteban Paduro
|
Eduardo Cerpa, Nathaly Corrales, Mat\'ias Courdurier, Leonel E.
Medina, Esteban Paduro
|
The impact of high frequency-based stability on the onset of action
potentials in neuron models
|
32 pages, 3 figures
| null | null | null |
q-bio.NC math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the phenomenon of conduction block in model neurons using
high-frequency biphasic stimulation (HFBS). The focus is investigating the
triggering of undesired onset action potentials when the HFBS is turned on. The
approach analyzes the transient behavior of an averaged system corresponding to
the FitzHugh-Nagumo neuron model using Lyapunov and quasi-static methods. The
first result provides a more comprehensive understanding of the onset
activation through a mathematical proof of how to avoid it using a ramp in the
amplitude of the oscillatory source. The second result tests the response of
the blocked system to a piecewise linear stimulus, providing a quantitative
description of how the HFBS strength translates into conduction block
robustness. The results of this work can provide insights for the design of
electrical neurostimulation therapies.
|
[
{
"created": "Thu, 8 Feb 2024 18:23:18 GMT",
"version": "v1"
}
] |
2024-02-09
|
[
[
"Cerpa",
"Eduardo",
""
],
[
"Corrales",
"Nathaly",
""
],
[
"Courdurier",
"Matías",
""
],
[
"Medina",
"Leonel E.",
""
],
[
"Paduro",
"Esteban",
""
]
] |
This paper studies the phenomenon of conduction block in model neurons using high-frequency biphasic stimulation (HFBS). The focus is investigating the triggering of undesired onset action potentials when the HFBS is turned on. The approach analyzes the transient behavior of an averaged system corresponding to the FitzHugh-Nagumo neuron model using Lyapunov and quasi-static methods. The first result provides a more comprehensive understanding of the onset activation through a mathematical proof of how to avoid it using a ramp in the amplitude of the oscillatory source. The second result tests the response of the blocked system to a piecewise linear stimulus, providing a quantitative description of how the HFBS strength translates into conduction block robustness. The results of this work can provide insights for the design of electrical neurostimulation therapies.
|
1402.1445
|
Eleonora Alfinito Dr.
|
E. Alfinito and L. Reggiani
|
Opsin vs opsin: new materials for biotechnological applications
|
10 pages, 8 figures revised version with more figures
|
J. Appl. Phys. 116, 064901 (2014)
|
10.1063/1.4892445
| null |
q-bio.BM cond-mat.soft physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The need of new diagnostic methods satisfying, as an early detection, a low
invasive procedure and a cost-efficient value, is orienting the technological
research toward the use of bio-integrated devices, in particular bio-sensors.
The set of know-why necessary to achieve this goal is wide, from biochemistry
to electronics and is summarized in an emerging branch of electronics, called
\textit{proteotronics}. Proteotronics is here here applied to state a
comparative analysis of the electrical responses coming from type-1 and type-2
opsins. In particular, the procedure is used as an early investigation of a
recently discovered family of opsins, the proteorhodopsins activated by blue
light, BPRs. The results reveal some interesting and unexpected similarities
between proteins of the two families, suggesting the global electrical response
are not strictly linked to the class identity.
|
[
{
"created": "Mon, 3 Feb 2014 16:04:28 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Jul 2014 14:52:25 GMT",
"version": "v2"
},
{
"created": "Sat, 9 Aug 2014 08:39:08 GMT",
"version": "v3"
}
] |
2015-06-18
|
[
[
"Alfinito",
"E.",
""
],
[
"Reggiani",
"L.",
""
]
] |
The need of new diagnostic methods satisfying, as an early detection, a low invasive procedure and a cost-efficient value, is orienting the technological research toward the use of bio-integrated devices, in particular bio-sensors. The set of know-why necessary to achieve this goal is wide, from biochemistry to electronics and is summarized in an emerging branch of electronics, called \textit{proteotronics}. Proteotronics is here here applied to state a comparative analysis of the electrical responses coming from type-1 and type-2 opsins. In particular, the procedure is used as an early investigation of a recently discovered family of opsins, the proteorhodopsins activated by blue light, BPRs. The results reveal some interesting and unexpected similarities between proteins of the two families, suggesting the global electrical response are not strictly linked to the class identity.
|
1302.5866
|
Sungwoo Ahn
|
Sungwoo Ahn and Leonid L. Rubchinsky
|
Short desynchronization episodes prevail in synchronous dynamics of
human brain rhythms
|
7 pages, 7 figures. The paper will appear in Chaos
|
Chaos 23(1):013138, 2013
|
10.1063/1.4794793
| null |
q-bio.NC nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural synchronization is believed to be critical for many brain functions.
It frequently exhibits temporal variability, but it is not known if this
variability has a specific temporal patterning. This study explores these
synchronization/desynchronization patterns. We employ recently developed
techniques to analyze the fine temporal structure of phase-locking to study the
temporal patterning of synchrony of the human brain rhythms. We study neural
oscillations recorded by EEG in $\alpha$ and $\beta$ frequency bands in healthy
human subjects at rest and during the execution of a task. While the
phase-locking strength depends on many factors, dynamics of synchrony has a
very specific temporal pattern: synchronous states are interrupted by frequent,
but short desynchronization episodes. The probability for a desynchronization
episode to occur decreased with its duration. The transition matrix between
synchronized and desynchronized states has eigenvalues close to 0 and 1 where
eigenvalue 1 has multiplicity 1, and therefore if the stationary distribution
between these states is perturbed, the system converges back to the stationary
distribution very fast. The qualitative similarity of this patterning across
different subjects, brain states and electrode locations suggests that this may
be a general type of dynamics for the brain. Earlier studies indicate that not
all oscillatory networks have this kind of patterning of
synchronization/desynchronization dynamics. Thus the observed prevalence of
short (but potentially frequent) desynchronization events (length of one cycle
of oscillations) may have important functional implications for the brain.
Numerous short desynchronizations (as opposed to infrequent, but long
desynchronizations) may allow for a quick and efficient formation and break-up
of functionally significant neuronal assemblies.
|
[
{
"created": "Sun, 24 Feb 2013 03:49:44 GMT",
"version": "v1"
}
] |
2013-03-11
|
[
[
"Ahn",
"Sungwoo",
""
],
[
"Rubchinsky",
"Leonid L.",
""
]
] |
Neural synchronization is believed to be critical for many brain functions. It frequently exhibits temporal variability, but it is not known if this variability has a specific temporal patterning. This study explores these synchronization/desynchronization patterns. We employ recently developed techniques to analyze the fine temporal structure of phase-locking to study the temporal patterning of synchrony of the human brain rhythms. We study neural oscillations recorded by EEG in $\alpha$ and $\beta$ frequency bands in healthy human subjects at rest and during the execution of a task. While the phase-locking strength depends on many factors, dynamics of synchrony has a very specific temporal pattern: synchronous states are interrupted by frequent, but short desynchronization episodes. The probability for a desynchronization episode to occur decreased with its duration. The transition matrix between synchronized and desynchronized states has eigenvalues close to 0 and 1 where eigenvalue 1 has multiplicity 1, and therefore if the stationary distribution between these states is perturbed, the system converges back to the stationary distribution very fast. The qualitative similarity of this patterning across different subjects, brain states and electrode locations suggests that this may be a general type of dynamics for the brain. Earlier studies indicate that not all oscillatory networks have this kind of patterning of synchronization/desynchronization dynamics. Thus the observed prevalence of short (but potentially frequent) desynchronization events (length of one cycle of oscillations) may have important functional implications for the brain. Numerous short desynchronizations (as opposed to infrequent, but long desynchronizations) may allow for a quick and efficient formation and break-up of functionally significant neuronal assemblies.
|
1801.10189
|
Jason Swedlow
|
Jan Ellenberg, Jason R Swedlow, Mary Barlow, Charles E Cook, Ardan
Patwardhan, Alvis Brazma, and Ewan Birney
|
Public archives for biological image data
|
13 pages, 1 figure
| null |
10.1038/s41592-018-0195-8
| null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Public data archives are the backbone of modern biological and biomedical
research. While archives for biological molecules and structures are
well-established, resources for imaging data do not yet cover the full range of
spatial and temporal scales or application domains used by the scientific
community. In the last few years, the technical barriers to building such
resources have been solved and the first examples of scientific outputs from
public image data resources, often through linkage to existing molecular
resources, have been published. Using the successes of existing biomolecular
resources as a guide, we present the rationale and principles for the
construction of image data archives and databases that will be the foundation
of the next revolution in biological and biomedical informatics and discovery.
|
[
{
"created": "Tue, 30 Jan 2018 19:44:57 GMT",
"version": "v1"
}
] |
2018-11-05
|
[
[
"Ellenberg",
"Jan",
""
],
[
"Swedlow",
"Jason R",
""
],
[
"Barlow",
"Mary",
""
],
[
"Cook",
"Charles E",
""
],
[
"Patwardhan",
"Ardan",
""
],
[
"Brazma",
"Alvis",
""
],
[
"Birney",
"Ewan",
""
]
] |
Public data archives are the backbone of modern biological and biomedical research. While archives for biological molecules and structures are well-established, resources for imaging data do not yet cover the full range of spatial and temporal scales or application domains used by the scientific community. In the last few years, the technical barriers to building such resources have been solved and the first examples of scientific outputs from public image data resources, often through linkage to existing molecular resources, have been published. Using the successes of existing biomolecular resources as a guide, we present the rationale and principles for the construction of image data archives and databases that will be the foundation of the next revolution in biological and biomedical informatics and discovery.
|
1812.08357
|
Sebastian Roch
|
Sebastien Roch
|
On the variance of internode distance under the multispecies coalescent
| null |
RECOMB-CG 2018
| null | null |
q-bio.PE cs.CE math.PR math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of estimating species trees from unrooted gene tree
topologies in the presence of incomplete lineage sorting, a common phenomenon
that creates gene tree heterogeneity in multilocus datasets. One popular class
of reconstruction methods in this setting is based on internode distances, i.e.
the average graph distance between pairs of species across gene trees. While
statistical consistency in the limit of large numbers of loci has been
established in some cases, little is known about the sample complexity of such
methods. Here we make progress on this question by deriving a lower bound on
the worst-case variance of internode distance which depends linearly on the
corresponding graph distance in the species tree. We also discuss some
algorithmic implications.
|
[
{
"created": "Thu, 20 Dec 2018 04:48:19 GMT",
"version": "v1"
}
] |
2018-12-21
|
[
[
"Roch",
"Sebastien",
""
]
] |
We consider the problem of estimating species trees from unrooted gene tree topologies in the presence of incomplete lineage sorting, a common phenomenon that creates gene tree heterogeneity in multilocus datasets. One popular class of reconstruction methods in this setting is based on internode distances, i.e. the average graph distance between pairs of species across gene trees. While statistical consistency in the limit of large numbers of loci has been established in some cases, little is known about the sample complexity of such methods. Here we make progress on this question by deriving a lower bound on the worst-case variance of internode distance which depends linearly on the corresponding graph distance in the species tree. We also discuss some algorithmic implications.
|
2311.03821
|
Veronica Centorrino
|
Veronica Centorrino, Anand Gokhale, Alexander Davydov, Giovanni Russo,
Francesco Bullo
|
Positive Competitive Networks for Sparse Reconstruction
|
26 pages, 9 Figure, 1 Table
| null | null | null |
q-bio.NC cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose and analyze a continuous-time firing-rate neural network, the
positive firing-rate competitive network (\pfcn), to tackle sparse
reconstruction problems with non-negativity constraints. These problems, which
involve approximating a given input stimulus from a dictionary using a set of
sparse (active) neurons, play a key role in a wide range of domains, including
for example neuroscience, signal processing, and machine learning. First, by
leveraging the theory of proximal operators, we relate the equilibria of a
family of continuous-time firing-rate neural networks to the optimal solutions
of sparse reconstruction problems. Then, we prove that the \pfcn is a positive
system and give rigorous conditions for the convergence to the equilibrium.
Specifically, we show that the convergence: (i) only depends on a property of
the dictionary; (ii) is linear-exponential, in the sense that initially the
convergence rate is at worst linear and then, after a transient, it becomes
exponential. We also prove a number of technical results to assess the
contractivity properties of the neural dynamics of interest. Our analysis
leverages contraction theory to characterize the behavior of a family of
firing-rate competitive networks for sparse reconstruction with and without
non-negativity constraints. Finally, we validate the effectiveness of our
approach via a numerical example.
|
[
{
"created": "Tue, 7 Nov 2023 09:12:39 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2023 16:00:46 GMT",
"version": "v2"
},
{
"created": "Fri, 22 Mar 2024 12:39:11 GMT",
"version": "v3"
}
] |
2024-03-25
|
[
[
"Centorrino",
"Veronica",
""
],
[
"Gokhale",
"Anand",
""
],
[
"Davydov",
"Alexander",
""
],
[
"Russo",
"Giovanni",
""
],
[
"Bullo",
"Francesco",
""
]
] |
We propose and analyze a continuous-time firing-rate neural network, the positive firing-rate competitive network (\pfcn), to tackle sparse reconstruction problems with non-negativity constraints. These problems, which involve approximating a given input stimulus from a dictionary using a set of sparse (active) neurons, play a key role in a wide range of domains, including for example neuroscience, signal processing, and machine learning. First, by leveraging the theory of proximal operators, we relate the equilibria of a family of continuous-time firing-rate neural networks to the optimal solutions of sparse reconstruction problems. Then, we prove that the \pfcn is a positive system and give rigorous conditions for the convergence to the equilibrium. Specifically, we show that the convergence: (i) only depends on a property of the dictionary; (ii) is linear-exponential, in the sense that initially the convergence rate is at worst linear and then, after a transient, it becomes exponential. We also prove a number of technical results to assess the contractivity properties of the neural dynamics of interest. Our analysis leverages contraction theory to characterize the behavior of a family of firing-rate competitive networks for sparse reconstruction with and without non-negativity constraints. Finally, we validate the effectiveness of our approach via a numerical example.
|
1304.8077
|
Claus Kadelka
|
Claus Kadelka, David Murrugarra, Reinhard Laubenbacher
|
Stabilizing Gene Regulatory Networks Through Feedforward Loops
|
19 pages, 3 figures, (The following article has been accepted by
Chaos. After it is published, it will be found at http://chaos.aip.org)
| null |
10.1063/1.4808248
| null |
q-bio.MN math.DS nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The global dynamics of gene regulatory networks are known to show robustness
to perturbations in the form of intrinsic and extrinsic noise, as well as
mutations of individual genes. One molecular mechanism underlying this
robustness has been identified as the action of so-called microRNAs that
operate via feedforward loops. We present results of a computational study,
using the modeling framework of stochastic Boolean networks, which explores the
role that such network motifs play in stabilizing global dynamics. The paper
introduces a new measure for the stability of stochastic networks. The results
show that certain types of feedforward loops do indeed buffer the network
against stochastic effects.
|
[
{
"created": "Tue, 30 Apr 2013 17:16:38 GMT",
"version": "v1"
},
{
"created": "Thu, 23 May 2013 03:37:37 GMT",
"version": "v2"
}
] |
2015-06-15
|
[
[
"Kadelka",
"Claus",
""
],
[
"Murrugarra",
"David",
""
],
[
"Laubenbacher",
"Reinhard",
""
]
] |
The global dynamics of gene regulatory networks are known to show robustness to perturbations in the form of intrinsic and extrinsic noise, as well as mutations of individual genes. One molecular mechanism underlying this robustness has been identified as the action of so-called microRNAs that operate via feedforward loops. We present results of a computational study, using the modeling framework of stochastic Boolean networks, which explores the role that such network motifs play in stabilizing global dynamics. The paper introduces a new measure for the stability of stochastic networks. The results show that certain types of feedforward loops do indeed buffer the network against stochastic effects.
|
1911.08509
|
Dami\'an G. Hern\'andez
|
Dami\'an G. Hern\'andez, Samuel J. Sober and Ilya Nemenman
|
Unsupervised Bayesian Ising Approximation for revealing the neural
dictionary in songbirds
| null | null | null | null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of deciphering how low-level patterns (action potentials in the
brain, amino acids in a protein, etc.) drive high-level biological features
(sensorimotor behavior, enzymatic function) represents the central challenge of
quantitative biology. The lack of general methods for doing so from the size of
datasets that can be collected experimentally severely limits our understanding
of the biological world. For example, in neuroscience, some sensory and motor
codes have been shown to consist of precisely timed multi-spike patterns.
However, the combinatorial complexity of such pattern codes have precluded
development of methods for their comprehensive analysis. Thus, just as it is
hard to predict a protein's function based on its sequence, we still do not
understand how to accurately predict an organism's behavior based on neural
activity. Here we derive a method for solving this class of problems. We
demonstrate its utility in an application to neural data, detecting precisely
timed spike patterns that code for specific motor behaviors in a songbird vocal
system. Our method detects such codewords with an arbitrary number of spikes,
does so from small data sets, and accounts for dependencies in occurrences of
codewords. Detecting such dictionaries of important spike patterns --- rather
than merely identifying the timescale on which such patterns exist, as in some
prior approaches --- opens the door for understanding fine motor control and
the neural bases of sensorimotor learning in animals. For example, for the
first time, we identify differences in encoding motor exploration versus
typical behavior. Crucially, our method can be used not only for analysis of
neural systems, but also for understanding the structure of correlations in
other biological and nonbiological datasets.
|
[
{
"created": "Tue, 19 Nov 2019 19:11:49 GMT",
"version": "v1"
}
] |
2019-11-21
|
[
[
"Hernández",
"Damián G.",
""
],
[
"Sober",
"Samuel J.",
""
],
[
"Nemenman",
"Ilya",
""
]
] |
The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so from the size of datasets that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes have precluded development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein's function based on its sequence, we still do not understand how to accurately predict an organism's behavior based on neural activity. Here we derive a method for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. Our method detects such codewords with an arbitrary number of spikes, does so from small data sets, and accounts for dependencies in occurrences of codewords. Detecting such dictionaries of important spike patterns --- rather than merely identifying the timescale on which such patterns exist, as in some prior approaches --- opens the door for understanding fine motor control and the neural bases of sensorimotor learning in animals. For example, for the first time, we identify differences in encoding motor exploration versus typical behavior. Crucially, our method can be used not only for analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
|
1611.02332
|
Peter Gawthrop
|
Peter J. Gawthrop and Edmund J. Crampin
|
Energy-based Analysis of Biomolecular Pathways
| null |
Proc. R. Soc. A 2017 473 20160825
|
10.1098/rspa.2016.0825
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decomposition of biomolecular reaction networks into pathways is a powerful
approach to the analysis of metabolic and signalling networks. Current
approaches based on analysis of the stoichiometric matrix reveal information
about steady-state mass flows (reaction rates) through the network. In this
work we show how pathway analysis of biomolecular networks can be extended
using an energy-based approach to provide information about energy flows
through the network. This energy-based approach is developed using the
engineering-inspired bond graph methodology to represent biomolecular reaction
networks.
The approach is introduced using glycolysis as an exemplar; and is then
applied to analyse the efficiency of free energy transduction in a biomolecular
cycle model of a transporter protein (Sodium-Glucose Transport Protein 1,
SGLT1).
The overall aim of our work is to present a framework for modelling and
analysis of biomolecular reactions and processes which considers energy flows
and losses as well as mass transport.
|
[
{
"created": "Mon, 7 Nov 2016 22:49:56 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2017 09:34:08 GMT",
"version": "v2"
},
{
"created": "Tue, 23 May 2017 07:41:21 GMT",
"version": "v3"
}
] |
2018-08-14
|
[
[
"Gawthrop",
"Peter J.",
""
],
[
"Crampin",
"Edmund J.",
""
]
] |
Decomposition of biomolecular reaction networks into pathways is a powerful approach to the analysis of metabolic and signalling networks. Current approaches based on analysis of the stoichiometric matrix reveal information about steady-state mass flows (reaction rates) through the network. In this work we show how pathway analysis of biomolecular networks can be extended using an energy-based approach to provide information about energy flows through the network. This energy-based approach is developed using the engineering-inspired bond graph methodology to represent biomolecular reaction networks. The approach is introduced using glycolysis as an exemplar; and is then applied to analyse the efficiency of free energy transduction in a biomolecular cycle model of a transporter protein (Sodium-Glucose Transport Protein 1, SGLT1). The overall aim of our work is to present a framework for modelling and analysis of biomolecular reactions and processes which considers energy flows and losses as well as mass transport.
|
1210.3561
|
Miroslaw Rewekant PhD MD
|
S. Piekarski, M. Rewekant
|
On separation of time scales in pharmacokinetics
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A lot of criticism against the standard formulation of pharmacokinetics has
been raised by several authors. It seems that the natural reaction for that
criticism is to comment it from the point of view of the theory of conservation
laws. Simple example of balance equations for the intravenous administration of
drug has been given in 2011 and the corresponding equations for extravasal
administration are in the text. In principle, the equations of that kind allow
one to describe in the self consistent manner different processes of
administration, distribution, metabolism and elimination of drugs. Moreover, it
is possible to model different pharmacokinetic parameters of the
non-compartmental pharmacokinetics and therefore to comment criticism of
Rosigno. However, for practical purposes one needs approximate methods, in
particular, those based on separation of the time scales. In this text, such
method is described and its effectiveness is discussed. Basic equations are in
the next chapter. Final remarks are at the end of the text.
|
[
{
"created": "Fri, 12 Oct 2012 16:13:37 GMT",
"version": "v1"
}
] |
2012-10-15
|
[
[
"Piekarski",
"S.",
""
],
[
"Rewekant",
"M.",
""
]
] |
A lot of criticism against the standard formulation of pharmacokinetics has been raised by several authors. It seems that the natural reaction for that criticism is to comment it from the point of view of the theory of conservation laws. Simple example of balance equations for the intravenous administration of drug has been given in 2011 and the corresponding equations for extravasal administration are in the text. In principle, the equations of that kind allow one to describe in the self consistent manner different processes of administration, distribution, metabolism and elimination of drugs. Moreover, it is possible to model different pharmacokinetic parameters of the non-compartmental pharmacokinetics and therefore to comment criticism of Rosigno. However, for practical purposes one needs approximate methods, in particular, those based on separation of the time scales. In this text, such method is described and its effectiveness is discussed. Basic equations are in the next chapter. Final remarks are at the end of the text.
|
2209.09680
|
Huw Day
|
Huw Day, N C Snaith
|
Stochastic Models for Replication Origin Spacings in Eukaryotic DNA
Replication
|
Reviewer feedback indicated we failed to acknowledge background
literature
| null | null | null |
q-bio.QM math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider eukaryotic DNA replication and in particular the role of
replication origins in this process. We focus on origins which are `active' -
that is, trigger themselves in the process before being read by the replication
forks of other origins. We initially consider the spacings of these active
replication origins in comparison to certain probability distributions of
spacings taken from random matrix theory. We see how the spacings between
neighbouring eigenvalues from certain collections of random matrices has some
potential for modelling the spacing between active origins. This suitability
can be further augmented with the use of uniform thinning which acts as a
continuous deformation between correlated eigenvalue spacings and exponential
(Poissonian) spacings. We model the process as a modified 2D Poisson process
with an added exclusion rule to identify active points based on their position
on the chromosome and trigger time relative to other origins. We see how this
can be reduced to a stochastic geometry problem and show analytically that two
active origins are unlikely to be close together, regardless of how many
non-active points are between them. In particular, we see how these active
origins repel linearly. We then see how data from various DNA datasets match
with simulations from our model. We see that whilst there is variety in the DNA
data, comparing the data with the model provides insight into the replication
origin distribution of various organisms.
|
[
{
"created": "Tue, 20 Sep 2022 12:31:46 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Nov 2022 14:03:53 GMT",
"version": "v2"
},
{
"created": "Sat, 8 Apr 2023 10:33:35 GMT",
"version": "v3"
}
] |
2023-04-11
|
[
[
"Day",
"Huw",
""
],
[
"Snaith",
"N C",
""
]
] |
We consider eukaryotic DNA replication and in particular the role of replication origins in this process. We focus on origins which are `active' - that is, trigger themselves in the process before being read by the replication forks of other origins. We initially consider the spacings of these active replication origins in comparison to certain probability distributions of spacings taken from random matrix theory. We see how the spacings between neighbouring eigenvalues from certain collections of random matrices has some potential for modelling the spacing between active origins. This suitability can be further augmented with the use of uniform thinning which acts as a continuous deformation between correlated eigenvalue spacings and exponential (Poissonian) spacings. We model the process as a modified 2D Poisson process with an added exclusion rule to identify active points based on their position on the chromosome and trigger time relative to other origins. We see how this can be reduced to a stochastic geometry problem and show analytically that two active origins are unlikely to be close together, regardless of how many non-active points are between them. In particular, we see how these active origins repel linearly. We then see how data from various DNA datasets match with simulations from our model. We see that whilst there is variety in the DNA data, comparing the data with the model provides insight into the replication origin distribution of various organisms.
|
2403.07925
|
David C. Williams
|
David C. Williams and Neil Inala
|
Physics-informed generative model for drug-like molecule conformers
|
To appear in the Journal of Chemical Information and Modeling
| null |
10.1021/acs.jcim.3c01816
| null |
q-bio.BM cs.LG physics.chem-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a diffusion-based, generative model for conformer generation. Our
model is focused on the reproduction of bonded structure and is constructed
from the associated terms traditionally found in classical force fields to
ensure a physically relevant representation. Techniques in deep learning are
used to infer atom typing and geometric parameters from a training set.
Conformer sampling is achieved by taking advantage of recent advancements in
diffusion-based generation. By training on large, synthetic data sets of
diverse, drug-like molecules optimized with the semiempirical GFN2-xTB method,
high accuracy is achieved for bonded parameters, exceeding that of
conventional, knowledge-based methods. Results are also compared to
experimental structures from the Protein Databank (PDB) and Cambridge
Structural Database (CSD).
|
[
{
"created": "Thu, 29 Feb 2024 17:11:08 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 00:21:25 GMT",
"version": "v2"
}
] |
2024-03-18
|
[
[
"Williams",
"David C.",
""
],
[
"Inala",
"Neil",
""
]
] |
We present a diffusion-based, generative model for conformer generation. Our model is focused on the reproduction of bonded structure and is constructed from the associated terms traditionally found in classical force fields to ensure a physically relevant representation. Techniques in deep learning are used to infer atom typing and geometric parameters from a training set. Conformer sampling is achieved by taking advantage of recent advancements in diffusion-based generation. By training on large, synthetic data sets of diverse, drug-like molecules optimized with the semiempirical GFN2-xTB method, high accuracy is achieved for bonded parameters, exceeding that of conventional, knowledge-based methods. Results are also compared to experimental structures from the Protein Databank (PDB) and Cambridge Structural Database (CSD).
|
1803.04363
|
Jingyi Zheng
|
Jingyi Zheng, Fushing Hsieh
|
Information of Epileptic Mechanism and its Systemic Change-points in a
Zebrafish's Brain-wide Calcium Imaging Video Data
|
8 Pages, 11 figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The epileptic mechanism is postulated as that an animal's neurons gradually
diminish their inhibition function coupled with enhanced excitation when an
epileptic event is approaching. Calcium imaging technique is designed to
directly record brain-wide neurons activity in order to discover the underlying
epileptic mechanism. In this paper, using one brain-wide calcium imaging video
of Zebrafish, we compute dynamic pattern information of the epileptic
mechanism, and devise three graphical displays to show the visible functional
aspect of epileptic mechanism over five inter-ictal periods. The foundation of
our data-driven computations for such dynamic patterns relies on one universal
phenomenon discovered across 696 informative pixels. This universality is that
each pixel's progressive 5-percentile process oscillates in an irregular
fashion at first, but, after the middle point of inter-ictal period, the
oscillation is replaced by a steady increasing trend. Such dynamic patterns are
collectively transformed into a visible systemic change-point as an early
warning signal (EWS) of an incoming epileptic event. We conclude through the
graphic displays that pattern information extracted from the calcium imaging
video realistically reveals the Zebrafish's authentic epileptic mechanism.
|
[
{
"created": "Wed, 14 Feb 2018 01:27:03 GMT",
"version": "v1"
}
] |
2018-03-13
|
[
[
"Zheng",
"Jingyi",
""
],
[
"Hsieh",
"Fushing",
""
]
] |
The epileptic mechanism is postulated as that an animal's neurons gradually diminish their inhibition function coupled with enhanced excitation when an epileptic event is approaching. Calcium imaging technique is designed to directly record brain-wide neurons activity in order to discover the underlying epileptic mechanism. In this paper, using one brain-wide calcium imaging video of Zebrafish, we compute dynamic pattern information of the epileptic mechanism, and devise three graphical displays to show the visible functional aspect of epileptic mechanism over five inter-ictal periods. The foundation of our data-driven computations for such dynamic patterns relies on one universal phenomenon discovered across 696 informative pixels. This universality is that each pixel's progressive 5-percentile process oscillates in an irregular fashion at first, but, after the middle point of inter-ictal period, the oscillation is replaced by a steady increasing trend. Such dynamic patterns are collectively transformed into a visible systemic change-point as an early warning signal (EWS) of an incoming epileptic event. We conclude through the graphic displays that pattern information extracted from the calcium imaging video realistically reveals the Zebrafish's authentic epileptic mechanism.
|
1712.06042
|
Sergei Maslov
|
Akshit Goyal, Veronika Dubinkina, Sergei Maslov
|
Multiple stable states in microbial communities explained by the stable
marriage problem
|
24 pages (including SI), 4 figures (+3 supplementary figures)
|
The ISME Journal 12, 2823-2834 (2018)
|
10.1038/s41396-018-0222-x
| null |
q-bio.PE cs.GT physics.bio-ph q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Experimental studies of microbial communities routinely reveal that they have
multiple stable states. While each of these states is generally resilient,
certain perturbations such as antibiotics, probiotics and diet shifts, result
in transitions to other states. Can we reliably both predict such stable states
as well as direct and control transitions between them? Here we present a new
conceptual model inspired by the stable marriage problem in game theory and
economics in which microbial communities naturally exhibit multiple stable
states, each state with a different species' abundance profile. Our model's
core ingredient is that microbes utilize nutrients one at a time while
competing with each other. Using only two ranked tables, one with microbes'
nutrient preferences and one with their competitive abilities, we can determine
all possible stable states as well as predict inter-state transitions,
triggered by the removal or addition of a specific nutrient or microbe.
Further, using an example of 7 Bacteroides species common to the human gut
utilizing 9 polysaccharides, we predict that mutual complementarity in nutrient
preferences enables these species to coexist at high abundances.
|
[
{
"created": "Sun, 17 Dec 2017 01:33:57 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jul 2018 08:32:37 GMT",
"version": "v2"
}
] |
2019-02-15
|
[
[
"Goyal",
"Akshit",
""
],
[
"Dubinkina",
"Veronika",
""
],
[
"Maslov",
"Sergei",
""
]
] |
Experimental studies of microbial communities routinely reveal that they have multiple stable states. While each of these states is generally resilient, certain perturbations such as antibiotics, probiotics and diet shifts, result in transitions to other states. Can we reliably both predict such stable states as well as direct and control transitions between them? Here we present a new conceptual model inspired by the stable marriage problem in game theory and economics in which microbial communities naturally exhibit multiple stable states, each state with a different species' abundance profile. Our model's core ingredient is that microbes utilize nutrients one at a time while competing with each other. Using only two ranked tables, one with microbes' nutrient preferences and one with their competitive abilities, we can determine all possible stable states as well as predict inter-state transitions, triggered by the removal or addition of a specific nutrient or microbe. Further, using an example of 7 Bacteroides species common to the human gut utilizing 9 polysaccharides, we predict that mutual complementarity in nutrient preferences enables these species to coexist at high abundances.
|
0808.0043
|
Jan Biro Dr
|
Jan C Biro
|
The Invention of Proteomic Code and mRNA Assisted Protein Folding
|
24 pages including 2 tables and 6 figures
| null | null | null |
q-bio.BM q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background The theoretical requirements for a genetic code were well defined
and modeled by George Gamow and Francis Crick in the 50-es. Their models
failed. However the valid Genetic Code, provided by Nirenberg and Matthaei in
1961, ignores many theoretical requirements for a perfect Code. Something is
simply missing from the canonical Code.
Results The 3x redundancy of the Genetic code is usually explained as a
necessity to increase the resistance of the mutation resistance of the genetic
information. However it has many additional roles. 1.) It has a periodical
structure which corresponds to the physico-chemical and structural properties
of amino acids. 2.) It provides physico-chemical definition of codon
boundaries. 3.) It defines a code for amino acid co-locations (interactions) in
the coded proteins. 4.) It regulates, through wobble bases the free folding
energy (and structure) of mRNAs. I shortly review the history of the Genetic
Code as well as my own published observations to provide a novel, original
explanation of its redundancy.
Conclusions The redundant Genetic Code contains biological information which
is additional to the 64/20 definition of amino acids. This additional
information is used to define the 3D structure of coding nucleic acids as well
as the coded proteins and it is called the Proteomic Code and mRNA Assisted
Protein Folding.
|
[
{
"created": "Fri, 1 Aug 2008 00:06:50 GMT",
"version": "v1"
}
] |
2008-08-04
|
[
[
"Biro",
"Jan C",
""
]
] |
Background The theoretical requirements for a genetic code were well defined and modeled by George Gamow and Francis Crick in the 50-es. Their models failed. However the valid Genetic Code, provided by Nirenberg and Matthaei in 1961, ignores many theoretical requirements for a perfect Code. Something is simply missing from the canonical Code. Results The 3x redundancy of the Genetic code is usually explained as a necessity to increase the resistance of the mutation resistance of the genetic information. However it has many additional roles. 1.) It has a periodical structure which corresponds to the physico-chemical and structural properties of amino acids. 2.) It provides physico-chemical definition of codon boundaries. 3.) It defines a code for amino acid co-locations (interactions) in the coded proteins. 4.) It regulates, through wobble bases the free folding energy (and structure) of mRNAs. I shortly review the history of the Genetic Code as well as my own published observations to provide a novel, original explanation of its redundancy. Conclusions The redundant Genetic Code contains biological information which is additional to the 64/20 definition of amino acids. This additional information is used to define the 3D structure of coding nucleic acids as well as the coded proteins and it is called the Proteomic Code and mRNA Assisted Protein Folding.
|
1506.07906
|
Aaron King
|
Clayton E. Cressler, Marguerite A. Butler, and Aaron A. King
|
Detecting adaptive evolution in phylogenetic comparative analysis using
the Ornstein-Uhlenbeck model
|
38 pages, in press at Systematic Biology
| null |
10.1093/sysbio/syv043
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phylogenetic comparative analysis is an approach to inferring evolutionary
process from a combination of phylogenetic and phenotypic data. The last few
years have seen increasingly sophisticated models employed in the evaluation of
more and more detailed evolutionary hypotheses, including adaptive hypotheses
with multiple selective optima and hypotheses with rate variation within and
across lineages. The statistical performance of these sophisticated models has
received relatively little systematic attention, however. We conducted an
extensive simulation study to quantify the statistical properties of a class of
models toward the simpler end of the spectrum that model phenotypic evolution
using Ornstein-Uhlenbeck processes. We focused on identifying where, how, and
why these methods break down so that users can apply them with greater
understanding of their strengths and weaknesses. Our analysis identifies three
key determinants of performance: a discriminability ratio, a signal-to-noise
ratio, and the number of taxa sampled. Interestingly, we find that
model-selection power can be high even in regions that were previously thought
to be difficult, such as when tree size is small. On the other hand, we find
that model parameters are in many circumstances difficult to estimate
accurately, indicating a relative paucity of information in the data relative
to these parameters. Nevertheless, we note that accurate model selection is
often possible when parameters are only weakly identified. Our results have
implications for more sophisticated methods inasmuch as the latter are
generalizations of the case we study.
|
[
{
"created": "Thu, 25 Jun 2015 21:57:02 GMT",
"version": "v1"
}
] |
2021-05-27
|
[
[
"Cressler",
"Clayton E.",
""
],
[
"Butler",
"Marguerite A.",
""
],
[
"King",
"Aaron A.",
""
]
] |
Phylogenetic comparative analysis is an approach to inferring evolutionary process from a combination of phylogenetic and phenotypic data. The last few years have seen increasingly sophisticated models employed in the evaluation of more and more detailed evolutionary hypotheses, including adaptive hypotheses with multiple selective optima and hypotheses with rate variation within and across lineages. The statistical performance of these sophisticated models has received relatively little systematic attention, however. We conducted an extensive simulation study to quantify the statistical properties of a class of models toward the simpler end of the spectrum that model phenotypic evolution using Ornstein-Uhlenbeck processes. We focused on identifying where, how, and why these methods break down so that users can apply them with greater understanding of their strengths and weaknesses. Our analysis identifies three key determinants of performance: a discriminability ratio, a signal-to-noise ratio, and the number of taxa sampled. Interestingly, we find that model-selection power can be high even in regions that were previously thought to be difficult, such as when tree size is small. On the other hand, we find that model parameters are in many circumstances difficult to estimate accurately, indicating a relative paucity of information in the data relative to these parameters. Nevertheless, we note that accurate model selection is often possible when parameters are only weakly identified. Our results have implications for more sophisticated methods inasmuch as the latter are generalizations of the case we study.
|
2009.02152
|
Xiaoqi Zhang
|
Xiaoqi Zhang, Zheng Ji, Yanqiao Zheng, Xinyue Ye, Dong Li
|
Evaluating the effect of city lock-down on controlling COVID-19
propagation through deep learning and network science models
|
27 pages, 9 figures
|
[J]. Cities, 2020: 102869
|
10.1016/j.cities.2020.102869
| null |
q-bio.PE physics.soc-ph q-bio.QM stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The special epistemic characteristics of the COVID-19, such as the long
incubation period and the infection through asymptomatic cases, put severe
challenge to the containment of its outbreak. By the end of March 2020, China
has successfully controlled the within-spreading of COVID-19 at a high cost of
locking down most of its major cities, including the epicenter, Wuhan. Since
the low accuracy of outbreak data before the mid of Feb. 2020 forms a major
technical concern on those studies based on statistic inference from the early
outbreak. We apply the supervised learning techniques to identify and train
NP-Net-SIR model which turns out robust under poor data quality condition. By
the trained model parameters, we analyze the connection between population flow
and the cross-regional infection connection strength, based on which a set of
counterfactual analysis is carried out to study the necessity of lock-down and
substitutability between lock-down and the other containment measures. Our
findings support the existence of non-lock-down-typed measures that can reach
the same containment consequence as the lock-down, and provide useful guideline
for the design of a more flexible containment strategy.
|
[
{
"created": "Fri, 4 Sep 2020 12:39:12 GMT",
"version": "v1"
}
] |
2020-09-07
|
[
[
"Zhang",
"Xiaoqi",
""
],
[
"Ji",
"Zheng",
""
],
[
"Zheng",
"Yanqiao",
""
],
[
"Ye",
"Xinyue",
""
],
[
"Li",
"Dong",
""
]
] |
The special epistemic characteristics of the COVID-19, such as the long incubation period and the infection through asymptomatic cases, put severe challenge to the containment of its outbreak. By the end of March 2020, China has successfully controlled the within-spreading of COVID-19 at a high cost of locking down most of its major cities, including the epicenter, Wuhan. Since the low accuracy of outbreak data before the mid of Feb. 2020 forms a major technical concern on those studies based on statistic inference from the early outbreak. We apply the supervised learning techniques to identify and train NP-Net-SIR model which turns out robust under poor data quality condition. By the trained model parameters, we analyze the connection between population flow and the cross-regional infection connection strength, based on which a set of counterfactual analysis is carried out to study the necessity of lock-down and substitutability between lock-down and the other containment measures. Our findings support the existence of non-lock-down-typed measures that can reach the same containment consequence as the lock-down, and provide useful guideline for the design of a more flexible containment strategy.
|
2204.00067
|
Muhammad Ammar Malik
|
Muhammad Ammar Malik, Alexander S. Lundervold and Tom Michoel
|
rfPhen2Gen: A machine learning based association study of brain imaging
phenotypes to genotypes
| null | null | null | null |
q-bio.GN cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Imaging genetic studies aim to find associations between genetic variants and
imaging quantitative traits. Traditional genome-wide association studies (GWAS)
are based on univariate statistical tests, but when multiple traits are
analyzed together they suffer from a multiple-testing problem and from not
taking into account correlations among the traits. An alternative approach to
multi-trait GWAS is to reverse the functional relation between genotypes and
traits, by fitting a multivariate regression model to predict genotypes from
multiple traits simultaneously. However, current reverse genotype prediction
approaches are mostly based on linear models. Here, we evaluated random forest
regression (RFR) as a method to predict SNPs from imaging QTs and identify
biologically relevant associations. We learned machine learning models to
predict 518,484 SNPs using 56 brain imaging QTs. We observed that genotype
regression error is a better indicator of permutation p-value significance than
genotype classification accuracy. SNPs within the known Alzheimer disease (AD)
risk gene APOE had lowest RMSE for lasso and random forest, but not ridge
regression. Moreover, random forests identified additional SNPs that were not
prioritized by the linear models but are known to be associated with
brain-related disorders. Feature selection identified well-known brain regions
associated with AD,like the hippocampus and amygdala, as important predictors
of the most significant SNPs. In summary, our results indicate that non-linear
methods like random forests may offer additional insights into
phenotype-genotype associations compared to traditional linear multi-variate
GWAS methods.
|
[
{
"created": "Thu, 31 Mar 2022 20:15:22 GMT",
"version": "v1"
}
] |
2022-04-04
|
[
[
"Malik",
"Muhammad Ammar",
""
],
[
"Lundervold",
"Alexander S.",
""
],
[
"Michoel",
"Tom",
""
]
] |
Imaging genetic studies aim to find associations between genetic variants and imaging quantitative traits. Traditional genome-wide association studies (GWAS) are based on univariate statistical tests, but when multiple traits are analyzed together they suffer from a multiple-testing problem and from not taking into account correlations among the traits. An alternative approach to multi-trait GWAS is to reverse the functional relation between genotypes and traits, by fitting a multivariate regression model to predict genotypes from multiple traits simultaneously. However, current reverse genotype prediction approaches are mostly based on linear models. Here, we evaluated random forest regression (RFR) as a method to predict SNPs from imaging QTs and identify biologically relevant associations. We learned machine learning models to predict 518,484 SNPs using 56 brain imaging QTs. We observed that genotype regression error is a better indicator of permutation p-value significance than genotype classification accuracy. SNPs within the known Alzheimer disease (AD) risk gene APOE had lowest RMSE for lasso and random forest, but not ridge regression. Moreover, random forests identified additional SNPs that were not prioritized by the linear models but are known to be associated with brain-related disorders. Feature selection identified well-known brain regions associated with AD,like the hippocampus and amygdala, as important predictors of the most significant SNPs. In summary, our results indicate that non-linear methods like random forests may offer additional insights into phenotype-genotype associations compared to traditional linear multi-variate GWAS methods.
|
2105.09998
|
Eliana Paix\~ao
|
Eliana Celestino da Paixao do Rodrigues dos Santos
|
Distribuicao e diversidade de herbaceas de sub-bosque em uma floresta de
terra firme da amazonia meridional
|
Masters thesis, in Portuguese. 55 paginas e 9 figuras
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Environmental heterogeneity is a determining factor of the structure of
biological communities. Thus, understanding the distribution of species along
environmental gradients provides assistance to conservation. The goal of this
study was to determine the distribution pattern of the herbaceous community in
three areas of the Southern Amazon. Sampling was conducted in three modules
totaling 39 permanent plots according to the protocol of collection of the
Program for Research in Biodiversity. All herbaceous and ground hemiepiphyte
subjects above 5 cm were recorded. Multivariate analyses were used to summarize
the species composition, multiple regression models were used to determine if
environmental variables and disturbance caused by logging influenced the
composition of the herbaceous community. We recorded 7.965 individuals
representing 70 species. The distance of the watercourse was the main factor
associated with the distribution of the species, interactions between variables
showed that canopy openness and sand content also influence the species
composition, and there was no effect on the number of trees cut. Species
richness increased in areas where canopy cover was higher and it decreases as
it becomes more distant from the watercourse. The occurrences of preferred
habitats for some species have, in addition to an ecological interest, a
practical significance for the conservation and management of these species.
Currently, the area of preservation of streams provided by the Forest Code in
effect is 30 m for rivers up to 10 m wide. However, this study shows that the
range of protection should be extended to at least 100 m wide.
|
[
{
"created": "Thu, 20 May 2021 19:17:11 GMT",
"version": "v1"
}
] |
2021-05-24
|
[
[
"Santos",
"Eliana Celestino da Paixao do Rodrigues dos",
""
]
] |
Environmental heterogeneity is a determining factor of the structure of biological communities. Thus, understanding the distribution of species along environmental gradients provides assistance to conservation. The goal of this study was to determine the distribution pattern of the herbaceous community in three areas of the Southern Amazon. Sampling was conducted in three modules totaling 39 permanent plots according to the protocol of collection of the Program for Research in Biodiversity. All herbaceous and ground hemiepiphyte subjects above 5 cm were recorded. Multivariate analyses were used to summarize the species composition, multiple regression models were used to determine if environmental variables and disturbance caused by logging influenced the composition of the herbaceous community. We recorded 7.965 individuals representing 70 species. The distance of the watercourse was the main factor associated with the distribution of the species, interactions between variables showed that canopy openness and sand content also influence the species composition, and there was no effect on the number of trees cut. Species richness increased in areas where canopy cover was higher and it decreases as it becomes more distant from the watercourse. The occurrences of preferred habitats for some species have, in addition to an ecological interest, a practical significance for the conservation and management of these species. Currently, the area of preservation of streams provided by the Forest Code in effect is 30 m for rivers up to 10 m wide. However, this study shows that the range of protection should be extended to at least 100 m wide.
|
2112.10548
|
Jong Woo Kim
|
Jong Woo Kim (1), Niels Krausch (1), Judit Aizpuru (1), Tilman Barz
(2), Sergio Lucia (3), Ernesto C. Mart\'inez (4), Peter Neubauer (1), Mariano
Nicolas Cruz Bournazou (1 and 5) ((1) Technische Universit\"at Berlin, Chair
of Bioprocess Engineering, Strasse des 17. Juni 135, 10623 Berlin, Germany,
(2) AIT Austrian Institute of Technology GmbH, Giefingasse 2, 1210 Vienna,
Austria, (3) Technische Universit\"at Dortmund, Department of Biochemical and
Chemical Engineering, Emil-Figge-Strasse 70, 44227 Dortmund, Germany, (4)
INGAR (CONICET-UTN), Avellandeda 3657, S3002GJC Santa Fe, Argentina, (5)
DataHow AG, Z\"urichstrasse 137, 8600 D\"ubendorf, Switzerland)
|
Model predictive control guided with optimal experimental design for
pulse-based parallel cultivation
|
6 pages, 4 figures, submitted to IFAC Conference
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optimal experimental design for parameter precision attempts to maximize the
information content in experimental data for a most effective identification of
parametric model. With the recent developments in miniaturization and
parallelization of cultivation platforms for high-throughput screening of
optimal growth conditions massive amounts of informative data can be generated
with few experiments. Increasing the quantity of the data means to increase the
number of parameters and experimental design variables which might deteriorate
the identifiability and hamper the online computation of optimal inputs. To
reduce the problem complexity, in this work, we introduce an auxiliary
controller at a lower level that tracks the optimal feeding strategy computed
by a high-level optimizer in an online fashion. The hierarchical framework is
especially interesting for the operation under constraints. The key aspect of
this method are discussed together with an in silico study considering parallel
glucose limited bacterial fed batch cultivations.
|
[
{
"created": "Mon, 20 Dec 2021 14:20:42 GMT",
"version": "v1"
}
] |
2021-12-21
|
[
[
"Kim",
"Jong Woo",
"",
"1 and 5"
],
[
"Krausch",
"Niels",
"",
"1 and 5"
],
[
"Aizpuru",
"Judit",
"",
"1 and 5"
],
[
"Barz",
"Tilman",
"",
"1 and 5"
],
[
"Lucia",
"Sergio",
"",
"1 and 5"
],
[
"Martínez",
"Ernesto C.",
"",
"1 and 5"
],
[
"Neubauer",
"Peter",
"",
"1 and 5"
],
[
"Bournazou",
"Mariano Nicolas Cruz",
"",
"1 and 5"
]
] |
Optimal experimental design for parameter precision attempts to maximize the information content in experimental data for a most effective identification of parametric model. With the recent developments in miniaturization and parallelization of cultivation platforms for high-throughput screening of optimal growth conditions massive amounts of informative data can be generated with few experiments. Increasing the quantity of the data means to increase the number of parameters and experimental design variables which might deteriorate the identifiability and hamper the online computation of optimal inputs. To reduce the problem complexity, in this work, we introduce an auxiliary controller at a lower level that tracks the optimal feeding strategy computed by a high-level optimizer in an online fashion. The hierarchical framework is especially interesting for the operation under constraints. The key aspect of this method are discussed together with an in silico study considering parallel glucose limited bacterial fed batch cultivations.
|
0807.1039
|
Thierry Rabilloud
|
Thierry Rabilloud (BBSI)
|
Keynotes on membrane proteomics
| null |
Sub-cellular biochemistry 43 (2007) 3-11
| null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This review article deals with the specificities of the proteomics analysis
of membrane proteins.
|
[
{
"created": "Mon, 7 Jul 2008 15:27:20 GMT",
"version": "v1"
}
] |
2008-07-08
|
[
[
"Rabilloud",
"Thierry",
"",
"BBSI"
]
] |
This review article deals with the specificities of the proteomics analysis of membrane proteins.
|
2102.03373
|
Stefano De Leo
|
Stefano De Leo, Manoel P. Araujo
|
A modelling study across the Italian regions: Lockdown, testing
strategy, colored zones, and skew-normal distributions. How a numerical index
of pandemic criticality could be useful in tackling the CoViD-19
|
25 pages, 10 figures, 3 tables
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As Europe is facing the second wave of the CoViD-19 pandemic, each country
should carefully review how it dealt with the first wave of outbreak. Lessons
from the first experience should be useful to avoid indiscriminate closures
and, above all, to determine universal (understandable) parameters to guide the
introduction of containment measures to reduce the spreading of the virus. The
use of few (effective) parameters is indeed of extreme importance to create a
link between authorities and population, allowing the latter to understand the
reason for some restrictions and, consequently, to allow an active
participation in the fight against the pandemic.
Testing strategies, fitting skew parameters (as mean, mode, standard
deviation, and skewness), mortality rates, and weekly CoViD-19 spreading data,
as more people are getting infected, were used to compare the first wave of the
outbreak in the Italian regions and to determine which parameters have to be
checked before introducing restrictive containment measures. We propose few
\textit{universal} parameters that, once appropriately weighed, could be useful
to correctly differentiate the pandemic situation in the national territory and
to rapidly assign the properly pandemic risk to each region.
|
[
{
"created": "Fri, 5 Feb 2021 19:00:37 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 12:45:45 GMT",
"version": "v2"
}
] |
2021-02-10
|
[
[
"De Leo",
"Stefano",
""
],
[
"Araujo",
"Manoel P.",
""
]
] |
As Europe is facing the second wave of the CoViD-19 pandemic, each country should carefully review how it dealt with the first wave of outbreak. Lessons from the first experience should be useful to avoid indiscriminate closures and, above all, to determine universal (understandable) parameters to guide the introduction of containment measures to reduce the spreading of the virus. The use of few (effective) parameters is indeed of extreme importance to create a link between authorities and population, allowing the latter to understand the reason for some restrictions and, consequently, to allow an active participation in the fight against the pandemic. Testing strategies, fitting skew parameters (as mean, mode, standard deviation, and skewness), mortality rates, and weekly CoViD-19 spreading data, as more people are getting infected, were used to compare the first wave of the outbreak in the Italian regions and to determine which parameters have to be checked before introducing restrictive containment measures. We propose few \textit{universal} parameters that, once appropriately weighed, could be useful to correctly differentiate the pandemic situation in the national territory and to rapidly assign the properly pandemic risk to each region.
|
1403.3066
|
Frederick Matsen IV
|
Connor O. McCoy, Trevor Bedford, Vladimir N. Minin, Philip Bradley,
Harlan Robins and Frederick A. Matsen IV
|
Quantifying evolutionary constraints on B cell affinity maturation
|
Previously entitled "Substitution and site-specific selection driving
B cell affinity maturation is consistent across individuals"
| null |
10.1098/rstb.2014-0244
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The antibody repertoire of each individual is continuously updated by the
evolutionary process of B cell receptor mutation and selection. It has recently
become possible to gain detailed information concerning this process through
high-throughput sequencing. Here, we develop modern statistical molecular
evolution methods for the analysis of B cell sequence data, and then apply them
to a very deep short-read data set of B cell receptors. We find that the
substitution process is conserved across individuals but varies significantly
across gene segments. We investigate selection on B cell receptors using a
novel method that side-steps the difficulties encountered by previous work in
differentiating between selection and motif-driven mutation; this is done
through stochastic mapping and empirical Bayes estimators that compare the
evolution of in-frame and out-of-frame rearrangements. We use this new method
to derive a per-residue map of selection, which provides a more nuanced view of
the constraints on framework and variable regions.
|
[
{
"created": "Wed, 12 Mar 2014 19:01:41 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Jul 2014 23:41:02 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jan 2015 22:45:43 GMT",
"version": "v3"
},
{
"created": "Fri, 8 May 2015 13:56:59 GMT",
"version": "v4"
}
] |
2015-05-11
|
[
[
"McCoy",
"Connor O.",
""
],
[
"Bedford",
"Trevor",
""
],
[
"Minin",
"Vladimir N.",
""
],
[
"Bradley",
"Philip",
""
],
[
"Robins",
"Harlan",
""
],
[
"Matsen",
"Frederick A.",
"IV"
]
] |
The antibody repertoire of each individual is continuously updated by the evolutionary process of B cell receptor mutation and selection. It has recently become possible to gain detailed information concerning this process through high-throughput sequencing. Here, we develop modern statistical molecular evolution methods for the analysis of B cell sequence data, and then apply them to a very deep short-read data set of B cell receptors. We find that the substitution process is conserved across individuals but varies significantly across gene segments. We investigate selection on B cell receptors using a novel method that side-steps the difficulties encountered by previous work in differentiating between selection and motif-driven mutation; this is done through stochastic mapping and empirical Bayes estimators that compare the evolution of in-frame and out-of-frame rearrangements. We use this new method to derive a per-residue map of selection, which provides a more nuanced view of the constraints on framework and variable regions.
|
1911.11948
|
Sang Woo Park
|
Sang Woo Park and Benjamin M. Bolker
|
A note on observation processes in epidemic models
|
9 pages, 2 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many disease models focus on characterizing the underlying transmission
mechanism but make simple, possibly naive assumptions about how infections are
reported. In this note, we use a simple deterministic
Susceptible-Infected-Removed (SIR) model to compare two common assumptions
about disease incidence reports: individuals can report their infection as soon
as they become infected or as soon as they recover. We show that incorrect
assumptions about the underlying observation processes can bias estimates of
the basic reproduction number and lead to overly narrow confidence intervals.
|
[
{
"created": "Wed, 27 Nov 2019 04:37:51 GMT",
"version": "v1"
}
] |
2019-11-28
|
[
[
"Park",
"Sang Woo",
""
],
[
"Bolker",
"Benjamin M.",
""
]
] |
Many disease models focus on characterizing the underlying transmission mechanism but make simple, possibly naive assumptions about how infections are reported. In this note, we use a simple deterministic Susceptible-Infected-Removed (SIR) model to compare two common assumptions about disease incidence reports: individuals can report their infection as soon as they become infected or as soon as they recover. We show that incorrect assumptions about the underlying observation processes can bias estimates of the basic reproduction number and lead to overly narrow confidence intervals.
|
1010.3845
|
Anna Melbinger
|
Anna Melbinger, Jonas Cremer, Erwin Frey
|
Evolutionary game theory in growing populations
|
4 pages, 2 figures and 2 pages supplementary information
|
Phys. Rev. Lett. 105, 178101 (2010)
|
10.1103/PhysRevLett.105.178101
| null |
q-bio.PE physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing theoretical models of evolution focus on the relative fitness
advantages of different mutants in a population while the dynamic behavior of
the population size is mostly left unconsidered. We here present a generic
stochastic model which combines the growth dynamics of the population and its
internal evolution. Our model thereby accounts for the fact that both
evolutionary and growth dynamics are based on individual reproduction events
and hence are highly coupled and stochastic in nature. We exemplify our
approach by studying the dilemma of cooperation in growing populations and show
that genuinely stochastic events can ease the dilemma by leading to a transient
but robust increase in cooperation
|
[
{
"created": "Tue, 19 Oct 2010 10:19:13 GMT",
"version": "v1"
}
] |
2010-10-20
|
[
[
"Melbinger",
"Anna",
""
],
[
"Cremer",
"Jonas",
""
],
[
"Frey",
"Erwin",
""
]
] |
Existing theoretical models of evolution focus on the relative fitness advantages of different mutants in a population while the dynamic behavior of the population size is mostly left unconsidered. We here present a generic stochastic model which combines the growth dynamics of the population and its internal evolution. Our model thereby accounts for the fact that both evolutionary and growth dynamics are based on individual reproduction events and hence are highly coupled and stochastic in nature. We exemplify our approach by studying the dilemma of cooperation in growing populations and show that genuinely stochastic events can ease the dilemma by leading to a transient but robust increase in cooperation
|
2212.06505
|
Katherine Benjamin
|
Katherine Benjamin, Aneesha Bhandari, Zhouchun Shang, Yanan Xing,
Yanru An, Nannan Zhang, Yong Hou, Ulrike Tillmann, Katherine R. Bull, Heather
A. Harrington
|
Multiscale topology classifies and quantifies cell types in subcellular
spatial transcriptomics
|
Main text: 8 pages, 4 figures. Supplement: 12 pages, 5 figures
| null | null | null |
q-bio.QM math.AT q-bio.GN stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
Spatial transcriptomics has the potential to transform our understanding of
RNA expression in tissues. Classical array-based technologies produce
multiple-cell-scale measurements requiring deconvolution to recover single cell
information. However, rapid advances in subcellular measurement of RNA
expression at whole-transcriptome depth necessitate a fundamentally different
approach. To integrate single-cell RNA-seq data with nanoscale spatial
transcriptomics, we present a topological method for automatic cell type
identification (TopACT). Unlike popular decomposition approaches to
multicellular resolution data, TopACT is able to pinpoint the spatial locations
of individual sparsely dispersed cells without prior knowledge of cell
boundaries. Pairing TopACT with multiparameter persistent homology landscapes
predicts immune cells forming a peripheral ring structure within kidney
glomeruli in a murine model of lupus nephritis, which we experimentally
validate with immunofluorescent imaging. The proposed topological data analysis
unifies multiple biological scales, from subcellular gene expression to
multicellular tissue organization.
|
[
{
"created": "Tue, 13 Dec 2022 11:36:25 GMT",
"version": "v1"
}
] |
2022-12-14
|
[
[
"Benjamin",
"Katherine",
""
],
[
"Bhandari",
"Aneesha",
""
],
[
"Shang",
"Zhouchun",
""
],
[
"Xing",
"Yanan",
""
],
[
"An",
"Yanru",
""
],
[
"Zhang",
"Nannan",
""
],
[
"Hou",
"Yong",
""
],
[
"Tillmann",
"Ulrike",
""
],
[
"Bull",
"Katherine R.",
""
],
[
"Harrington",
"Heather A.",
""
]
] |
Spatial transcriptomics has the potential to transform our understanding of RNA expression in tissues. Classical array-based technologies produce multiple-cell-scale measurements requiring deconvolution to recover single cell information. However, rapid advances in subcellular measurement of RNA expression at whole-transcriptome depth necessitate a fundamentally different approach. To integrate single-cell RNA-seq data with nanoscale spatial transcriptomics, we present a topological method for automatic cell type identification (TopACT). Unlike popular decomposition approaches to multicellular resolution data, TopACT is able to pinpoint the spatial locations of individual sparsely dispersed cells without prior knowledge of cell boundaries. Pairing TopACT with multiparameter persistent homology landscapes predicts immune cells forming a peripheral ring structure within kidney glomeruli in a murine model of lupus nephritis, which we experimentally validate with immunofluorescent imaging. The proposed topological data analysis unifies multiple biological scales, from subcellular gene expression to multicellular tissue organization.
|
1111.6494
|
Taiki Takahashi
|
Taiki Takahashi
|
Toward molecular neuroeconomics of obesity
| null | null | null | null |
q-bio.NC q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because obesity is a risk factor for many serious illnesses such as diabetes,
better understandings of obesity and eating disorders have been attracting
attention in neurobiology, psychiatry, and neuroeconomics. This paper presents
future study directions by unifying (i) economic theory of addiction and
obesity (Becker and Murphy, 1988; Levy 2002; Dragone 2009), and (ii) recent
empirical findings in neuroeconomics and neurobiology of obesity and addiction.
It is suggested that neurobiological substrates such as adiponectin, dopamine
(D2 receptors), endocannabinoids, ghrelin, leptin, nesfatin-1, norepinephrine,
orexin, oxytocin, serotonin, vasopressin, CCK, GLP-1, MCH, PYY, and stress
hormones (e.g., CRF) in the brain (e.g., OFC, VTA, NAcc, and the hypothalamus)
may determine parameters in the economic theory of obesity. Also, the
importance of introducing time-inconsistent and gain/loss-asymmetrical temporal
discounting (intertemporal choice) models based on Tsallis' statistics and
incorporating time-perception parameters into the neuroeconomic theory is
emphasized. Future directions in the application of the theory to studies in
neuroeconomics and neuropsychiatry of obesity at the molecular level, which may
help medical/psychopharmacological treatments of obesity (e.g., with
sibutramine), are discussed.
|
[
{
"created": "Tue, 22 Nov 2011 22:41:59 GMT",
"version": "v1"
}
] |
2011-11-29
|
[
[
"Takahashi",
"Taiki",
""
]
] |
Because obesity is a risk factor for many serious illnesses such as diabetes, better understandings of obesity and eating disorders have been attracting attention in neurobiology, psychiatry, and neuroeconomics. This paper presents future study directions by unifying (i) economic theory of addiction and obesity (Becker and Murphy, 1988; Levy 2002; Dragone 2009), and (ii) recent empirical findings in neuroeconomics and neurobiology of obesity and addiction. It is suggested that neurobiological substrates such as adiponectin, dopamine (D2 receptors), endocannabinoids, ghrelin, leptin, nesfatin-1, norepinephrine, orexin, oxytocin, serotonin, vasopressin, CCK, GLP-1, MCH, PYY, and stress hormones (e.g., CRF) in the brain (e.g., OFC, VTA, NAcc, and the hypothalamus) may determine parameters in the economic theory of obesity. Also, the importance of introducing time-inconsistent and gain/loss-asymmetrical temporal discounting (intertemporal choice) models based on Tsallis' statistics and incorporating time-perception parameters into the neuroeconomic theory is emphasized. Future directions in the application of the theory to studies in neuroeconomics and neuropsychiatry of obesity at the molecular level, which may help medical/psychopharmacological treatments of obesity (e.g., with sibutramine), are discussed.
|
1101.1013
|
Chris Adami
|
Evan D. Dorn, Kenneth H. Nealson, and Christoph Adami
|
Monomer abundance distribution patterns as a universal biosignature:
Examples from terrestrial and digital life
|
35 pages, 5 figures. Supplementary material (two movie files)
available upon request. To appear in J. Mol. Evol
|
J. Molec. Evol.72:283-295, 2011
| null | null |
q-bio.BM astro-ph.EP physics.bio-ph q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Organisms leave a distinctive chemical signature in their environment because
they synthesize those molecules that maximize their fitness. As a result, the
relative concentrations of related chemical monomers in life-bearing
environmental samples reflect, in part, those compounds' adaptive utility. In
contrast, rates of molecular synthesis in a lifeless environment are dictated
by reaction kinetics and thermodynamics, so concentrations of related monomers
in abiotic samples tend to exhibit specific patterns dominated by small, easily
formed, low-formation-energy molecules. We contend that this distinction can
serve as a universal biosignature: the measurement of chemical concentration
ratios that belie formation kinetics or equilibrium thermodynamics indicates
the likely presence of life. We explore the features of this biosignature as
observed in amino acids and carboxylic acids, using published data from
numerous studies of terrestrial sediments, abiotic (spark, UV, and high-energy
proton) synthesis experments, and meteorite bodies. We then compare these data
to the results of experimental studies of an evolving digital life system. We
observe the robust and repeatable evolution of an analogous biosignature in a
digital lifeform, suggesting that evolutionary selection necessarily constrains
organism composition and that the monomer abundance biosignature phenomenon is
universal to evolved biosystems.
|
[
{
"created": "Wed, 5 Jan 2011 15:51:02 GMT",
"version": "v1"
}
] |
2011-05-05
|
[
[
"Dorn",
"Evan D.",
""
],
[
"Nealson",
"Kenneth H.",
""
],
[
"Adami",
"Christoph",
""
]
] |
Organisms leave a distinctive chemical signature in their environment because they synthesize those molecules that maximize their fitness. As a result, the relative concentrations of related chemical monomers in life-bearing environmental samples reflect, in part, those compounds' adaptive utility. In contrast, rates of molecular synthesis in a lifeless environment are dictated by reaction kinetics and thermodynamics, so concentrations of related monomers in abiotic samples tend to exhibit specific patterns dominated by small, easily formed, low-formation-energy molecules. We contend that this distinction can serve as a universal biosignature: the measurement of chemical concentration ratios that belie formation kinetics or equilibrium thermodynamics indicates the likely presence of life. We explore the features of this biosignature as observed in amino acids and carboxylic acids, using published data from numerous studies of terrestrial sediments, abiotic (spark, UV, and high-energy proton) synthesis experments, and meteorite bodies. We then compare these data to the results of experimental studies of an evolving digital life system. We observe the robust and repeatable evolution of an analogous biosignature in a digital lifeform, suggesting that evolutionary selection necessarily constrains organism composition and that the monomer abundance biosignature phenomenon is universal to evolved biosystems.
|
1308.0879
|
Takanobu Yamanobe
|
Takanobu Yamanobe
|
Global Dynamics of a Stochastic Neuronal Oscillator
|
45 pages, 9 figures, authors homepage
http://niseiri.med.hokudai.ac.jp/yamanobe/indexeng.html
| null |
10.1103/PhysRevE.88.052709
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nonlinear oscillators have been used to model neurons that fire periodically
in the absence of input. These oscillators, which are called neuronal
oscillator, share some common response structures with other biological
oscillations. In this study, we analyze the dependence of the global dynamics
of an impulse-driven stochastic neuronal oscillator on the relaxation rate to
the limit cycle, the strength of the intrinsic noise, and the impulsive input
parameters. To do this, we use a Markov operator that both reflects the density
evolution of the oscillator and is an extension of the phase transition curve,
which describes the phase shift due to a single isolated impulse. Previously,
we derived the Markov operator for the finite relaxation rate that describes
the dynamics of the entire phase plane. Here, we construct a Markov operator
for the infinite relaxation rate that describes the stochastic dynamics
restricted to the limit cycle. In both cases, the response of the stochastic
neuronal oscillator to time-varying impulses is described by a product of
Markov operators. Furthermore, we calculate the number of spikes between two
consecutive impulses to relate the dynamics of the oscillator to the number of
spikes per unit time and the interspike interval density. Specifically, we
analyze the dynamics of the number of spikes per unit time based on the
properties of the Markov operators. Each Markov operator can be decomposed into
stationary and transient components based on the properties of the eigenvalues
and eigenfunctions. This allows us to evaluate the difference in the number of
spikes per unit time between the stationary and transient responses of the
oscillator, which we show to be based on the dependence of the oscillator on
past activity. Our analysis shows how the duration of the past neuronal
activity depends on the relaxation rate, the noise strength and the input
parameters.
|
[
{
"created": "Mon, 5 Aug 2013 03:42:36 GMT",
"version": "v1"
}
] |
2015-06-16
|
[
[
"Yamanobe",
"Takanobu",
""
]
] |
Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillator, share some common response structures with other biological oscillations. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength and the input parameters.
|
2006.14752
|
Shashank Reddy Vadyala
|
Shashank Reddy Vadyala, Sai Nethra Betgeri, Eric A. Sherer, Amod
Amritphale
|
Prediction of the Number of COVID-19 Confirmed Cases Based on
K-Means-LSTM
| null | null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
COVID-19 is a pandemic disease that began to rapidly spread in the US with
the first case detected on January 19, 2020, in Washington State. March 9,
2020, and then increased rapidly with total cases of 25,739 as of April 20,
2020. The Covid-19 pandemic is so unnerving that it is difficult to understand
how any person is affected by the virus. Although most people with coronavirus
81%, according to the U.S. Centers for Disease Control and Prevention (CDC),
will have little to mild symptoms, others may rely on a ventilator to breathe
or not at all. SEIR models have broad applicability in predicting the outcome
of the population with a variety of diseases. However, many researchers use
these models without validating the necessary hypotheses. Far too many
researchers often "overfit" the data by using too many predictor variables and
small sample sizes to create models. Models thus developed are unlikely to
stand validity check on a separate group of population and regions. The
researcher remains unaware that overfitting has occurred, without attempting
such validation. In the paper, we present a combination algorithm that combines
similar days features selection based on the region using Xgboost, K Means, and
long short-term memory (LSTM) neural networks to construct a prediction model
(i.e., K-Means-LSTM) for short-term COVID-19 cases forecasting in Louisana
state USA. The weighted k-means algorithm based on extreme gradient boosting is
used to evaluate the similarity between the forecasts and past days. The
results show that the method with K-Means-LSTM has a higher accuracy with an
RMSE of 601.20 whereas the SEIR model with an RMSE of 3615.83.
|
[
{
"created": "Fri, 26 Jun 2020 01:42:07 GMT",
"version": "v1"
}
] |
2020-06-30
|
[
[
"Vadyala",
"Shashank Reddy",
""
],
[
"Betgeri",
"Sai Nethra",
""
],
[
"Sherer",
"Eric A.",
""
],
[
"Amritphale",
"Amod",
""
]
] |
COVID-19 is a pandemic disease that began to rapidly spread in the US with the first case detected on January 19, 2020, in Washington State. March 9, 2020, and then increased rapidly with total cases of 25,739 as of April 20, 2020. The Covid-19 pandemic is so unnerving that it is difficult to understand how any person is affected by the virus. Although most people with coronavirus 81%, according to the U.S. Centers for Disease Control and Prevention (CDC), will have little to mild symptoms, others may rely on a ventilator to breathe or not at all. SEIR models have broad applicability in predicting the outcome of the population with a variety of diseases. However, many researchers use these models without validating the necessary hypotheses. Far too many researchers often "overfit" the data by using too many predictor variables and small sample sizes to create models. Models thus developed are unlikely to stand validity check on a separate group of population and regions. The researcher remains unaware that overfitting has occurred, without attempting such validation. In the paper, we present a combination algorithm that combines similar days features selection based on the region using Xgboost, K Means, and long short-term memory (LSTM) neural networks to construct a prediction model (i.e., K-Means-LSTM) for short-term COVID-19 cases forecasting in Louisana state USA. The weighted k-means algorithm based on extreme gradient boosting is used to evaluate the similarity between the forecasts and past days. The results show that the method with K-Means-LSTM has a higher accuracy with an RMSE of 601.20 whereas the SEIR model with an RMSE of 3615.83.
|
2309.16685
|
Truong Son Hy
|
Nhat Khang Ngo and Truong Son Hy
|
Target-aware Variational Auto-encoders for Ligand Generation with
Multimodal Protein Representation Learning
| null | null | null | null |
q-bio.BM cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Without knowledge of specific pockets, generating ligands based on the global
structure of a protein target plays a crucial role in drug discovery as it
helps reduce the search space for potential drug-like candidates in the
pipeline. However, contemporary methods require optimizing tailored networks
for each protein, which is arduous and costly. To address this issue, we
introduce TargetVAE, a target-aware variational auto-encoder that generates
ligands with high binding affinities to arbitrary protein targets, guided by a
novel multimodal deep neural network built based on graph Transformers as the
prior for the generative model. This is the first effort to unify different
representations of proteins (e.g., sequence of amino-acids, 3D structure) into
a single model that we name as Protein Multimodal Network (PMN). Our multimodal
architecture learns from the entire protein structures and is able to capture
their sequential, topological and geometrical information. We showcase the
superiority of our approach by conducting extensive experiments and
evaluations, including the assessment of generative model quality, ligand
generation for unseen targets, docking score computation, and binding affinity
prediction. Empirical results demonstrate the promising performance of our
proposed approach. Our software package is publicly available at
https://github.com/HySonLab/Ligand_Generation
|
[
{
"created": "Wed, 2 Aug 2023 12:08:17 GMT",
"version": "v1"
}
] |
2023-10-02
|
[
[
"Ngo",
"Nhat Khang",
""
],
[
"Hy",
"Truong Son",
""
]
] |
Without knowledge of specific pockets, generating ligands based on the global structure of a protein target plays a crucial role in drug discovery as it helps reduce the search space for potential drug-like candidates in the pipeline. However, contemporary methods require optimizing tailored networks for each protein, which is arduous and costly. To address this issue, we introduce TargetVAE, a target-aware variational auto-encoder that generates ligands with high binding affinities to arbitrary protein targets, guided by a novel multimodal deep neural network built based on graph Transformers as the prior for the generative model. This is the first effort to unify different representations of proteins (e.g., sequence of amino-acids, 3D structure) into a single model that we name as Protein Multimodal Network (PMN). Our multimodal architecture learns from the entire protein structures and is able to capture their sequential, topological and geometrical information. We showcase the superiority of our approach by conducting extensive experiments and evaluations, including the assessment of generative model quality, ligand generation for unseen targets, docking score computation, and binding affinity prediction. Empirical results demonstrate the promising performance of our proposed approach. Our software package is publicly available at https://github.com/HySonLab/Ligand_Generation
|
2007.11183
|
Shiyang Lai
|
Shiyang Lai, Tianqi Zhao, Ningyuan Fan
|
Inferring incubation period distribution of COVID-19 based on SEAIR
Model
|
9 pages, 3 figures, 1 table
| null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To reduce the biases of traditional survey-based methods, this paper proposes
an epidemic model-based approach to inference the incubation period
distribution of COVID-19 utilizing the publicly reported confirmed case number.
We construct an epidemic model, namely SEAIR, and take advantage of the dynamic
transmission process depicted by SEAIR to estimate the onset probability in
each day of exposed individuals in eight impacted countries. Based on these
estimations, the general incubation probability distribution of COVID-19 has
been revealed. The proposed method can avoid several biases of traditional
survey-based methods. However, due to the mathematical-model-based nature of
this method, the inference results are somewhat sensitive to the setting of
parameters. Therefore, this method should be practiced reasonably on the basis
of a certain understanding of the studied epidemic.
|
[
{
"created": "Wed, 22 Jul 2020 03:31:45 GMT",
"version": "v1"
}
] |
2020-07-23
|
[
[
"Lai",
"Shiyang",
""
],
[
"Zhao",
"Tianqi",
""
],
[
"Fan",
"Ningyuan",
""
]
] |
To reduce the biases of traditional survey-based methods, this paper proposes an epidemic model-based approach to inference the incubation period distribution of COVID-19 utilizing the publicly reported confirmed case number. We construct an epidemic model, namely SEAIR, and take advantage of the dynamic transmission process depicted by SEAIR to estimate the onset probability in each day of exposed individuals in eight impacted countries. Based on these estimations, the general incubation probability distribution of COVID-19 has been revealed. The proposed method can avoid several biases of traditional survey-based methods. However, due to the mathematical-model-based nature of this method, the inference results are somewhat sensitive to the setting of parameters. Therefore, this method should be practiced reasonably on the basis of a certain understanding of the studied epidemic.
|
2012.10331
|
Jacques P\'ecr\'eaux
|
Ma\"el Balluet, Florian Sizaire, Youssef El Habouz, Thomas Walter,
J\'er\'emy Pont, Baptiste Giroux, Otmane Bouchareb, Marc Tramier, Jacques
P\'ecr\'eaux
|
Neural network fast-classifies biological images using features selected
after their random-forests-importance to power smart microscopy
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial intelligence is nowadays used for cell detection and
classification in optical microscopy, during post-acquisition analysis. The
microscopes are now fully automated and next expected to be smart, to make
acquisition decisions based on the images. It calls for analysing them on the
fly. Biology further imposes training on a reduced dataset due to cost and time
to prepare the samples and have the datasets annotated by experts. We propose
here a real-time image processing, compliant with these specifications by
balancing accurate detection and execution performance. We characterised the
images using a generic, high-dimensional feature extractor. We then classified
the images using machine learning for the sake of understanding the
contribution of each feature in decision and execution time. We found that the
non-linear-classifier random forests outperformed Fisher's linear discriminant.
More importantly, the most discriminant and time-consuming features could be
excluded without any significant loss in accuracy, offering a substantial gain
in execution time. It suggests a feature-group redundancy likely related to the
biology of the observed cells. We offer a method to select fast and
discriminant features. In our assay, a 79.6 $\pm$ 2.4 % accurate classification
of a cell took 68.7 $\pm$ 3.5 ms (mean $\pm$ SD, 5-fold cross-validation nested
in 10 bootstrap repeats), corresponding to 14 cells per second, dispatched into
8 phases of the cell cycle using 12 feature-groups and operating a consumer
market ARM-based embedded system. Interestingly, a simple neural network
offered similar performances paving the way to faster training and
classification, using parallel execution on a general-purpose graphic
processing unit. Finally, this strategy is also usable for deep neural networks
paving the way to optimising these algorithms for smart microscopy.
|
[
{
"created": "Fri, 18 Dec 2020 16:20:37 GMT",
"version": "v1"
}
] |
2021-10-18
|
[
[
"Balluet",
"Maël",
""
],
[
"Sizaire",
"Florian",
""
],
[
"Habouz",
"Youssef El",
""
],
[
"Walter",
"Thomas",
""
],
[
"Pont",
"Jérémy",
""
],
[
"Giroux",
"Baptiste",
""
],
[
"Bouchareb",
"Otmane",
""
],
[
"Tramier",
"Marc",
""
],
[
"Pécréaux",
"Jacques",
""
]
] |
Artificial intelligence is nowadays used for cell detection and classification in optical microscopy, during post-acquisition analysis. The microscopes are now fully automated and next expected to be smart, to make acquisition decisions based on the images. It calls for analysing them on the fly. Biology further imposes training on a reduced dataset due to cost and time to prepare the samples and have the datasets annotated by experts. We propose here a real-time image processing, compliant with these specifications by balancing accurate detection and execution performance. We characterised the images using a generic, high-dimensional feature extractor. We then classified the images using machine learning for the sake of understanding the contribution of each feature in decision and execution time. We found that the non-linear-classifier random forests outperformed Fisher's linear discriminant. More importantly, the most discriminant and time-consuming features could be excluded without any significant loss in accuracy, offering a substantial gain in execution time. It suggests a feature-group redundancy likely related to the biology of the observed cells. We offer a method to select fast and discriminant features. In our assay, a 79.6 $\pm$ 2.4 % accurate classification of a cell took 68.7 $\pm$ 3.5 ms (mean $\pm$ SD, 5-fold cross-validation nested in 10 bootstrap repeats), corresponding to 14 cells per second, dispatched into 8 phases of the cell cycle using 12 feature-groups and operating a consumer market ARM-based embedded system. Interestingly, a simple neural network offered similar performances paving the way to faster training and classification, using parallel execution on a general-purpose graphic processing unit. Finally, this strategy is also usable for deep neural networks paving the way to optimising these algorithms for smart microscopy.
|
2408.02988
|
Amir Heydari
|
Amir Heydari, Abbas Ahmadi, Tae Hyung Kim, Berkin Bilgic
|
Fast Whole-Brain MR Multi-Parametric Mapping with Scan-Specific
Self-Supervised Networks
| null | null | null | null |
q-bio.QM physics.med-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quantification of tissue parameters using MRI is emerging as a powerful tool
in clinical diagnosis and research studies. The need for multiple long scans
with different acquisition parameters prohibits quantitative MRI from reaching
widespread adoption in routine clinical and research exams. Accelerated
parameter mapping techniques leverage parallel imaging, signal modelling and
deep learning to offer more practical quantitative MRI acquisitions. However,
the achievable acceleration and the quality of maps are often limited. Joint
MAPLE is a recent state-of-the-art multi-parametric and scan-specific parameter
mapping technique with promising performance at high acceleration rates. It
synergistically combines parallel imaging, model-based and machine learning
approaches for joint mapping of T1, T2*, proton density and the field
inhomogeneity. However, Joint MAPLE suffers from prohibitively long
reconstruction time to estimate the maps from a multi-echo, multi-flip angle
(MEMFA) dataset at high resolution in a scan-specific manner. In this work, we
propose a faster version of Joint MAPLE which retains the mapping performance
of the original version. Coil compression, random slice selection,
parameter-specific learning rates and transfer learning are synergistically
combined in the proposed framework. It speeds-up the reconstruction time up to
700 times than the original version and processes a whole-brain MEMFA dataset
in 21 minutes on average, which originally requires ~260 hours for Joint MAPLE.
The mapping performance of the proposed framework is ~2-fold better than the
standard and the state-of-the-art evaluated reconstruction techniques on
average in terms of the root mean squared error.
|
[
{
"created": "Tue, 6 Aug 2024 06:43:06 GMT",
"version": "v1"
}
] |
2024-08-07
|
[
[
"Heydari",
"Amir",
""
],
[
"Ahmadi",
"Abbas",
""
],
[
"Kim",
"Tae Hyung",
""
],
[
"Bilgic",
"Berkin",
""
]
] |
Quantification of tissue parameters using MRI is emerging as a powerful tool in clinical diagnosis and research studies. The need for multiple long scans with different acquisition parameters prohibits quantitative MRI from reaching widespread adoption in routine clinical and research exams. Accelerated parameter mapping techniques leverage parallel imaging, signal modelling and deep learning to offer more practical quantitative MRI acquisitions. However, the achievable acceleration and the quality of maps are often limited. Joint MAPLE is a recent state-of-the-art multi-parametric and scan-specific parameter mapping technique with promising performance at high acceleration rates. It synergistically combines parallel imaging, model-based and machine learning approaches for joint mapping of T1, T2*, proton density and the field inhomogeneity. However, Joint MAPLE suffers from prohibitively long reconstruction time to estimate the maps from a multi-echo, multi-flip angle (MEMFA) dataset at high resolution in a scan-specific manner. In this work, we propose a faster version of Joint MAPLE which retains the mapping performance of the original version. Coil compression, random slice selection, parameter-specific learning rates and transfer learning are synergistically combined in the proposed framework. It speeds-up the reconstruction time up to 700 times than the original version and processes a whole-brain MEMFA dataset in 21 minutes on average, which originally requires ~260 hours for Joint MAPLE. The mapping performance of the proposed framework is ~2-fold better than the standard and the state-of-the-art evaluated reconstruction techniques on average in terms of the root mean squared error.
|
2005.04796
|
Steven Frank
|
Steven A. Frank
|
Metabolic heat in microbial conflict and cooperation
|
V3: Added new figure, more references, some editing. V2: Add several
key references, comments about heat diffusion as single-cell scale
|
Frontiers in Ecology and Evolution 8:275 (2020)
|
10.3389/fevo.2020.00275
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Many microbes live in habitats below their optimum temperature. Retention of
metabolic heat by aggregation or insulation would boost growth. Generation of
excess metabolic heat may also provide benefit. A cell that makes excess
metabolic heat pays the cost of production, whereas the benefit may be shared
by neighbors within a zone of local heat capture. Metabolic heat as a shareable
public good raises interesting questions about conflict and cooperation of heat
production and capture. Metabolic heat may also be deployed as a weapon.
Species with greater thermotolerance gain by raising local temperature to
outcompete less thermotolerant taxa. Metabolic heat may provide defense against
bacteriophage attack, by analogy with fever in vertebrates. This article
outlines the theory of metabolic heat in microbial conflict and cooperation,
presenting several predictions for future study.
|
[
{
"created": "Sun, 10 May 2020 22:06:40 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jul 2020 18:49:58 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Jul 2020 16:15:28 GMT",
"version": "v3"
}
] |
2020-10-22
|
[
[
"Frank",
"Steven A.",
""
]
] |
Many microbes live in habitats below their optimum temperature. Retention of metabolic heat by aggregation or insulation would boost growth. Generation of excess metabolic heat may also provide benefit. A cell that makes excess metabolic heat pays the cost of production, whereas the benefit may be shared by neighbors within a zone of local heat capture. Metabolic heat as a shareable public good raises interesting questions about conflict and cooperation of heat production and capture. Metabolic heat may also be deployed as a weapon. Species with greater thermotolerance gain by raising local temperature to outcompete less thermotolerant taxa. Metabolic heat may provide defense against bacteriophage attack, by analogy with fever in vertebrates. This article outlines the theory of metabolic heat in microbial conflict and cooperation, presenting several predictions for future study.
|
1506.06988
|
Christoph Adami
|
Christoph Adami and Thomas LaBar
|
From Entropy to Information: Biased Typewriters and the Origin of Life
|
19 pages, 8 figures in "From Matter to Life: Information and
Causality", S.I. Walker, P.C.W. Davies, and G. F. R. Ellis, eds., (Cambridge
University Press, 2017) pp. 130-154
| null | null | null |
q-bio.PE cs.IT math.IT nlin.AO q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The origin of life can be understood mathematically to be the origin of
information that can replicate. The likelihood that entropy spontaneously
becomes information can be calculated from first principles, and depends
exponentially on the amount of information that is necessary for replication.
We do not know what the minimum amount of information for self-replication is
because it must depend on the local chemistry, but we can study how this
likelihood behaves in different known chemistries, and we can study ways in
which this likelihood can be enhanced. Here we present evidence from numerical
simulations (using the digital life chemistry "Avida") that using a biased
probability distribution for the creation of monomers (the "biased typewriter")
can exponentially increase the likelihood of spontaneous emergence of
information from entropy. We show that this likelihood may depend on the length
of the sequence that the information is embedded in, but in a non-trivial
manner: there may be an optimum sequence length that maximizes the likelihood.
We conclude that the likelihood of spontaneous emergence of self-replication is
much more malleable than previously thought, and that the biased probability
distributions of monomers that are the norm in biochemistry may significantly
enhance these likelihoods
|
[
{
"created": "Tue, 23 Jun 2015 13:36:54 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Jan 2017 16:46:56 GMT",
"version": "v2"
}
] |
2017-01-09
|
[
[
"Adami",
"Christoph",
""
],
[
"LaBar",
"Thomas",
""
]
] |
The origin of life can be understood mathematically to be the origin of information that can replicate. The likelihood that entropy spontaneously becomes information can be calculated from first principles, and depends exponentially on the amount of information that is necessary for replication. We do not know what the minimum amount of information for self-replication is because it must depend on the local chemistry, but we can study how this likelihood behaves in different known chemistries, and we can study ways in which this likelihood can be enhanced. Here we present evidence from numerical simulations (using the digital life chemistry "Avida") that using a biased probability distribution for the creation of monomers (the "biased typewriter") can exponentially increase the likelihood of spontaneous emergence of information from entropy. We show that this likelihood may depend on the length of the sequence that the information is embedded in, but in a non-trivial manner: there may be an optimum sequence length that maximizes the likelihood. We conclude that the likelihood of spontaneous emergence of self-replication is much more malleable than previously thought, and that the biased probability distributions of monomers that are the norm in biochemistry may significantly enhance these likelihoods
|
0812.4295
|
Georgy Karev
|
G.P. Karev
|
How to explore replicator equations?
|
7 pages; Proceedings of the 6th International Conference on
Differential Equations and Dynamical Systems, Baltimore MD, 2008
| null | null | null |
q-bio.QM q-bio.PE
|
http://creativecommons.org/licenses/publicdomain/
|
Replicator equations (RE) are among the basic tools in mathematical theory of
selection and evolution. We develop a method for reducing a wide class of the
RE, which in general are systems of differential equations in Banach space to
escort systems of ODEs that in many cases can be explored analytically. The
method has potential for different applications; some examples are given.
|
[
{
"created": "Mon, 22 Dec 2008 21:26:17 GMT",
"version": "v1"
}
] |
2008-12-24
|
[
[
"Karev",
"G. P.",
""
]
] |
Replicator equations (RE) are among the basic tools in mathematical theory of selection and evolution. We develop a method for reducing a wide class of the RE, which in general are systems of differential equations in Banach space to escort systems of ODEs that in many cases can be explored analytically. The method has potential for different applications; some examples are given.
|
0905.0042
|
Dmitry Fedosov
|
Dmitry A. Fedosov, Bruce Caswell, George E. Karniadakis
|
General coarse-grained red blood cell models: I. Mechanics
|
16 pages, 7 figures
| null | null | null |
q-bio.CB cond-mat.soft physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a rigorous procedure to derive coarse-grained red blood cell (RBC)
models, which lead to accurate mechanical properties of realistic RBCs. Based
on a semi-analytic theory linear and non-linear elastic properties of the RBC
membrane can be matched with those obtained in optical tweezers stretching
experiments. In addition, we develop a nearly stress-free model which avoids a
number of pitfalls of existing RBC models, such as non-biconcave equilibrium
shape and dependence of RBC mechanical properties on the triangulation quality.
The proposed RBC model is suitable for use in many existing numerical methods,
such as Lattice Boltzmann, Multiparticle Collision Dynamics, Immersed Boundary,
etc.
|
[
{
"created": "Fri, 1 May 2009 03:43:01 GMT",
"version": "v1"
}
] |
2009-05-04
|
[
[
"Fedosov",
"Dmitry A.",
""
],
[
"Caswell",
"Bruce",
""
],
[
"Karniadakis",
"George E.",
""
]
] |
We present a rigorous procedure to derive coarse-grained red blood cell (RBC) models, which lead to accurate mechanical properties of realistic RBCs. Based on a semi-analytic theory linear and non-linear elastic properties of the RBC membrane can be matched with those obtained in optical tweezers stretching experiments. In addition, we develop a nearly stress-free model which avoids a number of pitfalls of existing RBC models, such as non-biconcave equilibrium shape and dependence of RBC mechanical properties on the triangulation quality. The proposed RBC model is suitable for use in many existing numerical methods, such as Lattice Boltzmann, Multiparticle Collision Dynamics, Immersed Boundary, etc.
|
q-bio/0510052
|
Ariel Schwartz
|
Ariel S. Schwartz, Eugene W. Myers, Lior Pachter
|
Alignment Metric Accuracy
| null | null | null | null |
q-bio.QM math.ST stat.TH
| null |
We propose a metric for the space of multiple sequence alignments that can be
used to compare two alignments to each other. In the case where one of the
alignments is a reference alignment, the resulting accuracy measure improves
upon previous approaches, and provides a balanced assessment of the fidelity of
both matches and gaps. Furthermore, in the case where a reference alignment is
not available, we provide empirical evidence that the distance from an
alignment produced by one program to predicted alignments from other programs
can be used as a control for multiple alignment experiments. In particular, we
show that low accuracy alignments can be effectively identified and discarded.
We also show that in the case of pairwise sequence alignment, it is possible to
find an alignment that maximizes the expected value of our accuracy measure.
Unlike previous approaches based on expected accuracy alignment that tend to
maximize sensitivity at the expense of specificity, our method is able to
identify unalignable sequence, thereby increasing overall accuracy. In
addition, the algorithm allows for control of the sensitivity/specificity
tradeoff via the adjustment of a single parameter. These results are confirmed
with simulation studies that show that unalignable regions can be distinguished
from homologous, conserved sequences. Finally, we propose an extension of the
pairwise alignment method to multiple alignment. Our method, which we call
AMAP, outperforms existing protein sequence multiple alignment programs on
benchmark datasets. A webserver and software downloads are available at
http://bio.math.berkeley.edu/amap/ .
|
[
{
"created": "Thu, 27 Oct 2005 22:49:50 GMT",
"version": "v1"
}
] |
2011-11-09
|
[
[
"Schwartz",
"Ariel S.",
""
],
[
"Myers",
"Eugene W.",
""
],
[
"Pachter",
"Lior",
""
]
] |
We propose a metric for the space of multiple sequence alignments that can be used to compare two alignments to each other. In the case where one of the alignments is a reference alignment, the resulting accuracy measure improves upon previous approaches, and provides a balanced assessment of the fidelity of both matches and gaps. Furthermore, in the case where a reference alignment is not available, we provide empirical evidence that the distance from an alignment produced by one program to predicted alignments from other programs can be used as a control for multiple alignment experiments. In particular, we show that low accuracy alignments can be effectively identified and discarded. We also show that in the case of pairwise sequence alignment, it is possible to find an alignment that maximizes the expected value of our accuracy measure. Unlike previous approaches based on expected accuracy alignment that tend to maximize sensitivity at the expense of specificity, our method is able to identify unalignable sequence, thereby increasing overall accuracy. In addition, the algorithm allows for control of the sensitivity/specificity tradeoff via the adjustment of a single parameter. These results are confirmed with simulation studies that show that unalignable regions can be distinguished from homologous, conserved sequences. Finally, we propose an extension of the pairwise alignment method to multiple alignment. Our method, which we call AMAP, outperforms existing protein sequence multiple alignment programs on benchmark datasets. A webserver and software downloads are available at http://bio.math.berkeley.edu/amap/ .
|
2109.00852
|
Kok Yew Ng Dr
|
Niamh McCallan and Scot Davidson and Kok Yew Ng and Pardis Biglarbeigi
and Dewar Finlay and Boon Leong Lan and James McLaughlin
|
Seizure Classification of EEG based on Wavelet Signal Denoising Using a
Novel Channel Selection Algorithm
|
8 pages, 6 figures, accepted for publication at the 13th Asia Pacific
Signal and Information Processing Association Annual Summit and Conference
(APSIPA ASC)
|
2021 Asia-Pacific Signal and Information Processing Association
Annual Summit and Conference (APSIPA ASC), 2021, pp. 1269-1276
| null | null |
q-bio.PE q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Epilepsy is a disorder of the nervous system that can affect people of any
age group. With roughly 50 million people worldwide diagnosed with the
disorder, it is one of the most common neurological disorders. The EEG is an
indispensable tool for diagnosis of epileptic seizures in an ideal case, as
brain waves from an epileptic person will present distinct abnormalities.
However, in real world situations there will often be biological and electrical
noise interference, as well as the issue of a multichannel signal, which
introduce a great challenge for seizure detection. For this study, the Temple
University Hospital (TUH) EEG Seizure Corpus dataset was used. This paper
proposes a novel channel selection method which isolates different frequency
ranges within five channels. This is based upon the frequencies at which normal
brain waveforms exhibit. A one second window was selected, with a 0.5 second
overlap. Wavelet signal denoising was performed using Daubechies 4 wavelet
decomposition, thresholding was applied using minimax soft thresholding
criteria. Filter banking was used to localise frequency ranges from five
specific channels. Statistical features were then derived from the outputs.
After performing bagged tree classification using 500 learners, a test accuracy
of 0.82 was achieved.
|
[
{
"created": "Thu, 2 Sep 2021 11:41:33 GMT",
"version": "v1"
}
] |
2022-02-21
|
[
[
"McCallan",
"Niamh",
""
],
[
"Davidson",
"Scot",
""
],
[
"Ng",
"Kok Yew",
""
],
[
"Biglarbeigi",
"Pardis",
""
],
[
"Finlay",
"Dewar",
""
],
[
"Lan",
"Boon Leong",
""
],
[
"McLaughlin",
"James",
""
]
] |
Epilepsy is a disorder of the nervous system that can affect people of any age group. With roughly 50 million people worldwide diagnosed with the disorder, it is one of the most common neurological disorders. The EEG is an indispensable tool for diagnosis of epileptic seizures in an ideal case, as brain waves from an epileptic person will present distinct abnormalities. However, in real world situations there will often be biological and electrical noise interference, as well as the issue of a multichannel signal, which introduce a great challenge for seizure detection. For this study, the Temple University Hospital (TUH) EEG Seizure Corpus dataset was used. This paper proposes a novel channel selection method which isolates different frequency ranges within five channels. This is based upon the frequencies at which normal brain waveforms exhibit. A one second window was selected, with a 0.5 second overlap. Wavelet signal denoising was performed using Daubechies 4 wavelet decomposition, thresholding was applied using minimax soft thresholding criteria. Filter banking was used to localise frequency ranges from five specific channels. Statistical features were then derived from the outputs. After performing bagged tree classification using 500 learners, a test accuracy of 0.82 was achieved.
|
2312.12628
|
Geoffrey Van Dover
|
Geoffrey van Dover, Josh Javor, Jourdan Ewoldt, Ha Eun Lee, Mikhail
Zhernenkov, Guillaume Freychet, Patryk Wasik, Dana Brown, David Bishop,
Christopher Chen
|
Structural maturation of myofilaments in engineered 3D cardiac
microtissues characterized using small angle X-ray scattering
| null | null | null | null |
q-bio.TO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Understanding the structural and functional development of human-induced
pluripotent stem-cell-derived cardiomyocytes is essential to engineering
cardiac tissue that enables pharmaceutical testing, modeling diseases, and
designing therapies. Here we use a method not commonly applied to biological
materials, small angle X-ray scattering, to characterize the structural
development of human-induced pluripotent stem-cell-derived cardiomyocytes
within 3D engineered tissues during their preliminary stages of maturation. An
X-ray scattering experimental method enables the reliable characterization of
the cardiomyocyte myofilament spacing with maturation time. The myofilament
lattice spacing monotonically decreases as the tissue matures from its initial
post-seeding state over the span of ten days. Visualization of the spacing at a
grid of positions in the tissue provides an approach to characterizing the
maturation and organization of cardiomyocyte myofilaments and has the potential
to help elucidate mechanisms of pathophysiology, and disease progression,
thereby stimulating new biological hypotheses in stem cell engineering.
|
[
{
"created": "Tue, 19 Dec 2023 22:16:33 GMT",
"version": "v1"
}
] |
2023-12-21
|
[
[
"van Dover",
"Geoffrey",
""
],
[
"Javor",
"Josh",
""
],
[
"Ewoldt",
"Jourdan",
""
],
[
"Lee",
"Ha Eun",
""
],
[
"Zhernenkov",
"Mikhail",
""
],
[
"Freychet",
"Guillaume",
""
],
[
"Wasik",
"Patryk",
""
],
[
"Brown",
"Dana",
""
],
[
"Bishop",
"David",
""
],
[
"Chen",
"Christopher",
""
]
] |
Understanding the structural and functional development of human-induced pluripotent stem-cell-derived cardiomyocytes is essential to engineering cardiac tissue that enables pharmaceutical testing, modeling diseases, and designing therapies. Here we use a method not commonly applied to biological materials, small angle X-ray scattering, to characterize the structural development of human-induced pluripotent stem-cell-derived cardiomyocytes within 3D engineered tissues during their preliminary stages of maturation. An X-ray scattering experimental method enables the reliable characterization of the cardiomyocyte myofilament spacing with maturation time. The myofilament lattice spacing monotonically decreases as the tissue matures from its initial post-seeding state over the span of ten days. Visualization of the spacing at a grid of positions in the tissue provides an approach to characterizing the maturation and organization of cardiomyocyte myofilaments and has the potential to help elucidate mechanisms of pathophysiology, and disease progression, thereby stimulating new biological hypotheses in stem cell engineering.
|
1809.00809
|
Angus McLure
|
Angus McLure, Kathryn Glass
|
Some simple rules for estimating reproduction numbers in the presence of
reservoir exposure or imported cases
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The basic reproduction number ($R_0$) is a threshold parameter for disease
extinction or survival in isolated populations. However no human population is
fully isolated from other human or animal populations. We use compartmental
models to derive simple rules for the basic reproduction number for populations
with local person-to-person transmission and exposure from some other source:
either a reservoir exposure or imported cases. We introduce the idea of a
reservoir-driven or importation-driven disease: diseases that would become
extinct in the population of interest without reservoir exposure or imported
cases (since $R_0<1$), but nevertheless may be sufficiently transmissible that
many or most infections are acquired from humans in that population. We show
that in the simplest case, $R_0<1$ if and only if the proportion of infections
acquired from the external source exceeds the disease prevalence and explore
how population heterogeneity and the interactions of multiple strains affect
this rule. We apply these rules in two cases studies of Clostridium difficile
infection and colonisation: C. difficile in the hospital setting accounting for
imported cases, and C. difficile in the general human population accounting for
exposure to animal reservoirs. We demonstrate that even the hospital-adapted,
highly-transmissible NAP1/RT027 strain of C. difficile had a reproduction
number <1 in a landmark study of hospitalised patients and therefore was
sustained by colonised and infected admissions to the study hospital. We argue
that C. difficile should be considered reservoir-driven if as little as 13.0%
of transmission can be attributed to animal reservoirs.
|
[
{
"created": "Tue, 4 Sep 2018 06:57:33 GMT",
"version": "v1"
}
] |
2018-09-05
|
[
[
"McLure",
"Angus",
""
],
[
"Glass",
"Kathryn",
""
]
] |
The basic reproduction number ($R_0$) is a threshold parameter for disease extinction or survival in isolated populations. However no human population is fully isolated from other human or animal populations. We use compartmental models to derive simple rules for the basic reproduction number for populations with local person-to-person transmission and exposure from some other source: either a reservoir exposure or imported cases. We introduce the idea of a reservoir-driven or importation-driven disease: diseases that would become extinct in the population of interest without reservoir exposure or imported cases (since $R_0<1$), but nevertheless may be sufficiently transmissible that many or most infections are acquired from humans in that population. We show that in the simplest case, $R_0<1$ if and only if the proportion of infections acquired from the external source exceeds the disease prevalence and explore how population heterogeneity and the interactions of multiple strains affect this rule. We apply these rules in two cases studies of Clostridium difficile infection and colonisation: C. difficile in the hospital setting accounting for imported cases, and C. difficile in the general human population accounting for exposure to animal reservoirs. We demonstrate that even the hospital-adapted, highly-transmissible NAP1/RT027 strain of C. difficile had a reproduction number <1 in a landmark study of hospitalised patients and therefore was sustained by colonised and infected admissions to the study hospital. We argue that C. difficile should be considered reservoir-driven if as little as 13.0% of transmission can be attributed to animal reservoirs.
|
1703.03777
|
Wieland Brendel
|
Wieland Brendel, Ralph Bourdoukan, Pietro Vertechi, Christian K.
Machens, Sophie Den\'eve
|
Learning to represent signals spike by spike
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key question in neuroscience is at which level functional meaning emerges
from biophysical phenomena. In most vertebrate systems, precise functions are
assigned at the level of neural populations, while single-neurons are deemed
unreliable and redundant. Here we challenge this view and show that many
single-neuron quantities, including voltages, firing thresholds, excitation,
inhibition, and spikes, acquire precise functional meaning whenever a network
learns to transmit information parsimoniously and precisely to the next layer.
Based on the hypothesis that neural circuits generate precise population codes
under severe constraints on metabolic costs, we derive synaptic plasticity
rules that allow a network to represent its time-varying inputs with maximal
accuracy. We provide exact solutions to the learnt optimal states, and we
predict the properties of an entire network from its input distribution and the
cost of activity. Single-neuron variability and tuning curves as typically
observed in cortex emerge over the course of learning, but paradoxically
coincide with a precise, non-redundant spike-based population code. Our work
suggests that neural circuits operate far more accurately than previously
thought, and that no spike is fired in vain.
|
[
{
"created": "Fri, 10 Mar 2017 17:41:36 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Mar 2017 15:59:59 GMT",
"version": "v2"
}
] |
2017-03-17
|
[
[
"Brendel",
"Wieland",
""
],
[
"Bourdoukan",
"Ralph",
""
],
[
"Vertechi",
"Pietro",
""
],
[
"Machens",
"Christian K.",
""
],
[
"Denéve",
"Sophie",
""
]
] |
A key question in neuroscience is at which level functional meaning emerges from biophysical phenomena. In most vertebrate systems, precise functions are assigned at the level of neural populations, while single-neurons are deemed unreliable and redundant. Here we challenge this view and show that many single-neuron quantities, including voltages, firing thresholds, excitation, inhibition, and spikes, acquire precise functional meaning whenever a network learns to transmit information parsimoniously and precisely to the next layer. Based on the hypothesis that neural circuits generate precise population codes under severe constraints on metabolic costs, we derive synaptic plasticity rules that allow a network to represent its time-varying inputs with maximal accuracy. We provide exact solutions to the learnt optimal states, and we predict the properties of an entire network from its input distribution and the cost of activity. Single-neuron variability and tuning curves as typically observed in cortex emerge over the course of learning, but paradoxically coincide with a precise, non-redundant spike-based population code. Our work suggests that neural circuits operate far more accurately than previously thought, and that no spike is fired in vain.
|
1911.04835
|
Hongyi Li Dr.
|
Li Hongyi, Yin Yajun, Yang Chongqing, Chen Min, Wang Fang, Ma Chao, Li
Hua, Kong Yiya, Ji Fusui, Hu Jun
|
Active interfacial dynamic transport of fluid in fibrous connective
tissues and a hypothesis of interstitial fluid circulatory system
|
15 pages, 2 figures, 18 conferences
|
Cell Proliferation. 2020;00:e12760
|
10.1111/cpr.12760
| null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fluid in interstitial spaces accounts for ~20% of an adult body weight. Does
it circulate around the body like vascular circulations besides a diffusive and
short-ranged transport? This bold conjecture has been debated for decades. As a
conventional physiological concept, interstitial space was the space between
cells and a micron-sized space. Fluid in interstitial spaces is thought to be
entrapped within interstitial matrix. However, our serial data have further
defined an interfacial transport zone on a solid fiber of interstitial matrix.
Within this fine space that is probably nanosized, fluid can transport along a
fiber under a driving power. Since 2006, our imaging data from volunteers and
cadavers have revealed a long-distance extravascular pathway for interstitial
fluid flow, comprising four types of anatomic distributions at least. The
framework of each extravascular pathway contains the longitudinally assembled
and oriented fibers, working as a fibrous guiderail for fluid flow.
Interestingly, our data showed that the movement of fluid in a fibrous pathway
is in response to a dynamic driving source and named as dynamotaxis. By
analysis of some representative studies and our experimental results, a
hypothesis of interstitial fluid circulatory system is proposed.
|
[
{
"created": "Tue, 12 Nov 2019 13:21:19 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Nov 2019 08:20:48 GMT",
"version": "v2"
}
] |
2021-07-06
|
[
[
"Hongyi",
"Li",
""
],
[
"Yajun",
"Yin",
""
],
[
"Chongqing",
"Yang",
""
],
[
"Min",
"Chen",
""
],
[
"Fang",
"Wang",
""
],
[
"Chao",
"Ma",
""
],
[
"Hua",
"Li",
""
],
[
"Yiya",
"Kong",
""
],
[
"Fusui",
"Ji",
""
],
[
"Jun",
"Hu",
""
]
] |
Fluid in interstitial spaces accounts for ~20% of an adult body weight. Does it circulate around the body like vascular circulations besides a diffusive and short-ranged transport? This bold conjecture has been debated for decades. As a conventional physiological concept, interstitial space was the space between cells and a micron-sized space. Fluid in interstitial spaces is thought to be entrapped within interstitial matrix. However, our serial data have further defined an interfacial transport zone on a solid fiber of interstitial matrix. Within this fine space that is probably nanosized, fluid can transport along a fiber under a driving power. Since 2006, our imaging data from volunteers and cadavers have revealed a long-distance extravascular pathway for interstitial fluid flow, comprising four types of anatomic distributions at least. The framework of each extravascular pathway contains the longitudinally assembled and oriented fibers, working as a fibrous guiderail for fluid flow. Interestingly, our data showed that the movement of fluid in a fibrous pathway is in response to a dynamic driving source and named as dynamotaxis. By analysis of some representative studies and our experimental results, a hypothesis of interstitial fluid circulatory system is proposed.
|
2105.10578
|
Rowan Swiers
|
Cheng Ye, Rowan Swiers, Stephen Bonner, Ian Barrett
|
A Knowledge Graph-Enhanced Tensor Factorisation Model for Discovering
Drug Targets
|
16 pages, 4 figures, IEEE/ACM Transactions on Computational Biology
and Bioinformatics
| null |
10.1109/TCBB.2022.3197320
| null |
q-bio.QM cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The drug discovery and development process is a long and expensive one,
costing over 1 billion USD on average per drug and taking 10-15 years. To
reduce the high levels of attrition throughout the process, there has been a
growing interest in applying machine learning methodologies to various stages
of drug discovery and development in the recent decade, especially at the
earliest stage identification of druggable disease genes. In this paper, we
have developed a new tensor factorisation model to predict potential drug
targets (genes or proteins) for treating diseases. We created a three
dimensional data tensor consisting of 1,048 gene targets, 860 diseases and
230,011 evidence attributes and clinical outcomes connecting them, using data
extracted from the Open Targets and PharmaProjects databases. We enriched the
data with gene target representations learned from a drug discovery oriented
knowledge graph and applied our proposed method to predict the clinical
outcomes for unseen gene target and disease pairs. We designed three evaluation
strategies to measure the prediction performance and benchmarked several
commonly used machine learning classifiers together with Bayesian matrix and
tensor factorisation methods. The result shows that incorporating knowledge
graph embeddings significantly improves the prediction accuracy and that
training tensor factorisation alongside a dense neural network outperforms all
other baselines. In summary, our framework combines two actively studied
machine learning approaches to disease target identification, namely tensor
factorisation and knowledge graph representation learning, which could be a
promising avenue for further exploration in data driven drug discovery.
|
[
{
"created": "Thu, 20 May 2021 16:19:00 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Jul 2021 14:51:10 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Aug 2022 14:08:55 GMT",
"version": "v3"
}
] |
2022-08-22
|
[
[
"Ye",
"Cheng",
""
],
[
"Swiers",
"Rowan",
""
],
[
"Bonner",
"Stephen",
""
],
[
"Barrett",
"Ian",
""
]
] |
The drug discovery and development process is a long and expensive one, costing over 1 billion USD on average per drug and taking 10-15 years. To reduce the high levels of attrition throughout the process, there has been a growing interest in applying machine learning methodologies to various stages of drug discovery and development in the recent decade, especially at the earliest stage identification of druggable disease genes. In this paper, we have developed a new tensor factorisation model to predict potential drug targets (genes or proteins) for treating diseases. We created a three dimensional data tensor consisting of 1,048 gene targets, 860 diseases and 230,011 evidence attributes and clinical outcomes connecting them, using data extracted from the Open Targets and PharmaProjects databases. We enriched the data with gene target representations learned from a drug discovery oriented knowledge graph and applied our proposed method to predict the clinical outcomes for unseen gene target and disease pairs. We designed three evaluation strategies to measure the prediction performance and benchmarked several commonly used machine learning classifiers together with Bayesian matrix and tensor factorisation methods. The result shows that incorporating knowledge graph embeddings significantly improves the prediction accuracy and that training tensor factorisation alongside a dense neural network outperforms all other baselines. In summary, our framework combines two actively studied machine learning approaches to disease target identification, namely tensor factorisation and knowledge graph representation learning, which could be a promising avenue for further exploration in data driven drug discovery.
|
1911.02551
|
Tal Galili
|
Tal Galili, Alan OCallaghan, Jonathan Sidi, Carson Sievert
|
heatmaply: an R package for creating interactive cluster heatmaps for
online publishing
|
3 pages
|
Bioinformatics 34.9 (2017): 1600-1602
|
10.1093/bioinformatics/btx657
| null |
q-bio.QM stat.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Summary: heatmaply is an R package for easily creating interactive cluster
heatmaps that can be shared online as a stand-alone HTML file. Interactivity
includes a tooltip display of values when hovering over cells, as well as the
ability to zoom in to specific sections of the figure from the data matrix, the
side dendrograms, or annotated labels. Thanks to the synergistic relationship
between heatmaply and other R packages, the user is empowered by a refined
control over the statistical and visual aspects of the heatmap layout.
Availability and implementation: The heatmaply package is available under the
GPL-2 Open Source license. It comes with a detailed vignette, and is freely
available from: http://cran.r-project.org/package=heatmaply.
Supplementary information: Supplementary data are available at Bioinformatics
online.
|
[
{
"created": "Mon, 4 Nov 2019 13:33:44 GMT",
"version": "v1"
}
] |
2019-11-07
|
[
[
"Galili",
"Tal",
""
],
[
"OCallaghan",
"Alan",
""
],
[
"Sidi",
"Jonathan",
""
],
[
"Sievert",
"Carson",
""
]
] |
Summary: heatmaply is an R package for easily creating interactive cluster heatmaps that can be shared online as a stand-alone HTML file. Interactivity includes a tooltip display of values when hovering over cells, as well as the ability to zoom in to specific sections of the figure from the data matrix, the side dendrograms, or annotated labels. Thanks to the synergistic relationship between heatmaply and other R packages, the user is empowered by a refined control over the statistical and visual aspects of the heatmap layout. Availability and implementation: The heatmaply package is available under the GPL-2 Open Source license. It comes with a detailed vignette, and is freely available from: http://cran.r-project.org/package=heatmaply. Supplementary information: Supplementary data are available at Bioinformatics online.
|
2404.17952
|
Yujiang Wang
|
Heather Woodhouse, Gerard Hall, Callum Simpson, Csaba Kozma, Frances
Turner, Gabrielle M. Schroeder, Beate Diehl, John S. Duncan, Jiajie Mo, Kai
Zhang, Aswin Chari, Martin Tisdall, Friederike Moeller, Chris Petkov, Matthew
A. Howard, George M. Ibrahim, Elizabeth Donner, Nebras M. Warsi, Raheel
Ahmed, Peter N. Taylor, Yujiang Wang
|
Multi-centre normative brain mapping of intracranial EEG lifespan
patterns in the human brain
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Understanding healthy human brain function is crucial to identify
and map pathological tissue within it. Whilst previous studies have mapped
intracranial EEG (icEEG) from non-epileptogenic brain regions, these maps do
not consider the effects of age and sex. Further, most existing work on icEEG
has often suffered from a small sample size due to this modality's invasive
nature. Here, we substantially increase the subject sample size compared to
existing literature, to create a multi-centre, normative map of brain activity
which additionally considers the effects of age, sex and recording site.
Methods: Using interictal icEEG recordings from n = 513 subjects originating
from 15 centres, we constructed a normative map of non-pathological brain
activity by regressing age and sex on relative band power in five frequency
bands, whilst accounting for the site effect.
Results: Recording site significantly impacted normative icEEG maps in all
frequency bands, and age was a more influential predictor of band power than
sex. The age effect varied by frequency band, but no spatial patterns were
observed at the region-specific level. Certainty about regression coefficients
was also frequency band specific and moderately impacted by sample size.
Conclusion: The concept of a normative map is well-established in
neuroscience research and particularly relevant to the icEEG modality, which
does not allow healthy control baselines. Our key results regarding the site
and age effect guide future work utilising normative maps in icEEG.
|
[
{
"created": "Sat, 27 Apr 2024 16:18:37 GMT",
"version": "v1"
}
] |
2024-04-30
|
[
[
"Woodhouse",
"Heather",
""
],
[
"Hall",
"Gerard",
""
],
[
"Simpson",
"Callum",
""
],
[
"Kozma",
"Csaba",
""
],
[
"Turner",
"Frances",
""
],
[
"Schroeder",
"Gabrielle M.",
""
],
[
"Diehl",
"Beate",
""
],
[
"Duncan",
"John S.",
""
],
[
"Mo",
"Jiajie",
""
],
[
"Zhang",
"Kai",
""
],
[
"Chari",
"Aswin",
""
],
[
"Tisdall",
"Martin",
""
],
[
"Moeller",
"Friederike",
""
],
[
"Petkov",
"Chris",
""
],
[
"Howard",
"Matthew A.",
""
],
[
"Ibrahim",
"George M.",
""
],
[
"Donner",
"Elizabeth",
""
],
[
"Warsi",
"Nebras M.",
""
],
[
"Ahmed",
"Raheel",
""
],
[
"Taylor",
"Peter N.",
""
],
[
"Wang",
"Yujiang",
""
]
] |
Background: Understanding healthy human brain function is crucial to identify and map pathological tissue within it. Whilst previous studies have mapped intracranial EEG (icEEG) from non-epileptogenic brain regions, these maps do not consider the effects of age and sex. Further, most existing work on icEEG has often suffered from a small sample size due to this modality's invasive nature. Here, we substantially increase the subject sample size compared to existing literature, to create a multi-centre, normative map of brain activity which additionally considers the effects of age, sex and recording site. Methods: Using interictal icEEG recordings from n = 513 subjects originating from 15 centres, we constructed a normative map of non-pathological brain activity by regressing age and sex on relative band power in five frequency bands, whilst accounting for the site effect. Results: Recording site significantly impacted normative icEEG maps in all frequency bands, and age was a more influential predictor of band power than sex. The age effect varied by frequency band, but no spatial patterns were observed at the region-specific level. Certainty about regression coefficients was also frequency band specific and moderately impacted by sample size. Conclusion: The concept of a normative map is well-established in neuroscience research and particularly relevant to the icEEG modality, which does not allow healthy control baselines. Our key results regarding the site and age effect guide future work utilising normative maps in icEEG.
|
2112.13210
|
Isaac Ronald Ward
|
Isaac Ronald Ward, Ling Wang, Juan lu, Mohammed Bennamoun, Girish
Dwivedi, Frank M Sanfilippo
|
Explainable Artificial Intelligence for Pharmacovigilance: What Features
Are Important When Predicting Adverse Outcomes?
|
Comput Methods Programs Biomed. 2021 Nov;212:106415. Epub 2021 Sep 26
| null |
10.1016/j.cmpb.2021.106415
| null |
q-bio.QM cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Explainable Artificial Intelligence (XAI) has been identified as a viable
method for determining the importance of features when making predictions using
Machine Learning (ML) models. In this study, we created models that take an
individual's health information (e.g. their drug history and comorbidities) as
inputs, and predict the probability that the individual will have an Acute
Coronary Syndrome (ACS) adverse outcome. Using XAI, we quantified the
contribution that specific drugs had on these ACS predictions, thus creating an
XAI-based technique for pharmacovigilance monitoring, using ACS as an example
of the adverse outcome to detect. Individuals aged over 65 who were supplied
Musculo-skeletal system (anatomical therapeutic chemical (ATC) class M) or
Cardiovascular system (ATC class C) drugs between 1993 and 2009 were
identified, and their drug histories, comorbidities, and other key features
were extracted from linked Western Australian datasets. Multiple ML models were
trained to predict if these individuals would have an ACS related adverse
outcome (i.e., death or hospitalisation with a discharge diagnosis of ACS), and
a variety of ML and XAI techniques were used to calculate which features --
specifically which drugs -- led to these predictions. The drug dispensing
features for rofecoxib and celecoxib were found to have a greater than zero
contribution to ACS related adverse outcome predictions (on average), and it
was found that ACS related adverse outcomes can be predicted with 72% accuracy.
Furthermore, the XAI libraries LIME and SHAP were found to successfully
identify both important and unimportant features, with SHAP slightly
outperforming LIME. ML models trained on linked administrative health datasets
in tandem with XAI algorithms can successfully quantify feature importance, and
with further development, could potentially be used as pharmacovigilance
monitoring techniques.
|
[
{
"created": "Sat, 25 Dec 2021 09:00:08 GMT",
"version": "v1"
}
] |
2021-12-28
|
[
[
"Ward",
"Isaac Ronald",
""
],
[
"Wang",
"Ling",
""
],
[
"lu",
"Juan",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Dwivedi",
"Girish",
""
],
[
"Sanfilippo",
"Frank M",
""
]
] |
Explainable Artificial Intelligence (XAI) has been identified as a viable method for determining the importance of features when making predictions using Machine Learning (ML) models. In this study, we created models that take an individual's health information (e.g. their drug history and comorbidities) as inputs, and predict the probability that the individual will have an Acute Coronary Syndrome (ACS) adverse outcome. Using XAI, we quantified the contribution that specific drugs had on these ACS predictions, thus creating an XAI-based technique for pharmacovigilance monitoring, using ACS as an example of the adverse outcome to detect. Individuals aged over 65 who were supplied Musculo-skeletal system (anatomical therapeutic chemical (ATC) class M) or Cardiovascular system (ATC class C) drugs between 1993 and 2009 were identified, and their drug histories, comorbidities, and other key features were extracted from linked Western Australian datasets. Multiple ML models were trained to predict if these individuals would have an ACS related adverse outcome (i.e., death or hospitalisation with a discharge diagnosis of ACS), and a variety of ML and XAI techniques were used to calculate which features -- specifically which drugs -- led to these predictions. The drug dispensing features for rofecoxib and celecoxib were found to have a greater than zero contribution to ACS related adverse outcome predictions (on average), and it was found that ACS related adverse outcomes can be predicted with 72% accuracy. Furthermore, the XAI libraries LIME and SHAP were found to successfully identify both important and unimportant features, with SHAP slightly outperforming LIME. ML models trained on linked administrative health datasets in tandem with XAI algorithms can successfully quantify feature importance, and with further development, could potentially be used as pharmacovigilance monitoring techniques.
|
2407.04025
|
Ilenna Jones Dr
|
Ilenna Simone Jones and Konrad Paul Kording
|
Efficient optimization of ODE neuron models using gradient descent
|
25 pages, 4 figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuroscientists fit morphologically and biophysically detailed neuron
simulations to physiological data, often using evolutionary algorithms.
However, such gradient-free approaches are computationally expensive, making
convergence slow when neuron models have many parameters. Here we introduce a
gradient-based algorithm using differentiable ODE solvers that scales well to
high-dimensional problems. GPUs make parallel simulations fast and gradient
calculations make optimization efficient. We verify the utility of our approach
optimizing neuron models with active dendrites with heterogeneously distributed
ion channel densities. We find that individually stimulating and recording all
dendritic compartments makes such model parameters identifiable. Identification
breaks down gracefully as fewer stimulation and recording sites are given.
Differentiable neuron models, which should be added to popular neuron
simulation packages, promise a new era of optimizable neuron models with many
free parameters, a key feature of real neurons.
|
[
{
"created": "Thu, 4 Jul 2024 16:10:27 GMT",
"version": "v1"
},
{
"created": "Sat, 20 Jul 2024 19:49:09 GMT",
"version": "v2"
}
] |
2024-07-23
|
[
[
"Jones",
"Ilenna Simone",
""
],
[
"Kording",
"Konrad Paul",
""
]
] |
Neuroscientists fit morphologically and biophysically detailed neuron simulations to physiological data, often using evolutionary algorithms. However, such gradient-free approaches are computationally expensive, making convergence slow when neuron models have many parameters. Here we introduce a gradient-based algorithm using differentiable ODE solvers that scales well to high-dimensional problems. GPUs make parallel simulations fast and gradient calculations make optimization efficient. We verify the utility of our approach optimizing neuron models with active dendrites with heterogeneously distributed ion channel densities. We find that individually stimulating and recording all dendritic compartments makes such model parameters identifiable. Identification breaks down gracefully as fewer stimulation and recording sites are given. Differentiable neuron models, which should be added to popular neuron simulation packages, promise a new era of optimizable neuron models with many free parameters, a key feature of real neurons.
|
2210.16292
|
Leonardo Martini
|
Leonardo Martini, Adriano Fazzone, Michele Gentili, Luca Becchetti,
Brian Hobbs
|
Network Based Approach to Gene Prioritization at Genome-Wide Association
Study Loci
| null | null | null | null |
q-bio.QM q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Motivation: Genome-wide association studies (GWAS) have successfully
identified thousands of genetic risk loci for complex traits and diseases. Most
of these GWAS loci lie in regulatory regions of the genome and the gene through
which each GWAS risk locus exerts its effects is not always clear. Many
computational methods utilizing biological data sources have been proposed to
identify putative casual genes at GWAS loci; however, these methods can be
improved upon. Results: We present the Relations-Maximization Method, a dense
module searching method to identify putative causal genes at GWAS loci through
the generation of candidate sub-networks derived by integrating association
signals from GWAS data into the gene co-regulation network. We employ our
method in a chronic obstructive pulmonary disease GWAS. We perform an
extensive, comparative study of Relations-Maximization Method's performance
against well-established baselines.
|
[
{
"created": "Fri, 28 Oct 2022 17:41:57 GMT",
"version": "v1"
}
] |
2022-10-31
|
[
[
"Martini",
"Leonardo",
""
],
[
"Fazzone",
"Adriano",
""
],
[
"Gentili",
"Michele",
""
],
[
"Becchetti",
"Luca",
""
],
[
"Hobbs",
"Brian",
""
]
] |
Motivation: Genome-wide association studies (GWAS) have successfully identified thousands of genetic risk loci for complex traits and diseases. Most of these GWAS loci lie in regulatory regions of the genome and the gene through which each GWAS risk locus exerts its effects is not always clear. Many computational methods utilizing biological data sources have been proposed to identify putative casual genes at GWAS loci; however, these methods can be improved upon. Results: We present the Relations-Maximization Method, a dense module searching method to identify putative causal genes at GWAS loci through the generation of candidate sub-networks derived by integrating association signals from GWAS data into the gene co-regulation network. We employ our method in a chronic obstructive pulmonary disease GWAS. We perform an extensive, comparative study of Relations-Maximization Method's performance against well-established baselines.
|
1209.5032
|
Betul Kacar
|
Betul Kacar, Eric Gaucher
|
Towards the Recapitulation of Ancient History in the Laboratory:
Combining Synthetic Biology with Experimental Evolution
|
8 pages, 4 figures
|
Artificial Life XIII: Proceedings of the Thirteenth International
Conference on the Synthesis and Simulation of Living Systems. pp 11-18.
Cambridge, MA: MIT Press 2012
| null | null |
q-bio.PE q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One way to understand the role history plays on evolutionary trajectories is
by giving ancient life a second opportunity to evolve. Our ability to
empirically perform such an experiment, however, is limited by current
experimental designs. Combining ancestral sequence reconstruction with
synthetic biology allows us to resurrect the past within a modern context and
has expanded our understanding of protein functionality within a historical
context. Experimental evolution, on the other hand, provides us with the
ability to study evolution in action, under controlled conditions in the
laboratory. Here we describe a novel experimental setup that integrates two
disparate fields - ancestral sequence reconstruction and experimental
evolution. This allows us to rewind and replay the evolutionary history of
ancient biomolecules in the laboratory. We anticipate that our combination will
provide a deeper understanding of the underlying roles that contingency and
determinism play in shaping evolutionary processes.
|
[
{
"created": "Sun, 23 Sep 2012 02:15:22 GMT",
"version": "v1"
}
] |
2012-09-25
|
[
[
"Kacar",
"Betul",
""
],
[
"Gaucher",
"Eric",
""
]
] |
One way to understand the role history plays on evolutionary trajectories is by giving ancient life a second opportunity to evolve. Our ability to empirically perform such an experiment, however, is limited by current experimental designs. Combining ancestral sequence reconstruction with synthetic biology allows us to resurrect the past within a modern context and has expanded our understanding of protein functionality within a historical context. Experimental evolution, on the other hand, provides us with the ability to study evolution in action, under controlled conditions in the laboratory. Here we describe a novel experimental setup that integrates two disparate fields - ancestral sequence reconstruction and experimental evolution. This allows us to rewind and replay the evolutionary history of ancient biomolecules in the laboratory. We anticipate that our combination will provide a deeper understanding of the underlying roles that contingency and determinism play in shaping evolutionary processes.
|
1403.3626
|
Bahram Houchmandzadeh
|
Bahram Houchmandzadeh (LIPhy)
|
Noise driven emergence of cooperative behavior
| null | null | null | null |
q-bio.PE physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cooperative behaviors are defined as the production of common goods
benefitting all members of the community at the producer's cost. They could
seem to be in contradiction with natural selection, as non-cooperators have an
increased fitness compared to cooperators. Understanding the emergence of
cooperation has necessitated the development of concepts and models (inclusive
fitness, multilevel selection, ...) attributing deterministic advantages to
this behavior. In contrast to these models, we show here that cooperative
behaviors can emerge by taking into account the stochastic nature of
evolutionary dynamics : when cooperative behaviors increase the carrying
capacity of the habitat, they also increase the genetic drift against
non-cooperators. Using the Wright-Fisher models of population genetics, we
compute exactly this increased genetic drift and its consequences on the
fixation probability of both types of individuals. This computation leads to a
simple criterion: cooperative behavior dominates when the relative increase in
carrying capacity of the habitat caused by cooperators is higher than the
selection pressure against them. This is a purely stochastic effect with no
deterministic interpretation.
|
[
{
"created": "Fri, 14 Mar 2014 16:12:18 GMT",
"version": "v1"
}
] |
2014-03-17
|
[
[
"Houchmandzadeh",
"Bahram",
"",
"LIPhy"
]
] |
Cooperative behaviors are defined as the production of common goods benefitting all members of the community at the producer's cost. They could seem to be in contradiction with natural selection, as non-cooperators have an increased fitness compared to cooperators. Understanding the emergence of cooperation has necessitated the development of concepts and models (inclusive fitness, multilevel selection, ...) attributing deterministic advantages to this behavior. In contrast to these models, we show here that cooperative behaviors can emerge by taking into account the stochastic nature of evolutionary dynamics : when cooperative behaviors increase the carrying capacity of the habitat, they also increase the genetic drift against non-cooperators. Using the Wright-Fisher models of population genetics, we compute exactly this increased genetic drift and its consequences on the fixation probability of both types of individuals. This computation leads to a simple criterion: cooperative behavior dominates when the relative increase in carrying capacity of the habitat caused by cooperators is higher than the selection pressure against them. This is a purely stochastic effect with no deterministic interpretation.
|
2007.16069
|
Ivan Gutierrez-Sagredo
|
Angel Ballesteros, Alfonso Blasco, Ivan Gutierrez-Sagredo
|
Exact closed-form solution of a modified SIR model
|
17 pages. New section added
| null | null | null |
q-bio.PE math.DS physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The exact analytical solution in closed form of a modified SIR system where
recovered individuals are removed from the population is presented. In this
dynamical system the populations $S(t)$ and $R(t)$ of susceptible and recovered
individuals are found to be generalized logistic functions, while infective
ones $I(t)$ are given by a generalized logistic function times an exponential,
all of them with the same characteristic time. The dynamics of this modified
SIR system is analyzed and the exact computation of some epidemiologically
relevant quantities is performed. The main differences between this modified
SIR model and original SIR one are presented and explained in terms of the
zeroes of their respective conserved quantities. Moreover, it is shown that the
modified SIR model with time-dependent transmission rate can be also solved in
closed form for certain realistic transmission rate functions.
|
[
{
"created": "Fri, 31 Jul 2020 13:38:40 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Oct 2020 10:28:09 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Nov 2020 10:57:22 GMT",
"version": "v3"
}
] |
2020-11-11
|
[
[
"Ballesteros",
"Angel",
""
],
[
"Blasco",
"Alfonso",
""
],
[
"Gutierrez-Sagredo",
"Ivan",
""
]
] |
The exact analytical solution in closed form of a modified SIR system where recovered individuals are removed from the population is presented. In this dynamical system the populations $S(t)$ and $R(t)$ of susceptible and recovered individuals are found to be generalized logistic functions, while infective ones $I(t)$ are given by a generalized logistic function times an exponential, all of them with the same characteristic time. The dynamics of this modified SIR system is analyzed and the exact computation of some epidemiologically relevant quantities is performed. The main differences between this modified SIR model and original SIR one are presented and explained in terms of the zeroes of their respective conserved quantities. Moreover, it is shown that the modified SIR model with time-dependent transmission rate can be also solved in closed form for certain realistic transmission rate functions.
|
1302.1752
|
Dmitry Karabanov
|
D.P. Karabanov
|
Genetical adaptation of common kilka Clupeonella cultriventris
(Nordmann, 1840) (Actinopterygii: Clupeidae)
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is the first time, when genetic diversity and the common kilka population
structure were investigated throughout the areal. Data about the species
condition at the Upper Volga basins were updated accordingly to modern
condition. Physiological and ecological adaptations to northern water basins
were evaluated. Significance of interaction between some loci and the most
important abiotic environmental factors at the selection was demonstrated. The
results suggest that the common kilka Clupeonella cultriventris (Nordmann,
1840) is presented by uniform set of population throughout the areal. The
successive expansion of the species through the cascade of the Volga's water
reservoirs can be explained by complex of genetic and biochemical adaptation
for sweet water habitat. It was supposed that the sweet water populations
originated from Saratov's backwaters sweet water population. Seasonal
fluctuations of abiotic and biotic environmental factors have significant
influence on genotype distribution at the newly formed population. This book is
intended for use by ichthyologists, ecologists, environmental protection and
management of natural resources specialists. In Russian.
|
[
{
"created": "Thu, 7 Feb 2013 14:10:29 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Feb 2013 11:14:26 GMT",
"version": "v2"
}
] |
2013-02-13
|
[
[
"Karabanov",
"D. P.",
""
]
] |
It is the first time, when genetic diversity and the common kilka population structure were investigated throughout the areal. Data about the species condition at the Upper Volga basins were updated accordingly to modern condition. Physiological and ecological adaptations to northern water basins were evaluated. Significance of interaction between some loci and the most important abiotic environmental factors at the selection was demonstrated. The results suggest that the common kilka Clupeonella cultriventris (Nordmann, 1840) is presented by uniform set of population throughout the areal. The successive expansion of the species through the cascade of the Volga's water reservoirs can be explained by complex of genetic and biochemical adaptation for sweet water habitat. It was supposed that the sweet water populations originated from Saratov's backwaters sweet water population. Seasonal fluctuations of abiotic and biotic environmental factors have significant influence on genotype distribution at the newly formed population. This book is intended for use by ichthyologists, ecologists, environmental protection and management of natural resources specialists. In Russian.
|
2103.00677
|
Vikas Srivastava
|
Thomas Usherwood, Zachary LaJoie and Vikas Srivastava (corresponding
author)
|
Modeling and prediction of COVID-19 in the United States considering
population behavior and vaccination
|
11 pages, 7 figures
|
A model and predictions for COVID-19 considering population
behavior and vaccination. Sciietific Reports 11, 12051 (2021)
|
10.1038/s41598-021-91514-7
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
COVID-19 has devastated the entire global community. Vaccines present an
opportunity to mitigate the pandemic; however, the effect of vaccination
coupled with the behavioral response of the population is not well understood.
We propose a model that incorporates two important dynamically varying
population behaviors: level of caution and sense of safety. Level of caution
increases with the number of infectious cases, while an increasing sense of
safety with increased vaccination lowers precautionary behaviors. To the best
of our knowledge, this is the first model that can effectively reproduce the
complete time history of COVID-19 infections for various regions of the United
States and provides relatable measures of dynamic changes in the population
behavior and disease transmission rates. We propose a parameter d_I as a direct
measure of a population's caution against an infectious disease, that can be
obtained from the ongoing new infectious cases. The model provides a method for
quantitative measure of critical infectious disease attributes for a population
including highest disease transmission rate, effective disease transmission
rate, and disease related precautionary behavior. We predict future COVID-19
pandemic trends in the United States accounting for vaccine rollout and
behavioral response. Although a high rate of vaccination is critical to quickly
end the pandemic, we find that a return towards pre-pandemic social behavior
due to increased sense of safety during vaccine deployment, can cause an
alarming surge in infections. Our results indicate that at the current rate of
vaccination, the new infection cases for COVID-19 in the United States will
approach zero by the end of August 2021. The model can be used for predicting
future epidemic and pandemic dynamics before and during vaccination.
|
[
{
"created": "Mon, 1 Mar 2021 01:13:29 GMT",
"version": "v1"
}
] |
2021-10-26
|
[
[
"Usherwood",
"Thomas",
"",
"corresponding\n author"
],
[
"LaJoie",
"Zachary",
"",
"corresponding\n author"
],
[
"Srivastava",
"Vikas",
"",
"corresponding\n author"
]
] |
COVID-19 has devastated the entire global community. Vaccines present an opportunity to mitigate the pandemic; however, the effect of vaccination coupled with the behavioral response of the population is not well understood. We propose a model that incorporates two important dynamically varying population behaviors: level of caution and sense of safety. Level of caution increases with the number of infectious cases, while an increasing sense of safety with increased vaccination lowers precautionary behaviors. To the best of our knowledge, this is the first model that can effectively reproduce the complete time history of COVID-19 infections for various regions of the United States and provides relatable measures of dynamic changes in the population behavior and disease transmission rates. We propose a parameter d_I as a direct measure of a population's caution against an infectious disease, that can be obtained from the ongoing new infectious cases. The model provides a method for quantitative measure of critical infectious disease attributes for a population including highest disease transmission rate, effective disease transmission rate, and disease related precautionary behavior. We predict future COVID-19 pandemic trends in the United States accounting for vaccine rollout and behavioral response. Although a high rate of vaccination is critical to quickly end the pandemic, we find that a return towards pre-pandemic social behavior due to increased sense of safety during vaccine deployment, can cause an alarming surge in infections. Our results indicate that at the current rate of vaccination, the new infection cases for COVID-19 in the United States will approach zero by the end of August 2021. The model can be used for predicting future epidemic and pandemic dynamics before and during vaccination.
|
2004.01295
|
Ram\'on Enrique Ramayo Gonz\'alez
|
Ram\'on E. R. Gonz\'alez
|
Different scenarios in the Dynamics of SARS-Cov-2 Infection: an adapted
ODE model
|
11 pages, 7 figures and 8 tables
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A mathematical model to calculate the transmissibility of SARS-Cov-2 in Wuhan
City was developed and published recently by Tian-Mu Chen et al., Infectious
Diseases of Poverty, 2020, https://doi.org/10.1186/s40249-020-00640-3. This
paper improves this model in order to study the effect of different scenarios
that include actions to contain the pandemic, such as isolation and quarantine
of infected and at-risk people. Comparisons made between the different
scenarios show that the progress of the infection is found to strongly depend
on measures taken in each case. The particular case of Brazil was studied,
showing the dynamics of the first days of the infection in comparison with the
different scenarios contained in the model and the reality of the Brazilian
health system was exposed, in front of each possible scenario. The relative
evolution of the number of new infection and reported cases was employed to
estimate a containment date of the pandemic. Finally, the basic reproduction
number R0 values were estimated for each scenario, ranging from 4.04 to 1.12.
|
[
{
"created": "Thu, 2 Apr 2020 22:47:39 GMT",
"version": "v1"
}
] |
2020-04-06
|
[
[
"González",
"Ramón E. R.",
""
]
] |
A mathematical model to calculate the transmissibility of SARS-Cov-2 in Wuhan City was developed and published recently by Tian-Mu Chen et al., Infectious Diseases of Poverty, 2020, https://doi.org/10.1186/s40249-020-00640-3. This paper improves this model in order to study the effect of different scenarios that include actions to contain the pandemic, such as isolation and quarantine of infected and at-risk people. Comparisons made between the different scenarios show that the progress of the infection is found to strongly depend on measures taken in each case. The particular case of Brazil was studied, showing the dynamics of the first days of the infection in comparison with the different scenarios contained in the model and the reality of the Brazilian health system was exposed, in front of each possible scenario. The relative evolution of the number of new infection and reported cases was employed to estimate a containment date of the pandemic. Finally, the basic reproduction number R0 values were estimated for each scenario, ranging from 4.04 to 1.12.
|
1409.1096
|
Colin Gillespie
|
Colin S. Gillespie, Andrew Golightly
|
Diagnostics for assessing the linear noise and moment closure
approximations
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Solving the chemical master equation exactly is typically not possible, so
instead we must rely on simulation based methods. Unfortunately, drawing exact
realisations, results in simulating every reaction that occurs. This will
preclude the use of exact simulators for models of any realistic size and so
approximate algorithms become important. In this paper we describe a general
framework for assessing the accuracy of the linear noise and two moment
approximations. By constructing an efficient space filling design over the
parameter region of interest, we present a number of useful diagnostic tools
that aids modellers in assessing whether the approximation is suitable. In
particular, we leverage the normality assumption of the linear noise and moment
closure approximations.
|
[
{
"created": "Wed, 3 Sep 2014 14:11:59 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Aug 2016 12:50:54 GMT",
"version": "v2"
}
] |
2016-08-31
|
[
[
"Gillespie",
"Colin S.",
""
],
[
"Golightly",
"Andrew",
""
]
] |
Solving the chemical master equation exactly is typically not possible, so instead we must rely on simulation based methods. Unfortunately, drawing exact realisations, results in simulating every reaction that occurs. This will preclude the use of exact simulators for models of any realistic size and so approximate algorithms become important. In this paper we describe a general framework for assessing the accuracy of the linear noise and two moment approximations. By constructing an efficient space filling design over the parameter region of interest, we present a number of useful diagnostic tools that aids modellers in assessing whether the approximation is suitable. In particular, we leverage the normality assumption of the linear noise and moment closure approximations.
|
1312.2120
|
Xi Huo
|
Xi Huo
|
Modeling of Contact Tracing in Epidemic Populations Structured by
Disease Age
| null | null | null | null |
q-bio.PE math.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider an age-structured epidemic model with two basic public health
interventions: (i) identifying and isolating symptomatic cases, and (ii)
tracing and quarantine of the contacts of identified infectives. The dynamics
of the infected population are modeled by a nonlinear infection-age-dependent
partial differential equation, which is coupled with an ordinary differential
equation that describes the dynamics of the susceptible population. Theoretical
results about global existence and uniqueness of positive solutions are proved.
We also present two practical applications of our model: (1) we assess public
health guidelines about emergency preparedness and response in the event of a
smallpox bioterrorist attack; (2) we simulate the 2003 SARS outbreak in Taiwan
and estimate the number of cases avoided by contact tracing. Our model can be
applied as a rational basis for decision makers to guide interventions and
deploy public health resources in future epidemics.
|
[
{
"created": "Sat, 7 Dec 2013 18:02:46 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Mar 2014 20:46:37 GMT",
"version": "v2"
}
] |
2014-03-13
|
[
[
"Huo",
"Xi",
""
]
] |
We consider an age-structured epidemic model with two basic public health interventions: (i) identifying and isolating symptomatic cases, and (ii) tracing and quarantine of the contacts of identified infectives. The dynamics of the infected population are modeled by a nonlinear infection-age-dependent partial differential equation, which is coupled with an ordinary differential equation that describes the dynamics of the susceptible population. Theoretical results about global existence and uniqueness of positive solutions are proved. We also present two practical applications of our model: (1) we assess public health guidelines about emergency preparedness and response in the event of a smallpox bioterrorist attack; (2) we simulate the 2003 SARS outbreak in Taiwan and estimate the number of cases avoided by contact tracing. Our model can be applied as a rational basis for decision makers to guide interventions and deploy public health resources in future epidemics.
|
1308.3824
|
Fabio Pichierri
|
Fabio Pichierri
|
Protein conformational dynamics and electronic structure
|
11 pages, 4 figures
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum mechanical calculations are performed on 116 conformers of the
protein ubiquitin (Lange et al., Science 2008, 320, 1471-1475). The results
indicate that the heat of formation (HOF), dipole moment, energy of the
frontier orbitals HOMO and LUMO, and HOMO-LUMO gap fluctuate within their
corresponding ranges. This study thus provides a link between the
conformational dynamics of a protein and its electronic structure.
|
[
{
"created": "Sun, 18 Aug 2013 02:38:09 GMT",
"version": "v1"
}
] |
2013-08-20
|
[
[
"Pichierri",
"Fabio",
""
]
] |
Quantum mechanical calculations are performed on 116 conformers of the protein ubiquitin (Lange et al., Science 2008, 320, 1471-1475). The results indicate that the heat of formation (HOF), dipole moment, energy of the frontier orbitals HOMO and LUMO, and HOMO-LUMO gap fluctuate within their corresponding ranges. This study thus provides a link between the conformational dynamics of a protein and its electronic structure.
|
1908.06733
|
Christian R\"over
|
Moreno Ursino, Christian R\"over, Sarah Zohar and Tim Friede
|
Random-effects meta-analysis of phase I dose-finding studies using
stochastic process priors
|
23 pages, 6 figures, 7 tables
|
The Annals of Applied Statistics, 15(1):174-193, 2021
|
10.1214/20-AOAS1390
| null |
q-bio.QM stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phase I dose-finding studies aim at identifying the maximal tolerated dose
(MTD). It is not uncommon that several dose-finding studies are conducted,
although often with some variation in the administration mode or dose panel.
For instance, sorafenib (BAY 43-900) was used as monotherapy in at least 29
phase I trials according to a recent search in clinicaltrials.gov. Since the
toxicity may not be directly related to the specific indication, synthesizing
the information from several studies might be worthwhile. However, this is
rarely done in practice and only a fixed-effect meta-analysis framework was
proposed to date. We developed a Bayesian random-effects meta-analysis
methodology to pool several phase I trials and suggest the MTD. A curve free
hierarchical model on the logistic scale with random effects, accounting for
between-trial heterogeneity, is used to model the probability of toxicity
across the investigated doses. An Ornstein-Uhlenbeck Gaussian process is
adopted for the random effects structure. Prior distributions for the curve
free model are based on a latent Gamma process. An extensive simulation study
showed good performance of the proposed method also under model deviations.
Sharing information between phase I studies can improve the precision of MTD
selection, at least when the number of trials is reasonably large.
|
[
{
"created": "Thu, 1 Aug 2019 07:16:40 GMT",
"version": "v1"
}
] |
2021-03-24
|
[
[
"Ursino",
"Moreno",
""
],
[
"Röver",
"Christian",
""
],
[
"Zohar",
"Sarah",
""
],
[
"Friede",
"Tim",
""
]
] |
Phase I dose-finding studies aim at identifying the maximal tolerated dose (MTD). It is not uncommon that several dose-finding studies are conducted, although often with some variation in the administration mode or dose panel. For instance, sorafenib (BAY 43-900) was used as monotherapy in at least 29 phase I trials according to a recent search in clinicaltrials.gov. Since the toxicity may not be directly related to the specific indication, synthesizing the information from several studies might be worthwhile. However, this is rarely done in practice and only a fixed-effect meta-analysis framework was proposed to date. We developed a Bayesian random-effects meta-analysis methodology to pool several phase I trials and suggest the MTD. A curve free hierarchical model on the logistic scale with random effects, accounting for between-trial heterogeneity, is used to model the probability of toxicity across the investigated doses. An Ornstein-Uhlenbeck Gaussian process is adopted for the random effects structure. Prior distributions for the curve free model are based on a latent Gamma process. An extensive simulation study showed good performance of the proposed method also under model deviations. Sharing information between phase I studies can improve the precision of MTD selection, at least when the number of trials is reasonably large.
|
2111.10695
|
Antony Sagayaraj
|
Toni Sagayaraj, Carsten Eickhoff
|
Image-Like Graph Representations for Improved Molecular Property
Prediction
| null | null | null | null |
q-bio.QM cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Research into deep learning models for molecular property prediction has
primarily focused on the development of better Graph Neural Network (GNN)
architectures. Though new GNN variants continue to improve performance, their
modifications share a common theme of alleviating problems intrinsic to their
fundamental graph-to-graph nature. In this work, we examine these limitations
and propose a new molecular representation that bypasses the need for GNNs
entirely, dubbed CubeMol. Our fixed-dimensional stochastic representation, when
paired with a transformer model, exceeds the performance of state-of-the-art
GNN models and provides a path for scalability.
|
[
{
"created": "Sat, 20 Nov 2021 22:39:11 GMT",
"version": "v1"
}
] |
2021-11-23
|
[
[
"Sagayaraj",
"Toni",
""
],
[
"Eickhoff",
"Carsten",
""
]
] |
Research into deep learning models for molecular property prediction has primarily focused on the development of better Graph Neural Network (GNN) architectures. Though new GNN variants continue to improve performance, their modifications share a common theme of alleviating problems intrinsic to their fundamental graph-to-graph nature. In this work, we examine these limitations and propose a new molecular representation that bypasses the need for GNNs entirely, dubbed CubeMol. Our fixed-dimensional stochastic representation, when paired with a transformer model, exceeds the performance of state-of-the-art GNN models and provides a path for scalability.
|
2008.03493
|
R\'emi Eyraud
|
Philipp O. Tsvetkov, R\'emi Eyraud, St\'ephane Ayache, Anton A.
Bougaev, Soazig Malesinski, Hamed Benazha, Svetlana Gorokhova, Christophe
Buffat, Caroline Dehais, Marc Sanson, Franck Bielle, Dominique
Figarella-Branger, Olivier Chinot, Emeline Tabouret, Fran\c{c}ois Devred
|
An AI-powered blood test to detect cancer using nanoDSF
| null | null | null | null |
q-bio.QM cs.LG q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a novel cancer diagnostic method based on plasma denaturation
profiles obtained by a non-conventional use of Differential Scanning
Fluorimetry. We show that 84 glioma patients and 63 healthy controls can be
automatically classified using denaturation profiles with the help of machine
learning algorithms with 92% accuracy. Proposed high throughput workflow can be
applied to any type of cancer and could become a powerful pan-cancer diagnostic
and monitoring tool from a simple blood test.
|
[
{
"created": "Sat, 8 Aug 2020 11:20:53 GMT",
"version": "v1"
}
] |
2020-08-11
|
[
[
"Tsvetkov",
"Philipp O.",
""
],
[
"Eyraud",
"Rémi",
""
],
[
"Ayache",
"Stéphane",
""
],
[
"Bougaev",
"Anton A.",
""
],
[
"Malesinski",
"Soazig",
""
],
[
"Benazha",
"Hamed",
""
],
[
"Gorokhova",
"Svetlana",
""
],
[
"Buffat",
"Christophe",
""
],
[
"Dehais",
"Caroline",
""
],
[
"Sanson",
"Marc",
""
],
[
"Bielle",
"Franck",
""
],
[
"Figarella-Branger",
"Dominique",
""
],
[
"Chinot",
"Olivier",
""
],
[
"Tabouret",
"Emeline",
""
],
[
"Devred",
"François",
""
]
] |
We describe a novel cancer diagnostic method based on plasma denaturation profiles obtained by a non-conventional use of Differential Scanning Fluorimetry. We show that 84 glioma patients and 63 healthy controls can be automatically classified using denaturation profiles with the help of machine learning algorithms with 92% accuracy. Proposed high throughput workflow can be applied to any type of cancer and could become a powerful pan-cancer diagnostic and monitoring tool from a simple blood test.
|
1502.06172
|
Carina Curto
|
Chad Giusti, Eva Pastalkova, Carina Curto, Vladimir Itskov
|
Clique topology reveals intrinsic geometric structure in neural
correlations
|
29 pages, 4 figures, 13 supplementary figures (last two authors
contributed equally)
|
PNAS, vol. 112 no. 44, pp. 13455-13460, 2015
|
10.1073/pnas.1506407112
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting meaningful structure in neural activity and connectivity data is
challenging in the presence of hidden nonlinearities, where traditional
eigenvalue-based methods may be misleading. We introduce a novel approach to
matrix analysis, called clique topology, that extracts features of the data
invariant under nonlinear monotone transformations. These features can be used
to detect both random and geometric structure, and depend only on the relative
ordering of matrix entries. We then analyzed the activity of pyramidal neurons
in rat hippocampus, recorded while the animal was exploring a two-dimensional
environment, and confirmed that our method is able to detect geometric
organization using only the intrinsic pattern of neural correlations.
Remarkably, we found similar results during non-spatial behaviors such as wheel
running and REM sleep. This suggests that the geometric structure of
correlations is shaped by the underlying hippocampal circuits, and is not
merely a consequence of position coding. We propose that clique topology is a
powerful new tool for matrix analysis in biological settings, where the
relationship of observed quantities to more meaningful variables is often
nonlinear and unknown.
|
[
{
"created": "Sun, 22 Feb 2015 03:17:24 GMT",
"version": "v1"
}
] |
2015-11-24
|
[
[
"Giusti",
"Chad",
""
],
[
"Pastalkova",
"Eva",
""
],
[
"Curto",
"Carina",
""
],
[
"Itskov",
"Vladimir",
""
]
] |
Detecting meaningful structure in neural activity and connectivity data is challenging in the presence of hidden nonlinearities, where traditional eigenvalue-based methods may be misleading. We introduce a novel approach to matrix analysis, called clique topology, that extracts features of the data invariant under nonlinear monotone transformations. These features can be used to detect both random and geometric structure, and depend only on the relative ordering of matrix entries. We then analyzed the activity of pyramidal neurons in rat hippocampus, recorded while the animal was exploring a two-dimensional environment, and confirmed that our method is able to detect geometric organization using only the intrinsic pattern of neural correlations. Remarkably, we found similar results during non-spatial behaviors such as wheel running and REM sleep. This suggests that the geometric structure of correlations is shaped by the underlying hippocampal circuits, and is not merely a consequence of position coding. We propose that clique topology is a powerful new tool for matrix analysis in biological settings, where the relationship of observed quantities to more meaningful variables is often nonlinear and unknown.
|
1304.7945
|
Dmytro Grytskyy
|
Dmytro Grytskyy, Tom Tetzlaff, Markus Diesmann, Moritz Helias
|
A unified view on weakly correlated recurrent networks
| null | null |
10.3389/fncom.2013.00131
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The diversity of neuron models used in contemporary theoretical neuroscience
to investigate specific properties of covariances raises the question how these
models relate to each other. In particular it is hard to distinguish between
generic properties and peculiarities due to the abstracted model. Here we
present a unified view on pairwise covariances in recurrent networks in the
irregular regime. We consider the binary neuron model, the leaky
integrate-and-fire model, and the Hawkes process. We show that linear
approximation maps each of these models to either of two classes of linear rate
models, including the Ornstein-Uhlenbeck process as a special case. The classes
differ in the location of additive noise in the rate dynamics, which is on the
output side for spiking models and on the input side for the binary model. Both
classes allow closed form solutions for the covariance. For output noise it
separates into an echo term and a term due to correlated input. The unified
framework enables us to transfer results between models. For example, we
generalize the binary model and the Hawkes process to the presence of
conduction delays and simplify derivations for established results. Our
approach is applicable to general network structures and suitable for
population averages. The derived averages are exact for fixed out-degree
network architectures and approximate for fixed in-degree. We demonstrate how
taking into account fluctuations in the linearization procedure increases the
accuracy of the effective theory and we explain the class dependent differences
between covariances in the time and the frequency domain. Finally we show that
the oscillatory instability emerging in networks of integrate-and-fire models
with delayed inhibitory feedback is a model-invariant feature: the same
structure of poles in the complex frequency plane determines the population
power spectra.
|
[
{
"created": "Tue, 30 Apr 2013 10:28:31 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Sep 2013 14:24:45 GMT",
"version": "v2"
}
] |
2022-05-17
|
[
[
"Grytskyy",
"Dmytro",
""
],
[
"Tetzlaff",
"Tom",
""
],
[
"Diesmann",
"Markus",
""
],
[
"Helias",
"Moritz",
""
]
] |
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The classes differ in the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the presence of conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
|
2101.00613
|
Jozef Cernak
|
Jozef \v{C}ern\'ak
|
The questionable impact of population-wide public testing in reducing
SARS-CoV-2 infection prevalence in the Slovak Republic
| null | null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mina and Andersen, authors of the Perspectives in Science: COVID-19 Testing:
One Size Does Not Fit All have referred to results and adopted conclusions from
recently published governmental report Pavelka et al. "The effectiveness of
population wide, rapid antigen test based screening in reducing SARS-CoV-2
infection prevalence in Slovakia" without critical consideration, and rigorous
verification. We demonstrate that the authors refer to conclusions that are not
supported by experimental data. Further, there is a lack of objective,
independent information and studies regarding the widespread, public testing
program currently in force in the Slovak Republic. We offer an alternative
explanation of observed data as they have been provided by the Slovak Republic
government to fill this information gap. We also provide explanations and
conclusions that more accurately describe viral spread dynamics. Drawing from
available public data and our simple but rigorous analysis, we show that it is
not possible to make clear conclusions about any positive impact of the public
testing program in the Slovak Republic. In particular, it is not possible to
conclude that this testing program forces the curve down for the SARS-CoV-2
virus outbreak. We think that Pavelka et al. did not consider many fundamental
phenomena in their proposed computer simulations and data analysis - in
particular: the complexity of SARS-CoV-2 virus spread. In complex
spatio-temporal dynamical systems, small spatio-temporal fluctuations can
dramatically change the dynamics of virus spreading on large scales.
|
[
{
"created": "Sun, 3 Jan 2021 12:14:21 GMT",
"version": "v1"
}
] |
2021-01-05
|
[
[
"Černák",
"Jozef",
""
]
] |
Mina and Andersen, authors of the Perspectives in Science: COVID-19 Testing: One Size Does Not Fit All have referred to results and adopted conclusions from recently published governmental report Pavelka et al. "The effectiveness of population wide, rapid antigen test based screening in reducing SARS-CoV-2 infection prevalence in Slovakia" without critical consideration, and rigorous verification. We demonstrate that the authors refer to conclusions that are not supported by experimental data. Further, there is a lack of objective, independent information and studies regarding the widespread, public testing program currently in force in the Slovak Republic. We offer an alternative explanation of observed data as they have been provided by the Slovak Republic government to fill this information gap. We also provide explanations and conclusions that more accurately describe viral spread dynamics. Drawing from available public data and our simple but rigorous analysis, we show that it is not possible to make clear conclusions about any positive impact of the public testing program in the Slovak Republic. In particular, it is not possible to conclude that this testing program forces the curve down for the SARS-CoV-2 virus outbreak. We think that Pavelka et al. did not consider many fundamental phenomena in their proposed computer simulations and data analysis - in particular: the complexity of SARS-CoV-2 virus spread. In complex spatio-temporal dynamical systems, small spatio-temporal fluctuations can dramatically change the dynamics of virus spreading on large scales.
|
1404.2147
|
Ana Nunes
|
Tom\'as Aquino, Diogo Bolster and Ana Nunes
|
Characterization of the Endemic Equilibrium and Response to Mutant
Injection in a Multi-strain Disease Model
|
20 pages, 5 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Starting from common assumptions, we build a rate equation model for
multi-strain disease dynamics in terms of immune repertoire classes. We then
move to a strain-level description where a low-order closure reminiscent of a
pair approximation can be applied. We characterize the endemic equilibrium of
the ensuing model in the absence of mutation and discuss the presence of
degeneracy regarding the prevalence of the different strains. Finally we study
the behavior of the system under the injection of mutant strains.
|
[
{
"created": "Tue, 8 Apr 2014 14:24:22 GMT",
"version": "v1"
}
] |
2014-04-09
|
[
[
"Aquino",
"Tomás",
""
],
[
"Bolster",
"Diogo",
""
],
[
"Nunes",
"Ana",
""
]
] |
Starting from common assumptions, we build a rate equation model for multi-strain disease dynamics in terms of immune repertoire classes. We then move to a strain-level description where a low-order closure reminiscent of a pair approximation can be applied. We characterize the endemic equilibrium of the ensuing model in the absence of mutation and discuss the presence of degeneracy regarding the prevalence of the different strains. Finally we study the behavior of the system under the injection of mutant strains.
|
q-bio/0405003
|
Ravasz Maria
|
Maria Ravasz, Gyorgy Szabo, Attila Szolnoki
|
Spreading of families in cyclic predator-prey models
|
to be published in PRE
|
Phys. Rev. E 70, 012901 (2004)
|
10.1103/PhysRevE.70.012901
| null |
q-bio.PE cond-mat.stat-mech
| null |
We study the spreading of families in two-dimensional multispecies
predator-prey systems, in which species cyclically dominate each other. In each
time step randomly chosen individuals invade one of the nearest sites of the
square lattice eliminating their prey. Initially all individuals get a
family-name which will be carried on by their descendants. Monte Carlo
simulations show that the systems with several species (N=3,4,5) are
asymptotically approaching the behavior of the voter model, i.e., the survival
probability of families, the mean-size of families and the mean-square distance
of descendants from their ancestor exhibit the same scaling behavior. The
scaling behavior of the survival probability of families has a logarithmic
correction. In case of the voter model this correction depends on the number of
species, while cyclic predator-prey models behave like the voter model with
infinite species. It is found that changing the rates of invasions does not
change this asymptotic behavior. As an application a three-species system with
a fourth species intruder is also discussed.
|
[
{
"created": "Wed, 5 May 2004 17:06:58 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Ravasz",
"Maria",
""
],
[
"Szabo",
"Gyorgy",
""
],
[
"Szolnoki",
"Attila",
""
]
] |
We study the spreading of families in two-dimensional multispecies predator-prey systems, in which species cyclically dominate each other. In each time step randomly chosen individuals invade one of the nearest sites of the square lattice eliminating their prey. Initially all individuals get a family-name which will be carried on by their descendants. Monte Carlo simulations show that the systems with several species (N=3,4,5) are asymptotically approaching the behavior of the voter model, i.e., the survival probability of families, the mean-size of families and the mean-square distance of descendants from their ancestor exhibit the same scaling behavior. The scaling behavior of the survival probability of families has a logarithmic correction. In case of the voter model this correction depends on the number of species, while cyclic predator-prey models behave like the voter model with infinite species. It is found that changing the rates of invasions does not change this asymptotic behavior. As an application a three-species system with a fourth species intruder is also discussed.
|
1307.6071
|
Yong-Jung Kim
|
Changwook Yoon and Yong-Jung Kim
|
Bacterial chemotaxis without gradient-sensing
|
19 pages, 4 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Models for chemotaxis are based on gradient sensing of individual organisms.
The key contribution of Keller and Segel is showing that erratic movements of
individuals may result in an accurate chemotaxis phenomenon as a group. In this
paper we provide another option to understand chemotactic behavior when
individuals do not sense the gradient of chemical concentration by any means.
We show that, if individuals increase their motility to find food when they are
hungry, an accurate chemotactic behavior may obtained without sensing the
gradient. Such a random dispersal has been suggested by Cho and Kim and is
called starvation driven diffusion. This model is surprisingly similar to the
original derivation of Keller-Segel model. A comprehensive picture of traveling
band and front solutions is provided with numerical simulations.
|
[
{
"created": "Tue, 23 Jul 2013 13:35:37 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2013 23:18:35 GMT",
"version": "v2"
}
] |
2013-07-31
|
[
[
"Yoon",
"Changwook",
""
],
[
"Kim",
"Yong-Jung",
""
]
] |
Models for chemotaxis are based on gradient sensing of individual organisms. The key contribution of Keller and Segel is showing that erratic movements of individuals may result in an accurate chemotaxis phenomenon as a group. In this paper we provide another option to understand chemotactic behavior when individuals do not sense the gradient of chemical concentration by any means. We show that, if individuals increase their motility to find food when they are hungry, an accurate chemotactic behavior may obtained without sensing the gradient. Such a random dispersal has been suggested by Cho and Kim and is called starvation driven diffusion. This model is surprisingly similar to the original derivation of Keller-Segel model. A comprehensive picture of traveling band and front solutions is provided with numerical simulations.
|
2212.09749
|
Rakib Hassan Pran
|
Rakib Hassan Pran
|
Statistical Comparison among Brain Networks with Popular Network
Measurement Algorithms
|
22 pages, 38 figures, 19 tables
| null | null | null |
q-bio.NC cs.SI stat.CO
|
http://creativecommons.org/licenses/by/4.0/
|
In this research, a number of popular network measurement algorithms have
been applied to several brain networks (based on applicability of algorithms)
for finding out statistical correlation among these popular network
measurements which will help scientists to understand these popular network
measurement algorithms and their applicability to brain networks. By analysing
the results of correlations among these network measurement algorithms,
statistical comparison among selected brain networks has also been summarized.
Besides that, to understand each brain network, the visualization of each brain
network and each brain network degree distribution histogram have been
extrapolated. Six network measurement algorithms have been chosen to apply time
to time on sixteen brain networks based on applicability of these network
measurement algorithms and the results of these network measurements are put
into a correlation method to show the relationship among these six network
measurement algorithms for each brain network. At the end, the results of the
correlations have been summarized to show the statistical comparison among
these sixteen brain networks.
|
[
{
"created": "Sat, 22 Oct 2022 21:27:58 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Mar 2023 16:41:37 GMT",
"version": "v2"
}
] |
2023-03-30
|
[
[
"Pran",
"Rakib Hassan",
""
]
] |
In this research, a number of popular network measurement algorithms have been applied to several brain networks (based on applicability of algorithms) for finding out statistical correlation among these popular network measurements which will help scientists to understand these popular network measurement algorithms and their applicability to brain networks. By analysing the results of correlations among these network measurement algorithms, statistical comparison among selected brain networks has also been summarized. Besides that, to understand each brain network, the visualization of each brain network and each brain network degree distribution histogram have been extrapolated. Six network measurement algorithms have been chosen to apply time to time on sixteen brain networks based on applicability of these network measurement algorithms and the results of these network measurements are put into a correlation method to show the relationship among these six network measurement algorithms for each brain network. At the end, the results of the correlations have been summarized to show the statistical comparison among these sixteen brain networks.
|
1403.6074
|
Anjan Nandi
|
Anjan K. Nandi, Anindita Bhadra, Annagiri Sumana, Sujata A. Deshpande
and Raghavendra Gadagkar
|
The Evolution of Complexity in Social Organization - A Model Using
Dominance-Subordinate Behaviour in Two Social Wasp Species
|
13 pages, 6 figures, 2 tables
|
Journal of theoretical biology 327 (2013): 34-44
|
10.1016/j.jtbi.2013.01.010
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dominance and subordinate behaviours are important ingredients in the social
organizations of group living animals. Behavioural observations on the two
eusocial species \textit{Ropalidia marginata} and \textit{Ropalidia
cyathiformis} suggest varying complexities in their social systems. The queen
of R. cyathiformis is an aggressive individual who usually holds the top
position in the dominance hierarchy although she does not necessarily show the
maximum number of acts of dominance, while the R. marginata queen rarely shows
aggression and usually does not hold the top position in the dominance
hierarchy of her colony. These differences are reflected in the distribution of
dominance-subordinate interactions among the hierarchically ranked individuals
in both the species. The percentage of dominance interactions decrease
gradually with hierarchical ranks in R. marginata while in R. cyathiformis it
first increases and then decreases. We use an agent-based model to investigate
the underlying mechanism that could give rise to the observed patterns for both
the species. The model assumes, besides some non-interacting individuals, that
the interaction probabilities of the agents depend on their pre-differentiated
winning abilities. Our simulations show that if the queen takes up a strategy
of being involved in a moderate number of dominance interactions, one could get
the pattern similar to R. cyathiformis, while taking up the strategy of very
low interactions by the queen could lead to the pattern of R. marginata. We
infer that both the species follow a common interaction pattern, while the
differences in their social organization are due to the slight changes in queen
as well as worker strategies. These changes in strategies are expected to
accompany the evolution of more complex societies from simpler ones.
|
[
{
"created": "Mon, 24 Mar 2014 18:34:54 GMT",
"version": "v1"
}
] |
2014-03-25
|
[
[
"Nandi",
"Anjan K.",
""
],
[
"Bhadra",
"Anindita",
""
],
[
"Sumana",
"Annagiri",
""
],
[
"Deshpande",
"Sujata A.",
""
],
[
"Gadagkar",
"Raghavendra",
""
]
] |
Dominance and subordinate behaviours are important ingredients in the social organizations of group living animals. Behavioural observations on the two eusocial species \textit{Ropalidia marginata} and \textit{Ropalidia cyathiformis} suggest varying complexities in their social systems. The queen of R. cyathiformis is an aggressive individual who usually holds the top position in the dominance hierarchy although she does not necessarily show the maximum number of acts of dominance, while the R. marginata queen rarely shows aggression and usually does not hold the top position in the dominance hierarchy of her colony. These differences are reflected in the distribution of dominance-subordinate interactions among the hierarchically ranked individuals in both the species. The percentage of dominance interactions decrease gradually with hierarchical ranks in R. marginata while in R. cyathiformis it first increases and then decreases. We use an agent-based model to investigate the underlying mechanism that could give rise to the observed patterns for both the species. The model assumes, besides some non-interacting individuals, that the interaction probabilities of the agents depend on their pre-differentiated winning abilities. Our simulations show that if the queen takes up a strategy of being involved in a moderate number of dominance interactions, one could get the pattern similar to R. cyathiformis, while taking up the strategy of very low interactions by the queen could lead to the pattern of R. marginata. We infer that both the species follow a common interaction pattern, while the differences in their social organization are due to the slight changes in queen as well as worker strategies. These changes in strategies are expected to accompany the evolution of more complex societies from simpler ones.
|
2111.14053
|
John Kevin Cava
|
John Kevin Cava, John Vant, Nicholas Ho, Ankita Shukla, Pavan Turaga,
Ross Maciejewski, and Abhishek Singharoy
|
Towards Conditional Generation of Minimal Action Potential Pathways for
Molecular Dynamics
|
Accepted to ELLIS ML4Molecules Workshop 2021
| null | null | null |
q-bio.BM cs.AI cs.LG physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we utilized generative models, and reformulate it for problems
in molecular dynamics (MD) simulation, by introducing an MD potential energy
component to our generative model. By incorporating potential energy as
calculated from TorchMD into a conditional generative framework, we attempt to
construct a low-potential energy route of transformation between the
helix~$\rightarrow$~coil structures of a protein. We show how to add an
additional loss function to conditional generative models, motivated by
potential energy of molecular configurations, and also present an optimization
technique for such an augmented loss function. Our results show the benefit of
this additional loss term on synthesizing realistic molecular trajectories.
|
[
{
"created": "Sun, 28 Nov 2021 05:17:47 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jan 2022 20:41:24 GMT",
"version": "v2"
}
] |
2022-01-07
|
[
[
"Cava",
"John Kevin",
""
],
[
"Vant",
"John",
""
],
[
"Ho",
"Nicholas",
""
],
[
"Shukla",
"Ankita",
""
],
[
"Turaga",
"Pavan",
""
],
[
"Maciejewski",
"Ross",
""
],
[
"Singharoy",
"Abhishek",
""
]
] |
In this paper, we utilized generative models, and reformulate it for problems in molecular dynamics (MD) simulation, by introducing an MD potential energy component to our generative model. By incorporating potential energy as calculated from TorchMD into a conditional generative framework, we attempt to construct a low-potential energy route of transformation between the helix~$\rightarrow$~coil structures of a protein. We show how to add an additional loss function to conditional generative models, motivated by potential energy of molecular configurations, and also present an optimization technique for such an augmented loss function. Our results show the benefit of this additional loss term on synthesizing realistic molecular trajectories.
|
1608.04059
|
Kriti Sen Sharma
|
Kriti Sen Sharma
|
Scout-It: Interior tomography using modified scout acquisition
| null | null | null | null |
q-bio.QM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Global scout views have been previously used to reduce interior
reconstruction artifacts in high-resolution micro-CT and C-arm systems. However
these methods cannot be directly used in the all-important domain of clinical
CT. This is because when the CT scan is truncated, the scout views are also
truncated. However many cases of truncation in clinical CT involve partial
truncation, where the anterio-posterior (AP) scout is truncated, but the
medio-lateral (ML) scout is non-truncated. In this paper, we show that in such
cases of partially truncated CT scans, a modified configuration may be used to
acquire non-truncated AP scout view, and ultimately allow for highly accurate
interior reconstruction.
|
[
{
"created": "Sun, 14 Aug 2016 04:42:30 GMT",
"version": "v1"
}
] |
2016-08-16
|
[
[
"Sharma",
"Kriti Sen",
""
]
] |
Global scout views have been previously used to reduce interior reconstruction artifacts in high-resolution micro-CT and C-arm systems. However these methods cannot be directly used in the all-important domain of clinical CT. This is because when the CT scan is truncated, the scout views are also truncated. However many cases of truncation in clinical CT involve partial truncation, where the anterio-posterior (AP) scout is truncated, but the medio-lateral (ML) scout is non-truncated. In this paper, we show that in such cases of partially truncated CT scans, a modified configuration may be used to acquire non-truncated AP scout view, and ultimately allow for highly accurate interior reconstruction.
|
1609.06480
|
Shihua Zhang
|
Wenwen Min, Juan Liu, Shihua Zhang
|
Network-regularized Sparse Logistic Regression Models for Clinical Risk
Prediction and Biomarker Discovery
|
10 pages, 3 figures
| null | null | null |
q-bio.GN cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Molecular profiling data (e.g., gene expression) has been used for clinical
risk prediction and biomarker discovery. However, it is necessary to integrate
other prior knowledge like biological pathways or gene interaction networks to
improve the predictive ability and biological interpretability of biomarkers.
Here, we first introduce a general regularized Logistic Regression (LR)
framework with regularized term $\lambda \|\bm{w}\|_1 +
\eta\bm{w}^T\bm{M}\bm{w}$, which can reduce to different penalties, including
Lasso, elastic net, and network-regularized terms with different $\bm{M}$. This
framework can be easily solved in a unified manner by a cyclic coordinate
descent algorithm which can avoid inverse matrix operation and accelerate the
computing speed. However, if those estimated $\bm{w}_i$ and $\bm{w}_j$ have
opposite signs, then the traditional network-regularized penalty may not
perform well. To address it, we introduce a novel network-regularized sparse LR
model with a new penalty $\lambda \|\bm{w}\|_1 + \eta|\bm{w}|^T\bm{M}|\bm{w}|$
to consider the difference between the absolute values of the coefficients. And
we develop two efficient algorithms to solve it. Finally, we test our methods
and compare them with the related ones using simulated and real data to show
their efficiency.
|
[
{
"created": "Wed, 21 Sep 2016 09:47:32 GMT",
"version": "v1"
}
] |
2016-09-22
|
[
[
"Min",
"Wenwen",
""
],
[
"Liu",
"Juan",
""
],
[
"Zhang",
"Shihua",
""
]
] |
Molecular profiling data (e.g., gene expression) has been used for clinical risk prediction and biomarker discovery. However, it is necessary to integrate other prior knowledge like biological pathways or gene interaction networks to improve the predictive ability and biological interpretability of biomarkers. Here, we first introduce a general regularized Logistic Regression (LR) framework with regularized term $\lambda \|\bm{w}\|_1 + \eta\bm{w}^T\bm{M}\bm{w}$, which can reduce to different penalties, including Lasso, elastic net, and network-regularized terms with different $\bm{M}$. This framework can be easily solved in a unified manner by a cyclic coordinate descent algorithm which can avoid inverse matrix operation and accelerate the computing speed. However, if those estimated $\bm{w}_i$ and $\bm{w}_j$ have opposite signs, then the traditional network-regularized penalty may not perform well. To address it, we introduce a novel network-regularized sparse LR model with a new penalty $\lambda \|\bm{w}\|_1 + \eta|\bm{w}|^T\bm{M}|\bm{w}|$ to consider the difference between the absolute values of the coefficients. And we develop two efficient algorithms to solve it. Finally, we test our methods and compare them with the related ones using simulated and real data to show their efficiency.
|
1910.06392
|
Muhammad Usman
|
Muhammad Usman and Jeong A Lee
|
AFP-CKSAAP: Prediction of Antifreeze Proteins Using Composition of
k-Spaced Amino Acid Pairs with Deep Neural Network
|
Accepted for oral presentation at 19th 2019 IEEE International
Conference on Bioinformatics and Bioengineering (IC-BIBE 2019) Copyright (c)
2019 IEEE
| null | null | null |
q-bio.QM cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Antifreeze proteins (AFPs) are the sub-set of ice binding proteins
indispensable for the species living in extreme cold weather. These proteins
bind to the ice crystals, hindering their growth into large ice lattice that
could cause physical damage. There are variety of AFPs found in numerous
organisms and due to the heterogeneous sequence characteristics, AFPs are found
to demonstrate a high degree of diversity, which makes their prediction a
challenging task. Herein, we propose a machine learning framework to deal with
this vigorous and diverse prediction problem using the manifolding learning
through composition of k-spaced amino acid pairs. We propose to use the deep
neural network with skipped connection and ReLU non-linearity to learn the
non-linear mapping of protein sequence descriptor and class label. The proposed
antifreeze protein prediction method called AFP-CKSAAP has shown to outperform
the contemporary methods, achieving excellent prediction scores on standard
dataset. The main evaluater for the performance of the proposed method in this
study is Youden's index whose high value is dependent on both sensitivity and
specificity. In particular, AFP-CKSAAP yields a Youden's index value of 0.82 on
the independent dataset, which is better than previous methods.
|
[
{
"created": "Wed, 11 Sep 2019 03:13:14 GMT",
"version": "v1"
}
] |
2019-10-16
|
[
[
"Usman",
"Muhammad",
""
],
[
"Lee",
"Jeong A",
""
]
] |
Antifreeze proteins (AFPs) are the sub-set of ice binding proteins indispensable for the species living in extreme cold weather. These proteins bind to the ice crystals, hindering their growth into large ice lattice that could cause physical damage. There are variety of AFPs found in numerous organisms and due to the heterogeneous sequence characteristics, AFPs are found to demonstrate a high degree of diversity, which makes their prediction a challenging task. Herein, we propose a machine learning framework to deal with this vigorous and diverse prediction problem using the manifolding learning through composition of k-spaced amino acid pairs. We propose to use the deep neural network with skipped connection and ReLU non-linearity to learn the non-linear mapping of protein sequence descriptor and class label. The proposed antifreeze protein prediction method called AFP-CKSAAP has shown to outperform the contemporary methods, achieving excellent prediction scores on standard dataset. The main evaluater for the performance of the proposed method in this study is Youden's index whose high value is dependent on both sensitivity and specificity. In particular, AFP-CKSAAP yields a Youden's index value of 0.82 on the independent dataset, which is better than previous methods.
|
1312.1057
|
Thierry Rabilloud
|
Thierry Rabilloud (LCBM)
|
How to use 2D gel electrophoresis in plant proteomics
| null |
Methods in Molecular Biology -Clifton then Totowa- 1072 (2014)
43-50
|
10.1007/978-1-62703-631-3_4
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two-dimensional electrophoresis has nurtured the birth of proteomics. It is
however no longer the exclusive setup used in proteomics, with the development
of shotgun proteomics techniques that appear more fancy and fashionable
nowadays.Nevertheless, 2D gel-based proteomics still has valuable features, and
sometimes unique ones, which make it often an attractive choice when a
proteomics strategy must be selected. These features are detailed in this
chapter, as is the rationale for selecting or not 2D gel-based proteomics as a
proteomic strategy.
|
[
{
"created": "Wed, 4 Dec 2013 08:53:08 GMT",
"version": "v1"
}
] |
2013-12-05
|
[
[
"Rabilloud",
"Thierry",
"",
"LCBM"
]
] |
Two-dimensional electrophoresis has nurtured the birth of proteomics. It is however no longer the exclusive setup used in proteomics, with the development of shotgun proteomics techniques that appear more fancy and fashionable nowadays.Nevertheless, 2D gel-based proteomics still has valuable features, and sometimes unique ones, which make it often an attractive choice when a proteomics strategy must be selected. These features are detailed in this chapter, as is the rationale for selecting or not 2D gel-based proteomics as a proteomic strategy.
|
1411.0573
|
Enrico Bibbona
|
Enrico Bibbona
|
Stochastic Chemical Kinetics. Theory and (Mostly) Systems Biological
Applications, P. Erdi, G. Lente. Springer (2014)
| null | null |
10.1016/j.biosystems.2014.10.004
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Book review of Stochastic Chemical Kinetics. Theory and (Mostly) Systems
Biological Applications, P. Erdi, G. Lente. Springer (2014)
|
[
{
"created": "Fri, 31 Oct 2014 16:47:58 GMT",
"version": "v1"
}
] |
2014-11-04
|
[
[
"Bibbona",
"Enrico",
""
]
] |
Book review of Stochastic Chemical Kinetics. Theory and (Mostly) Systems Biological Applications, P. Erdi, G. Lente. Springer (2014)
|
0910.0835
|
Thierry Mora
|
Thierry Mora, Howard Yu and Ned S. Wingreen
|
Modeling torque versus speed, shot noise, and rotational diffusion of
the bacterial flagellar motor
| null |
Phys. Rev. Lett. 103, 248102 (2009)
|
10.1103/PhysRevLett.103.248102
| null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a minimal physical model for the flagellar motor that enables
bacteria to swim. Our model explains the experimentally measured torque-speed
relationship of the proton-driven E. coli motor at various pH and temperature
conditions. In particular, the dramatic drop of torque at high rotation speeds
(the "knee") is shown to arise from saturation of the proton flux. Moreover, we
show that shot noise in the proton current dominates the diffusion of motor
rotation at low loads. This suggests a new way to probe the discreteness of the
energy source, analogous to measurements of charge quantization in
superconducting tunnel junctions.
|
[
{
"created": "Mon, 5 Oct 2009 19:59:53 GMT",
"version": "v1"
}
] |
2009-12-11
|
[
[
"Mora",
"Thierry",
""
],
[
"Yu",
"Howard",
""
],
[
"Wingreen",
"Ned S.",
""
]
] |
We present a minimal physical model for the flagellar motor that enables bacteria to swim. Our model explains the experimentally measured torque-speed relationship of the proton-driven E. coli motor at various pH and temperature conditions. In particular, the dramatic drop of torque at high rotation speeds (the "knee") is shown to arise from saturation of the proton flux. Moreover, we show that shot noise in the proton current dominates the diffusion of motor rotation at low loads. This suggests a new way to probe the discreteness of the energy source, analogous to measurements of charge quantization in superconducting tunnel junctions.
|
1909.02456
|
Anton Bovier
|
Anton Bovier
|
Stochastic models for adaptive dynamics: Scaling limits and diversity
|
19 page, This is a review paper that will appear in "Probabilistic
Structures in Evolution", ed. by E. Baake and A. Wakolbinger
|
in: Probabilistic Structures in Evolution (E. Baake and A.
Wakolbinger, eds.), EMS Press, Berlin, 2021, pp. 127--150
| null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
I discuss the so-called stochastic individual based model of adaptive
dynamics and in particular how different scaling limits can be obtained by
taking limits of large populations, small mutation rate, and small effect of
single mutations together with appropriate time rescaling. In particular, one
derives the trait substitution sequence, polymorphic evolution sequence, and
the canonical equation of adaptive dynamics. In addition, I show how the escape
from an evolutionary stable conditions can occur as a metastable transition.
This is a review paper that will appear in
"Probabilistic Structures in Evolution", ed. by E. Baake and A. Wakolbinger.
|
[
{
"created": "Thu, 5 Sep 2019 14:42:10 GMT",
"version": "v1"
}
] |
2021-07-06
|
[
[
"Bovier",
"Anton",
""
]
] |
I discuss the so-called stochastic individual based model of adaptive dynamics and in particular how different scaling limits can be obtained by taking limits of large populations, small mutation rate, and small effect of single mutations together with appropriate time rescaling. In particular, one derives the trait substitution sequence, polymorphic evolution sequence, and the canonical equation of adaptive dynamics. In addition, I show how the escape from an evolutionary stable conditions can occur as a metastable transition. This is a review paper that will appear in "Probabilistic Structures in Evolution", ed. by E. Baake and A. Wakolbinger.
|
2303.00240
|
Hongsong Feng
|
Hongsong Feng, Jian Jiang, Guo-Wei Wei
|
Machine-learning Repurposing of DrugBank Compounds for Opioid Use
Disorder
| null | null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Opioid use disorder (OUD) is a chronic and relapsing condition that involves
the continued and compulsive use of opioids despite harmful consequences. The
development of medications with improved efficacy and safety profiles for OUD
treatment is urgently needed. Drug repurposing is a promising option for drug
discovery due to its reduced cost and expedited approval procedures.
Computational approaches based on machine learning enable the rapid screening
of DrugBank compounds, identifying those with the potential to be repurposed
for OUD treatment. We collected inhibitor data for four major opioid receptors
and used advanced machine learning predictors of binding affinity that fuse the
gradient boosting decision tree algorithm with two natural language processing
(NLP)-based molecular fingerprints and one traditional 2D fingerprint. Using
these predictors, we systematically analyzed the binding affinities of DrugBank
compounds on four opioid receptors. Based on our machine learning predictions,
we were able to discriminate DrugBank compounds with various binding affinity
thresholds and selectivities for different receptors. The prediction results
were further analyzed for ADMET (absorption, distribution, metabolism,
excretion, and toxicity), which provided guidance on repurposing DrugBank
compounds for the inhibition of selected opioid receptors. The pharmacological
effects of these compounds for OUD treatment need to be tested in further
experimental studies and clinical trials. Our machine learning studies provide
a valuable platform for drug discovery in the context of OUD treatment.
|
[
{
"created": "Wed, 1 Mar 2023 05:23:06 GMT",
"version": "v1"
}
] |
2023-03-02
|
[
[
"Feng",
"Hongsong",
""
],
[
"Jiang",
"Jian",
""
],
[
"Wei",
"Guo-Wei",
""
]
] |
Opioid use disorder (OUD) is a chronic and relapsing condition that involves the continued and compulsive use of opioids despite harmful consequences. The development of medications with improved efficacy and safety profiles for OUD treatment is urgently needed. Drug repurposing is a promising option for drug discovery due to its reduced cost and expedited approval procedures. Computational approaches based on machine learning enable the rapid screening of DrugBank compounds, identifying those with the potential to be repurposed for OUD treatment. We collected inhibitor data for four major opioid receptors and used advanced machine learning predictors of binding affinity that fuse the gradient boosting decision tree algorithm with two natural language processing (NLP)-based molecular fingerprints and one traditional 2D fingerprint. Using these predictors, we systematically analyzed the binding affinities of DrugBank compounds on four opioid receptors. Based on our machine learning predictions, we were able to discriminate DrugBank compounds with various binding affinity thresholds and selectivities for different receptors. The prediction results were further analyzed for ADMET (absorption, distribution, metabolism, excretion, and toxicity), which provided guidance on repurposing DrugBank compounds for the inhibition of selected opioid receptors. The pharmacological effects of these compounds for OUD treatment need to be tested in further experimental studies and clinical trials. Our machine learning studies provide a valuable platform for drug discovery in the context of OUD treatment.
|
q-bio/0310016
|
David Biron
|
D. Biron, P. Libros, D. Sagi, D. Mirelman and E. Moses
|
Cytokinesis: the initial linear phase crosses over to a multiplicity of
non-linear endings
| null | null | null | null |
q-bio.CB
| null |
We investigate the final stage of cytokinesis in two types of amoeba,
pointing out the existence of biphasic furrow contraction. The first phase is
characterized by a constant contraction rate, is better studied, and seems
universal to a large extent. The second phase is more diverse. In Dictyostelium
discoideum the transition involves a change in the rate of contraction, and
occurs when the width of the cleavage furrow is comparable to the height of the
cell. In Entamoeba invadens the contractile ring carries the cell through the
first phase, but cannot complete the second stage of cytokinesis. As a result,
a cooperative mechanism has evolved in that organism, where a neighboring
amoeba performs directed motion towards the dividing cell, and physically
causes separation by means of extending a pseudopod. We expand here on a
previous report of this novel chemotactic signaling mechanism.
|
[
{
"created": "Tue, 14 Oct 2003 15:32:38 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Biron",
"D.",
""
],
[
"Libros",
"P.",
""
],
[
"Sagi",
"D.",
""
],
[
"Mirelman",
"D.",
""
],
[
"Moses",
"E.",
""
]
] |
We investigate the final stage of cytokinesis in two types of amoeba, pointing out the existence of biphasic furrow contraction. The first phase is characterized by a constant contraction rate, is better studied, and seems universal to a large extent. The second phase is more diverse. In Dictyostelium discoideum the transition involves a change in the rate of contraction, and occurs when the width of the cleavage furrow is comparable to the height of the cell. In Entamoeba invadens the contractile ring carries the cell through the first phase, but cannot complete the second stage of cytokinesis. As a result, a cooperative mechanism has evolved in that organism, where a neighboring amoeba performs directed motion towards the dividing cell, and physically causes separation by means of extending a pseudopod. We expand here on a previous report of this novel chemotactic signaling mechanism.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.