id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.11342 | Lucrezia Ravera | Silke Klemm, Lucrezia Ravera | On SIR-type epidemiological models and population heterogeneity effects | 16 pages, 6 figures | Physica A 624 (2023), 128928 | 10.1016/j.physa.2023.128928 | null | q-bio.PE gr-qc hep-th math.PR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we elaborate on homogeneous and heterogeneous SIR-type
epidemiological models. We find an unexpected correspondence between the
epidemic trajectory of a transmissible disease in a homogeneous SIR-type model
and radial null geodesics in the Schwarzschild spacetime. We also discuss
modeling of population heterogeneity effects by considering both a one- and
two-parameter gamma-distributed function for the initial susceptibility
distribution, and deriving the associated herd immunity threshold. We
furthermore describe how mitigation measures can be taken into account by model
fitting.
| [
{
"created": "Thu, 20 Oct 2022 15:22:00 GMT",
"version": "v1"
}
] | 2023-08-04 | [
[
"Klemm",
"Silke",
""
],
[
"Ravera",
"Lucrezia",
""
]
] | In this paper we elaborate on homogeneous and heterogeneous SIR-type epidemiological models. We find an unexpected correspondence between the epidemic trajectory of a transmissible disease in a homogeneous SIR-type model and radial null geodesics in the Schwarzschild spacetime. We also discuss modeling of population heterogeneity effects by considering both a one- and two-parameter gamma-distributed function for the initial susceptibility distribution, and deriving the associated herd immunity threshold. We furthermore describe how mitigation measures can be taken into account by model fitting. |
2306.11622 | Lucas Czech | Lucas Czech, Jeffrey P. Spence, Mois\'es Exp\'osito-Alonso | grenedalf: population genetic statistics for the next generation of pool
sequencing | null | null | null | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Pool sequencing is an efficient method for capturing genome-wide allele
frequencies from multiple individuals, with broad applications such as studying
adaptation in Evolve-and-Resequence experiments, monitoring of genetic
diversity in wild populations, and genotype-to-phenotype mapping. Here, we
present grenedalf, a command line tool written in C++ that implements common
population genetic statistics such as $\theta$, Tajima's D, and FST for Pool
sequencing. It is orders of magnitude faster than current tools, and is focused
on providing usability and scalability, while also offering a plethora of input
file formats and convenience options.
| [
{
"created": "Tue, 20 Jun 2023 15:48:48 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jun 2024 13:49:37 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Jun 2024 12:01:05 GMT",
"version": "v3"
}
] | 2024-06-10 | [
[
"Czech",
"Lucas",
""
],
[
"Spence",
"Jeffrey P.",
""
],
[
"Expósito-Alonso",
"Moisés",
""
]
] | Pool sequencing is an efficient method for capturing genome-wide allele frequencies from multiple individuals, with broad applications such as studying adaptation in Evolve-and-Resequence experiments, monitoring of genetic diversity in wild populations, and genotype-to-phenotype mapping. Here, we present grenedalf, a command line tool written in C++ that implements common population genetic statistics such as $\theta$, Tajima's D, and FST for Pool sequencing. It is orders of magnitude faster than current tools, and is focused on providing usability and scalability, while also offering a plethora of input file formats and convenience options. |
1204.4896 | Kirill Korolev S | Kirill S Korolev, Melanie J I M\"uller, Nilay Karahan, Andrew W
Murray, Oskar Hallatschek and David R Nelson | Selective sweeps in growing microbial colonies | Supplementary information available at arXiv:1204.6328 | Physical Biology 9, 026008 (2012) | 10.1088/1478-3975/9/2/026008 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolutionary experiments with microbes are a powerful tool to study mutations
and natural selection. These experiments, however, are often limited to the
well-mixed environments of a test tube or a chemostat. Since spatial
organization can significantly affect evolutionary dynamics, the need is
growing for evolutionary experiments in spatially structured environments. The
surface of a Petri dish provides such an environment, but a more detailed
understanding of microbial growth on Petri dishes is necessary to interpret
such experiments. We formulate a simple deterministic reaction-diffusion model,
which successfully predicts the spatial patterns created by two competing
species during colony expansion. We also derive the shape of these patterns
analytically without relying on microscopic details of the model. In
particular, we find that the relative fitness of two microbial strains can be
estimated from the logarithmic spirals created by selective sweeps. The theory
is tested with strains of the budding yeast Saccharomyces cerevisiae, for
spatial competitions with different initial conditions and for a range of
relative fitnesses. The reaction-diffusion model also connects the microscopic
parameters like growth rates and diffusion constants with macroscopic spatial
patterns and predicts the relationship between fitness in liquid cultures and
on Petri dishes, which we confirmed experimentally. Spatial sector patterns
therefore provide an alternative fitness assay to the commonly used liquid
culture fitness assays.
| [
{
"created": "Sun, 22 Apr 2012 14:48:13 GMT",
"version": "v1"
},
{
"created": "Tue, 1 May 2012 16:30:07 GMT",
"version": "v2"
}
] | 2012-05-02 | [
[
"Korolev",
"Kirill S",
""
],
[
"Müller",
"Melanie J I",
""
],
[
"Karahan",
"Nilay",
""
],
[
"Murray",
"Andrew W",
""
],
[
"Hallatschek",
"Oskar",
""
],
[
"Nelson",
"David R",
""
]
] | Evolutionary experiments with microbes are a powerful tool to study mutations and natural selection. These experiments, however, are often limited to the well-mixed environments of a test tube or a chemostat. Since spatial organization can significantly affect evolutionary dynamics, the need is growing for evolutionary experiments in spatially structured environments. The surface of a Petri dish provides such an environment, but a more detailed understanding of microbial growth on Petri dishes is necessary to interpret such experiments. We formulate a simple deterministic reaction-diffusion model, which successfully predicts the spatial patterns created by two competing species during colony expansion. We also derive the shape of these patterns analytically without relying on microscopic details of the model. In particular, we find that the relative fitness of two microbial strains can be estimated from the logarithmic spirals created by selective sweeps. The theory is tested with strains of the budding yeast Saccharomyces cerevisiae, for spatial competitions with different initial conditions and for a range of relative fitnesses. The reaction-diffusion model also connects the microscopic parameters like growth rates and diffusion constants with macroscopic spatial patterns and predicts the relationship between fitness in liquid cultures and on Petri dishes, which we confirmed experimentally. Spatial sector patterns therefore provide an alternative fitness assay to the commonly used liquid culture fitness assays. |
1310.3459 | Vladimir Chechetkin R. | V.R. Chechetkin | Statistics of genome architecture | 25 pages, 8 figures, 1 table | null | 10.1016/j.physleta.2013.10.021 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main statistical distributions applicable to the analysis of genome
architecture and genome tracks are briefly discussed and critically assessed.
Although the observed features in distributions of element lengths can be
equally well fitted by the different statistical approximations, the
interpretation of observed regularities may strongly depend on the chosen
scheme. We discuss the possible evolution scenarios and describe the main
characteristics obtained with different distributions. The expression for the
assessment of levels in hierarchical chromatin folding is derived and the
quantitative measure of genome architecture inhomogeneity is suggested. This
theory provides the ground for the regular statistical study of genome
architecture and genome tracks.
| [
{
"created": "Sun, 13 Oct 2013 09:26:19 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Oct 2013 13:12:26 GMT",
"version": "v2"
}
] | 2015-06-17 | [
[
"Chechetkin",
"V. R.",
""
]
] | The main statistical distributions applicable to the analysis of genome architecture and genome tracks are briefly discussed and critically assessed. Although the observed features in distributions of element lengths can be equally well fitted by the different statistical approximations, the interpretation of observed regularities may strongly depend on the chosen scheme. We discuss the possible evolution scenarios and describe the main characteristics obtained with different distributions. The expression for the assessment of levels in hierarchical chromatin folding is derived and the quantitative measure of genome architecture inhomogeneity is suggested. This theory provides the ground for the regular statistical study of genome architecture and genome tracks. |
2402.12507 | Laura Andrea Barrero Guevara | Laura Andrea Barrero Guevara, Sarah C Kramer, Tobias Kurth and
Matthieu Domenech de Cell\`es | How causal inference concepts can guide research into the effects of
climate on infectious diseases | null | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | A pressing question resulting from global warming is how infectious diseases
will be affected by climate change. Answering this question requires research
into the effects of weather on the population dynamics of transmission and
infection; elucidating these effects, however, has proven difficult due to the
challenges of assessing causality from the predominantly observational data
available in epidemiological research. Here, we show how concepts from causal
inference -- the sub-field of statistics aiming at inferring causality from
data -- can guide that research. Through a series of case studies, we
illustrate how such concepts can help assess study design and strategically
choose a study's location, evaluate and reduce the risk of bias, and interpret
the multifaceted effects of meteorological variables on transmission. More
broadly, we argue that interdisciplinary approaches based on explicit causal
frameworks are crucial for reliably estimating the effect of weather and
accurately predicting the consequences of climate change.
| [
{
"created": "Mon, 19 Feb 2024 20:17:48 GMT",
"version": "v1"
}
] | 2024-02-21 | [
[
"Guevara",
"Laura Andrea Barrero",
""
],
[
"Kramer",
"Sarah C",
""
],
[
"Kurth",
"Tobias",
""
],
[
"de Cellès",
"Matthieu Domenech",
""
]
] | A pressing question resulting from global warming is how infectious diseases will be affected by climate change. Answering this question requires research into the effects of weather on the population dynamics of transmission and infection; elucidating these effects, however, has proven difficult due to the challenges of assessing causality from the predominantly observational data available in epidemiological research. Here, we show how concepts from causal inference -- the sub-field of statistics aiming at inferring causality from data -- can guide that research. Through a series of case studies, we illustrate how such concepts can help assess study design and strategically choose a study's location, evaluate and reduce the risk of bias, and interpret the multifaceted effects of meteorological variables on transmission. More broadly, we argue that interdisciplinary approaches based on explicit causal frameworks are crucial for reliably estimating the effect of weather and accurately predicting the consequences of climate change. |
2407.00976 | Nima Dehghani | Colleen J. Gillon, Cody Baker, Ryan Ly, Edoardo Balzani, Bingni W.
Brunton, Manuel Schottdorf, Satrajit Ghosh, Nima Dehghani | ODIN: Open Data In Neurophysiology: Advancements, Solutions & Challenges | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Across the life sciences, an ongoing effort over the last 50 years has made
data and methods more reproducible and transparent. This openness has led to
transformative insights and vastly accelerated scientific progress. For
example, structural biology and genomics have undertaken systematic collection
and publication of protein sequences and structures over the past half-century,
and these data have led to scientific breakthroughs that were unthinkable when
data collection first began. We believe that neuroscience is poised to follow
the same path, and that principles of open data and open science will transform
our understanding of the nervous system in ways that are impossible to predict
at the moment.
To this end, new social structures along with active and open scientific
communities are essential to facilitate and expand the still limited adoption
of open science practices in our field. Unified by shared values of openness,
we set out to organize a symposium for Open Data in Neuroscience (ODIN) to
strengthen our community and facilitate transformative neuroscience research at
large. In this report, we share what we learned during this first ODIN event.
We also lay out plans for how to grow this movement, document emerging
conversations, and propose a path toward a better and more transparent science
of tomorrow.
| [
{
"created": "Mon, 1 Jul 2024 05:26:30 GMT",
"version": "v1"
}
] | 2024-07-02 | [
[
"Gillon",
"Colleen J.",
""
],
[
"Baker",
"Cody",
""
],
[
"Ly",
"Ryan",
""
],
[
"Balzani",
"Edoardo",
""
],
[
"Brunton",
"Bingni W.",
""
],
[
"Schottdorf",
"Manuel",
""
],
[
"Ghosh",
"Satrajit",
""
],
[
"Dehghani",
"Nima",
""
]
] | Across the life sciences, an ongoing effort over the last 50 years has made data and methods more reproducible and transparent. This openness has led to transformative insights and vastly accelerated scientific progress. For example, structural biology and genomics have undertaken systematic collection and publication of protein sequences and structures over the past half-century, and these data have led to scientific breakthroughs that were unthinkable when data collection first began. We believe that neuroscience is poised to follow the same path, and that principles of open data and open science will transform our understanding of the nervous system in ways that are impossible to predict at the moment. To this end, new social structures along with active and open scientific communities are essential to facilitate and expand the still limited adoption of open science practices in our field. Unified by shared values of openness, we set out to organize a symposium for Open Data in Neuroscience (ODIN) to strengthen our community and facilitate transformative neuroscience research at large. In this report, we share what we learned during this first ODIN event. We also lay out plans for how to grow this movement, document emerging conversations, and propose a path toward a better and more transparent science of tomorrow. |
0806.4276 | Alessandro Pluchino | Alessandro Pluchino, Andrea Rapisarda and Vito Latora | Communities recognition in the Chesapeake Bay ecosystem by dynamical
clustering algorithms based on different oscillators systems | 8 pages, 7 figures, Proceedings of the International Workshop on
"Ecological Complex Systems: Stochastic Dynamics and Patterns", 22-26 July
2007 - Terrasini (Palermo), Italy | null | 10.1140/epjb/e2008-00292-8 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have recently introduced an efficient method for the detection and
identification of modules in complex networks, based on the de-synchronization
properties (dynamical clustering) of phase oscillators. In this paper we apply
the dynamical clustering technique to the identification of communities of
marine organisms living in the Chesapeake Bay food web. We show that our
algorithm is able to perform a very reliable classification of the real
communities existing in this ecosystem by using different kinds of dynamical
oscillators. We compare also our results with those of other methods for the
detection of community structures in complex networks.
| [
{
"created": "Thu, 26 Jun 2008 10:17:13 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Jul 2008 17:01:05 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Pluchino",
"Alessandro",
""
],
[
"Rapisarda",
"Andrea",
""
],
[
"Latora",
"Vito",
""
]
] | We have recently introduced an efficient method for the detection and identification of modules in complex networks, based on the de-synchronization properties (dynamical clustering) of phase oscillators. In this paper we apply the dynamical clustering technique to the identification of communities of marine organisms living in the Chesapeake Bay food web. We show that our algorithm is able to perform a very reliable classification of the real communities existing in this ecosystem by using different kinds of dynamical oscillators. We compare also our results with those of other methods for the detection of community structures in complex networks. |
1702.08345 | Andrew Krause | Andrew L. Krause, Dmitry Beliaev, Robert A. Van Gorder, Sarah L.
Waters | Bifurcations and dynamics emergent from lattice and continuum models of
bioactive porous media | 30 pages, 21 figures | null | 10.1142/S0218127418300379 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study dynamics emergent from a two-dimensional reaction--diffusion process
modelled via a finite lattice dynamical system, as well as an analogous PDE
system, involving spatially nonlocal interactions. These models govern the
evolution of cells in a bioactive porous medium, with evolution of the local
cell density depending on a coupled quasi--static fluid flow problem. We
demonstrate differences emergent from the choice of a discrete lattice or a
continuum for the spatial domain of such a process. We find long--time
oscillations and steady states in cell density in both lattice and continuum
models, but that the continuum model only exhibits solutions with vertical
symmetry, independent of initial data, whereas the finite lattice admits
asymmetric oscillations and steady states arising from symmetry-breaking
bifurcations. We conjecture that it is the structure of the finite lattice
which allows for more complicated asymmetric dynamics. Our analysis suggests
that the origin of both types of oscillations is a nonlocal reaction-diffusion
mechanism mediated by quasi-static fluid flow.
| [
{
"created": "Fri, 24 Feb 2017 16:10:32 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Jun 2017 22:16:13 GMT",
"version": "v2"
},
{
"created": "Mon, 4 Jun 2018 17:45:18 GMT",
"version": "v3"
}
] | 2018-11-14 | [
[
"Krause",
"Andrew L.",
""
],
[
"Beliaev",
"Dmitry",
""
],
[
"Van Gorder",
"Robert A.",
""
],
[
"Waters",
"Sarah L.",
""
]
] | We study dynamics emergent from a two-dimensional reaction--diffusion process modelled via a finite lattice dynamical system, as well as an analogous PDE system, involving spatially nonlocal interactions. These models govern the evolution of cells in a bioactive porous medium, with evolution of the local cell density depending on a coupled quasi--static fluid flow problem. We demonstrate differences emergent from the choice of a discrete lattice or a continuum for the spatial domain of such a process. We find long--time oscillations and steady states in cell density in both lattice and continuum models, but that the continuum model only exhibits solutions with vertical symmetry, independent of initial data, whereas the finite lattice admits asymmetric oscillations and steady states arising from symmetry-breaking bifurcations. We conjecture that it is the structure of the finite lattice which allows for more complicated asymmetric dynamics. Our analysis suggests that the origin of both types of oscillations is a nonlocal reaction-diffusion mechanism mediated by quasi-static fluid flow. |
2203.07753 | Alexandra Blenkinsop | Alexandra Blenkinsop, M\'elodie Monod, Ard van Sighem, Nikos Pantazis,
Daniela Bezemer, Eline Op de Coul, Thijs van de Laar, Christophe Fraser,
Maria Prins, Peter Reiss, Godelieve de Bree, Oliver Ratmann | Estimating the potential to prevent locally acquired HIV infections in a
UNAIDS Fast-Track City, Amsterdam | null | null | null | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by-sa/4.0/ | Amsterdam and other UNAIDS Fast-Track cities aim for zero new HIV infections.
Utilising molecular and clinical data of the ATHENA observational HIV cohort,
our primary aims are to estimate the proportion of undiagnosed HIV infections
and the proportion of locally acquired infections in Amsterdam in 2014-2018,
both in MSM and heterosexuals and Dutch-born and foreign-born individuals.
We located diagnosed HIV infections in Amsterdam using postcode data at time
of registration to the cohort, and estimated their date of infection using
clinical HIV data. We then inferred the proportion undiagnosed from the
estimated times to diagnosis. To determine sources of Amsterdam infections, we
used HIV sequences of people living with HIV (PLHIV) within a background of
other Dutch and international sequences to phylogenetically reconstruct
transmission chains. Frequent late diagnoses indicate that more recent
phylogenetically observed chains are increasingly incomplete, and we use a
Bayesian model to estimate the actual growth of Amsterdam transmission chains,
and the proportion of locally acquired infections.
We estimate that 20% [95% CrI 18-22%] of infections acquired among MSM
between 2014-2018 were undiagnosed by the start of 2019, and 44% [37-50%] among
heterosexuals, with variation by place of birth. The estimated proportion of
MSM infections in 2014-2018 that were locally acquired was 68% [61-74%], with
no substantial differences by region of birth. In heterosexuals, this was 57%
[41-71%] overall, with heterogeneity by place of birth.
The data indicate substantial potential to further curb local transmission,
in both MSM and heterosexual Amsterdam residents. In 2014-2018 the largest
proportion of local transmissions in Amsterdam are estimated to have occurred
in foreign-born MSM, who would likely benefit most from intensified
interventions.
| [
{
"created": "Tue, 15 Mar 2022 09:59:50 GMT",
"version": "v1"
}
] | 2022-03-16 | [
[
"Blenkinsop",
"Alexandra",
""
],
[
"Monod",
"Mélodie",
""
],
[
"van Sighem",
"Ard",
""
],
[
"Pantazis",
"Nikos",
""
],
[
"Bezemer",
"Daniela",
""
],
[
"de Coul",
"Eline Op",
""
],
[
"van de Laar",
"Thijs",
""
],
[
"Fraser",
"Christophe",
""
],
[
"Prins",
"Maria",
""
],
[
"Reiss",
"Peter",
""
],
[
"de Bree",
"Godelieve",
""
],
[
"Ratmann",
"Oliver",
""
]
] | Amsterdam and other UNAIDS Fast-Track cities aim for zero new HIV infections. Utilising molecular and clinical data of the ATHENA observational HIV cohort, our primary aims are to estimate the proportion of undiagnosed HIV infections and the proportion of locally acquired infections in Amsterdam in 2014-2018, both in MSM and heterosexuals and Dutch-born and foreign-born individuals. We located diagnosed HIV infections in Amsterdam using postcode data at time of registration to the cohort, and estimated their date of infection using clinical HIV data. We then inferred the proportion undiagnosed from the estimated times to diagnosis. To determine sources of Amsterdam infections, we used HIV sequences of people living with HIV (PLHIV) within a background of other Dutch and international sequences to phylogenetically reconstruct transmission chains. Frequent late diagnoses indicate that more recent phylogenetically observed chains are increasingly incomplete, and we use a Bayesian model to estimate the actual growth of Amsterdam transmission chains, and the proportion of locally acquired infections. We estimate that 20% [95% CrI 18-22%] of infections acquired among MSM between 2014-2018 were undiagnosed by the start of 2019, and 44% [37-50%] among heterosexuals, with variation by place of birth. The estimated proportion of MSM infections in 2014-2018 that were locally acquired was 68% [61-74%], with no substantial differences by region of birth. In heterosexuals, this was 57% [41-71%] overall, with heterogeneity by place of birth. The data indicate substantial potential to further curb local transmission, in both MSM and heterosexual Amsterdam residents. In 2014-2018 the largest proportion of local transmissions in Amsterdam are estimated to have occurred in foreign-born MSM, who would likely benefit most from intensified interventions. |
2106.15531 | Umberto Ferraro Petrillo | Giuseppe Cattaneo, Umberto Ferraro Petrillo, Raffaele Giancarlo,
Francesco Palini, Chiara Romualdi | The Power of Word-Frequency Based Alignment-Free Functions: a
Comprehensive Large-scale Experimental Analysis -- Version 3 | null | null | null | null | q-bio.GN cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Alignment-free (AF) distance/similarity functions are a key tool
for sequence analysis. Experimental studies on real datasets abound and, to
some extent, there are also studies regarding their control of false positive
rate (Type I error). However, assessment of their power, i.e., their ability to
identify true similarity, has been limited to some members of the D2 family by
experimental studies on short sequences, not adequate for current applications,
where sequence lengths may vary considerably. Such a State of the Art is
methodologically problematic, since information regarding a key feature such as
power is either missing or limited. Results: By concentrating on a
representative set of word-frequency based AF functions, we perform the first
coherent and uniform evaluation of the power, involving also Type I error for
completeness. Two Alternative models of important genomic features (CIS
Regulatory Modules and Horizontal Gene Transfer), a wide range of sequence
lengths from a few thousand to millions, and different values of k have been
used. As a result, we provide a characterization of those AF functions that is
novel and informative. Indeed, we identify weak and strong points of each
function considered, which may be used as a guide to choose one for analysis
tasks. Remarkably, of the fifteen functions that we have considered, only four
stand out, with small differences between small and short sequence length
scenarios. Finally, in order to encourage the use of our methodology for
validation of future AF functions, the Big Data platform supporting it is
public.
| [
{
"created": "Sun, 27 Jun 2021 06:26:39 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2021 15:09:02 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Oct 2021 14:24:20 GMT",
"version": "v3"
}
] | 2021-10-20 | [
[
"Cattaneo",
"Giuseppe",
""
],
[
"Petrillo",
"Umberto Ferraro",
""
],
[
"Giancarlo",
"Raffaele",
""
],
[
"Palini",
"Francesco",
""
],
[
"Romualdi",
"Chiara",
""
]
] | Motivation: Alignment-free (AF) distance/similarity functions are a key tool for sequence analysis. Experimental studies on real datasets abound and, to some extent, there are also studies regarding their control of false positive rate (Type I error). However, assessment of their power, i.e., their ability to identify true similarity, has been limited to some members of the D2 family by experimental studies on short sequences, not adequate for current applications, where sequence lengths may vary considerably. Such a State of the Art is methodologically problematic, since information regarding a key feature such as power is either missing or limited. Results: By concentrating on a representative set of word-frequency based AF functions, we perform the first coherent and uniform evaluation of the power, involving also Type I error for completeness. Two Alternative models of important genomic features (CIS Regulatory Modules and Horizontal Gene Transfer), a wide range of sequence lengths from a few thousand to millions, and different values of k have been used. As a result, we provide a characterization of those AF functions that is novel and informative. Indeed, we identify weak and strong points of each function considered, which may be used as a guide to choose one for analysis tasks. Remarkably, of the fifteen functions that we have considered, only four stand out, with small differences between small and short sequence length scenarios. Finally, in order to encourage the use of our methodology for validation of future AF functions, the Big Data platform supporting it is public. |
2404.13265 | Guohao Wang | Guohao Wang, Ting Liu, Hongqiang Lyu and Ze Liu | F5C-finder: An Explainable and Ensemble Biological Language Model for
Predicting 5-Formylcytidine Modifications on mRNA | 34 pages, 10 figures, journal | null | null | null | q-bio.GN cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a prevalent and dynamically regulated epigenetic modification,
5-formylcytidine (f5C) is crucial in various biological processes. However,
traditional experimental methods for f5C detection are often laborious and
time-consuming, limiting their ability to map f5C sites across the
transcriptome comprehensively. While computational approaches offer a
cost-effective and high-throughput alternative, no recognition model for f5C
has been developed to date. Drawing inspiration from language models in natural
language processing, this study presents f5C-finder, an ensemble neural
network-based model utilizing multi-head attention for the identification of
f5C. Five distinct feature extraction methods were employed to construct five
individual artificial neural networks, and these networks were subsequently
integrated through ensemble learning to create f5C-finder. 10-fold
cross-validation and independent tests demonstrate that f5C-finder achieves
state-of-the-art (SOTA) performance with AUC of 0.807 and 0.827, respectively.
The result highlights the effectiveness of biological language model in
capturing both the order (sequential) and functional meaning (semantics) within
genomes. Furthermore, the built-in interpretability allows us to understand
what the model is learning, creating a bridge between identifying key
sequential elements and a deeper exploration of their biological functions.
| [
{
"created": "Sat, 20 Apr 2024 04:24:45 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Wang",
"Guohao",
""
],
[
"Liu",
"Ting",
""
],
[
"Lyu",
"Hongqiang",
""
],
[
"Liu",
"Ze",
""
]
] | As a prevalent and dynamically regulated epigenetic modification, 5-formylcytidine (f5C) is crucial in various biological processes. However, traditional experimental methods for f5C detection are often laborious and time-consuming, limiting their ability to map f5C sites across the transcriptome comprehensively. While computational approaches offer a cost-effective and high-throughput alternative, no recognition model for f5C has been developed to date. Drawing inspiration from language models in natural language processing, this study presents f5C-finder, an ensemble neural network-based model utilizing multi-head attention for the identification of f5C. Five distinct feature extraction methods were employed to construct five individual artificial neural networks, and these networks were subsequently integrated through ensemble learning to create f5C-finder. 10-fold cross-validation and independent tests demonstrate that f5C-finder achieves state-of-the-art (SOTA) performance with AUC of 0.807 and 0.827, respectively. The result highlights the effectiveness of biological language model in capturing both the order (sequential) and functional meaning (semantics) within genomes. Furthermore, the built-in interpretability allows us to understand what the model is learning, creating a bridge between identifying key sequential elements and a deeper exploration of their biological functions. |
2006.06429 | Charles Schaper | Charles D. Schaper | Intermolecular Enzymatic Encoding of Nucleic Acid, Steroid Complexes: A
New Theory on the Chemical Origin of Life Based on Evidence of Structural
Symmetry | 43 pages, 22 figures | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The origin of life is one of the greatest mysteries. The mechanism for the
synthesis of DNA is synonymous with the chemical origin of life, and theories
have been developed along many lines of reasoning, but resolving all
requirements remains a challenge, such as defining an objective path to produce
sequences of encoded nucleotides paired as adenine to thymine and guanine to
cytosine. Here, a new theory for the origin of DNA is presented. The theory is
based upon three lines of experimental evidence and agreement of structural
symmetry between DNA nucleotides and steroid hormones, and introduces a new
concept of synthesizing both structural and functional characteristics of DNA
at the same time within a single unified complex of interleaved tetra-ringed
structures, steroid molecules which form reaction vessels that serve as
co-enzymatic building blocks. The new theory indicates that the establishment
of the DNA nucleotide code is among the very first synthesis steps. Moreover,
as a consequence of the intermolecular synthesis of both structural and
functional characteristics within a unified complex, there is a culminating
process step that sets forth in motion both DNA and the steroid structures that
subsequently trigger replication and transcription, as well as protein
translation, thereby resulting in the instantaneous release of life function.
| [
{
"created": "Tue, 9 Jun 2020 20:57:18 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jun 2020 19:30:55 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Jul 2020 06:07:39 GMT",
"version": "v3"
}
] | 2020-07-07 | [
[
"Schaper",
"Charles D.",
""
]
] | The origin of life is one of the greatest mysteries. The mechanism for the synthesis of DNA is synonymous with the chemical origin of life, and theories have been developed along many lines of reasoning, but resolving all requirements remains a challenge, such as defining an objective path to produce sequences of encoded nucleotides paired as adenine to thymine and guanine to cytosine. Here, a new theory for the origin of DNA is presented. The theory is based upon three lines of experimental evidence and agreement of structural symmetry between DNA nucleotides and steroid hormones, and introduces a new concept of synthesizing both structural and functional characteristics of DNA at the same time within a single unified complex of interleaved tetra-ringed structures, steroid molecules which form reaction vessels that serve as co-enzymatic building blocks. The new theory indicates that the establishment of the DNA nucleotide code is among the very first synthesis steps. Moreover, as a consequence of the intermolecular synthesis of both structural and functional characteristics within a unified complex, there is a culminating process step that sets forth in motion both DNA and the steroid structures that subsequently trigger replication and transcription, as well as protein translation, thereby resulting in the instantaneous release of life function. |
2307.14537 | Mareike Fischer | Sophie J. Kersting and A. Luise K\"uhn and Mareike Fischer | Measuring 3D tree imbalance of plant models using graph-theoretical
approaches | null | null | null | null | q-bio.QM math.CO q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imbalance in the 3D structure of plants can be an important indicator of
insufficient light or nutrient supply, as well as excessive wind, (formerly
present) physical barriers, neighbor or storm damage. It can also be a simple
means to detect certain illnesses, since some diseases like the apple
proliferation disease, an infection with the barley yellow dwarf virus or plant
canker can cause abnormal growth, like \enquote{witches' brooms} or burls,
resulting in a deviating 3D plant architecture. However, quantifying imbalance
of plant growth is not an easy task, and it requires a mathematically sound 3D
model of plants to which imbalance indices can be applied. Current models of
plants are often based on stacked cylinders or voxel matrices and do not allow
for measuring the degree of 3D imbalance in the branching structure of the
whole plant.
On the other hand, various imbalance indices are readily available for
so-called graph-theoretical trees and are frequently used in areas like
phylogenetics and computer science. While only some basic ideas of these
indices can be transferred to the 3D setting, graph-theoretical trees are a
logical foundation for 3D plant models that allow for elegant and natural
imbalance measures.
In this manuscript, our aim is thus threefold: We first present a new
graph-theoretical 3D model of plants and discuss desirable properties of
imbalance measures in the 3D setting. We then introduce and analyze eight
different 3D imbalance indices and their properties. Thirdly, we illustrate all
our findings using a data set of 63 bush beans. Moreover, we implemented all
our indices in the publicly available \textsf{R}-software package
\textsf{treeDbalance} accompanying this manuscript.
| [
{
"created": "Wed, 26 Jul 2023 23:03:15 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Dec 2023 17:34:30 GMT",
"version": "v2"
}
] | 2023-12-11 | [
[
"Kersting",
"Sophie J.",
""
],
[
"Kühn",
"A. Luise",
""
],
[
"Fischer",
"Mareike",
""
]
] | Imbalance in the 3D structure of plants can be an important indicator of insufficient light or nutrient supply, as well as excessive wind, (formerly present) physical barriers, neighbor or storm damage. It can also be a simple means to detect certain illnesses, since some diseases like the apple proliferation disease, an infection with the barley yellow dwarf virus or plant canker can cause abnormal growth, like \enquote{witches' brooms} or burls, resulting in a deviating 3D plant architecture. However, quantifying imbalance of plant growth is not an easy task, and it requires a mathematically sound 3D model of plants to which imbalance indices can be applied. Current models of plants are often based on stacked cylinders or voxel matrices and do not allow for measuring the degree of 3D imbalance in the branching structure of the whole plant. On the other hand, various imbalance indices are readily available for so-called graph-theoretical trees and are frequently used in areas like phylogenetics and computer science. While only some basic ideas of these indices can be transferred to the 3D setting, graph-theoretical trees are a logical foundation for 3D plant models that allow for elegant and natural imbalance measures. In this manuscript, our aim is thus threefold: We first present a new graph-theoretical 3D model of plants and discuss desirable properties of imbalance measures in the 3D setting. We then introduce and analyze eight different 3D imbalance indices and their properties. Thirdly, we illustrate all our findings using a data set of 63 bush beans. Moreover, we implemented all our indices in the publicly available \textsf{R}-software package \textsf{treeDbalance} accompanying this manuscript. |
1903.07526 | Miguel Ib\'a\~nez Berganza | Miguel Ib\'a\~nez-Berganza, Ambra Amico, Vittorio Loreto | Subjectivity and complexity of facial attractiveness | 15 pages, 5 figures. Supplementary information: 26 pages, 13 figures | Scientific Reports 9, Article number: 8364 (2019) | 10.1038/s41598-019-44655-9 | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The origin and meaning of facial beauty represent a longstanding puzzle.
Despite the profuse literature devoted to facial attractiveness, its very
nature, its determinants and the nature of inter-person differences remain
controversial issues. Here we tackle such questions proposing a novel
experimental approach in which human subjects, instead of rating natural faces,
are allowed to efficiently explore the face-space and 'sculpt' their favorite
variation of a reference facial image. The results reveal that different
subjects prefer distinguishable regions of the face-space, highlighting the
essential subjectivity of the phenomenon. The different sculpted facial vectors
exhibit strong correlations among pairs of facial distances, characterising the
underlying universality and complexity of the cognitive processes, and the
relative relevance and robustness of the different facial distances.
| [
{
"created": "Mon, 18 Mar 2019 16:08:16 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2019 12:42:42 GMT",
"version": "v2"
}
] | 2019-06-12 | [
[
"Ibáñez-Berganza",
"Miguel",
""
],
[
"Amico",
"Ambra",
""
],
[
"Loreto",
"Vittorio",
""
]
] | The origin and meaning of facial beauty represent a longstanding puzzle. Despite the profuse literature devoted to facial attractiveness, its very nature, its determinants and the nature of inter-person differences remain controversial issues. Here we tackle such questions proposing a novel experimental approach in which human subjects, instead of rating natural faces, are allowed to efficiently explore the face-space and 'sculpt' their favorite variation of a reference facial image. The results reveal that different subjects prefer distinguishable regions of the face-space, highlighting the essential subjectivity of the phenomenon. The different sculpted facial vectors exhibit strong correlations among pairs of facial distances, characterising the underlying universality and complexity of the cognitive processes, and the relative relevance and robustness of the different facial distances. |
2211.04020 | Zitong Jerry Wang | Zitong Jerry Wang, Alexander M. Xu, Aman Bhargava, Matt W. Thomson | Generating counterfactual explanations of tumor spatial proteomes to
discover effective strategies for enhancing immune infiltration | null | null | null | null | q-bio.QM cs.LG q-bio.GN q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The tumor microenvironment (TME) significantly impacts cancer prognosis due
to its immune composition. While therapies for altering the immune composition,
including immunotherapies, have shown exciting results for treating
hematological cancers, they are less effective for immunologically-cold, solid
tumors. Spatial omics technologies capture the spatial organization of the TME
with unprecedented molecular detail, revealing the relationship between immune
cell localization and molecular signals. Here, we formulate T-cell infiltration
prediction as a self-supervised machine learning problem and develop a
counterfactual optimization strategy that leverages large scale spatial omics
profiles of patient tumors to design tumor perturbations predicted to boost
T-cell infiltration. A convolutional neural network predicts T-cell
distribution based on signaling molecules in the TME provided by imaging mass
cytometry. Gradient-based counterfactual generation, then, computes
perturbations predicted to boost T-cell abundance. We apply our framework to
melanoma, colorectal cancer liver metastases, and breast tumor data,
discovering combinatorial perturbations predicted to support T-cell
infiltration across tens to hundreds of patients. This work presents a paradigm
for counterfactual-based prediction and design of cancer therapeutics using
spatial omics data.
| [
{
"created": "Tue, 8 Nov 2022 05:46:02 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Oct 2023 01:56:35 GMT",
"version": "v2"
}
] | 2023-10-17 | [
[
"Wang",
"Zitong Jerry",
""
],
[
"Xu",
"Alexander M.",
""
],
[
"Bhargava",
"Aman",
""
],
[
"Thomson",
"Matt W.",
""
]
] | The tumor microenvironment (TME) significantly impacts cancer prognosis due to its immune composition. While therapies for altering the immune composition, including immunotherapies, have shown exciting results for treating hematological cancers, they are less effective for immunologically-cold, solid tumors. Spatial omics technologies capture the spatial organization of the TME with unprecedented molecular detail, revealing the relationship between immune cell localization and molecular signals. Here, we formulate T-cell infiltration prediction as a self-supervised machine learning problem and develop a counterfactual optimization strategy that leverages large scale spatial omics profiles of patient tumors to design tumor perturbations predicted to boost T-cell infiltration. A convolutional neural network predicts T-cell distribution based on signaling molecules in the TME provided by imaging mass cytometry. Gradient-based counterfactual generation, then, computes perturbations predicted to boost T-cell abundance. We apply our framework to melanoma, colorectal cancer liver metastases, and breast tumor data, discovering combinatorial perturbations predicted to support T-cell infiltration across tens to hundreds of patients. This work presents a paradigm for counterfactual-based prediction and design of cancer therapeutics using spatial omics data. |
0808.2760 | Francois J. Nedelec | Dietrich Foethke, Tatyana Makushok, Damian Brunner and Francois
Nedelec | Force and length-dependent catastrophe activities explain interphase
microtubule organization in fission yeast | 25 pages, 3 figures | null | 10.1038/msb.2008.76 | null | q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The cytoskeleton is essential for the maintenance of cell morphology in
eukaryotes. In fission yeast for example, polarized growth sites are organized
by actin whereas microtubules (MT) acting upstream control where growth occurs
(La Carbona et al, 2006). Growth is limited to the cell poles when MTs undergo
catastrophes there and not elsewhere on the cortex (Brunner and Nurse, 2000).
Here we report that the modulation of MT dynamics by forces as observed in
vitro (Dogterom and Yurke, 1997; Janson et al, 2003) can quantitatively explain
the localization of MT catastrophes in S. pombe. However, we found that it is
necessary to add length-dependent catastrophe rates to make the model fully
consistent with other measured traits of MTs. This result demonstrates the
possibility that MTs together with associated proteins such as kinesins having
a depolymerization activity can reliably mark the tips of the cell.
| [
{
"created": "Wed, 20 Aug 2008 09:22:40 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Mar 2009 16:29:28 GMT",
"version": "v2"
}
] | 2009-03-30 | [
[
"Foethke",
"Dietrich",
""
],
[
"Makushok",
"Tatyana",
""
],
[
"Brunner",
"Damian",
""
],
[
"Nedelec",
"Francois",
""
]
] | The cytoskeleton is essential for the maintenance of cell morphology in eukaryotes. In fission yeast for example, polarized growth sites are organized by actin whereas microtubules (MT) acting upstream control where growth occurs (La Carbona et al, 2006). Growth is limited to the cell poles when MTs undergo catastrophes there and not elsewhere on the cortex (Brunner and Nurse, 2000). Here we report that the modulation of MT dynamics by forces as observed in vitro (Dogterom and Yurke, 1997; Janson et al, 2003) can quantitatively explain the localization of MT catastrophes in S. pombe. However, we found that it is necessary to add length-dependent catastrophe rates to make the model fully consistent with other measured traits of MTs. This result demonstrates the possibility that MTs together with associated proteins such as kinesins having a depolymerization activity can reliably mark the tips of the cell. |
1908.02670 | Pierre Haas | Pierre A. Haas, Nuno M. Oliveira, and Raymond E. Goldstein | Subpopulations and Stability in Microbial Communities | updated version with expanded introduction; 5 pages, 4 figures | Phys. Rev. Research 2, 022036 (2020) | 10.1103/PhysRevResearch.2.022036 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In microbial communities, each species often has multiple, distinct
phenotypes, but studies of ecological stability have largely ignored this
subpopulation structure. Here, we show that such implicit averaging over
phenotypes leads to incorrect linear stability results. We then analyze the
effect of phenotypic switching in detail in an asymptotic limit and partly
overturn classical stability paradigms: abundant phenotypic variation is
linearly destabilizing but, surprisingly, a rare phenotype such as bacterial
persisters has a stabilizing effect. Finally, we extend these results by
showing how phenotypic variation modifies the stability of the system to large
perturbations such as antibiotic treatments.
| [
{
"created": "Wed, 7 Aug 2019 15:05:35 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Dec 2019 16:28:02 GMT",
"version": "v2"
}
] | 2020-05-20 | [
[
"Haas",
"Pierre A.",
""
],
[
"Oliveira",
"Nuno M.",
""
],
[
"Goldstein",
"Raymond E.",
""
]
] | In microbial communities, each species often has multiple, distinct phenotypes, but studies of ecological stability have largely ignored this subpopulation structure. Here, we show that such implicit averaging over phenotypes leads to incorrect linear stability results. We then analyze the effect of phenotypic switching in detail in an asymptotic limit and partly overturn classical stability paradigms: abundant phenotypic variation is linearly destabilizing but, surprisingly, a rare phenotype such as bacterial persisters has a stabilizing effect. Finally, we extend these results by showing how phenotypic variation modifies the stability of the system to large perturbations such as antibiotic treatments. |
1312.0353 | Sutirth Dey | Sudipta Tung, Abhishek Mishra and Sutirth Dey | A Comparison of Six Methods for Stabilizing Population Dynamics | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last two decades, several methods have been proposed for stabilizing
the dynamics of biological populations. However, these methods have typically
been evaluated using different population dynamics models and in the context of
very different concepts of stability, which makes it difficult to compare their
relative efficiencies. Moreover, since the dynamics of populations are
dependent on the life-history of the species and its environment, it is
conceivable that the stabilizing effects of control methods would also be
affected by such factors, a complication that has typically not been
investigated. In this study we compare six different control methods with
respect to their efficiency at inducing a common level of enhancement (50%
increase) for two kinds of stability (constancy and persistence) under four
different life history/ environment combinations. Since these methods have been
analytically studied elsewhere, we focus on an intuitive understanding of
realistic simulations incorporating noise, extinction probability and lattice
effect. We show that for these six methods, even when the magnitude of
stabilization attained is the same, other aspects of the dynamics like
population size distribution can be very different. Consequently, correlated
aspects of stability, like the amount of persistence for a given degree of
constancy stability (and vice versa) or the corresponding effective population
size (a measure of resistance to genetic drift) vary widely among the methods.
Moreover, the number of organisms needed to be added or removed to attain
similar levels of stabilization also varies for these methods, a fact that has
economic implications. Finally, we compare the relative efficiency of these
methods through a composite index of various stability related measures. We
find that restocking to a constant lower threshold seems to be the optimal
method under most conditions.
| [
{
"created": "Mon, 2 Dec 2013 07:01:43 GMT",
"version": "v1"
}
] | 2013-12-03 | [
[
"Tung",
"Sudipta",
""
],
[
"Mishra",
"Abhishek",
""
],
[
"Dey",
"Sutirth",
""
]
] | Over the last two decades, several methods have been proposed for stabilizing the dynamics of biological populations. However, these methods have typically been evaluated using different population dynamics models and in the context of very different concepts of stability, which makes it difficult to compare their relative efficiencies. Moreover, since the dynamics of populations are dependent on the life-history of the species and its environment, it is conceivable that the stabilizing effects of control methods would also be affected by such factors, a complication that has typically not been investigated. In this study we compare six different control methods with respect to their efficiency at inducing a common level of enhancement (50% increase) for two kinds of stability (constancy and persistence) under four different life history/ environment combinations. Since these methods have been analytically studied elsewhere, we focus on an intuitive understanding of realistic simulations incorporating noise, extinction probability and lattice effect. We show that for these six methods, even when the magnitude of stabilization attained is the same, other aspects of the dynamics like population size distribution can be very different. Consequently, correlated aspects of stability, like the amount of persistence for a given degree of constancy stability (and vice versa) or the corresponding effective population size (a measure of resistance to genetic drift) vary widely among the methods. Moreover, the number of organisms needed to be added or removed to attain similar levels of stabilization also varies for these methods, a fact that has economic implications. Finally, we compare the relative efficiency of these methods through a composite index of various stability related measures. We find that restocking to a constant lower threshold seems to be the optimal method under most conditions. |
1504.00556 | Karel B\v{r}inda | Karel B\v{r}inda, Valentina Boeva, Gregory Kucherov | RNF: a general framework to evaluate NGS read mappers | null | Bioinformatics 32.1 (2016): 136-139 | 10.1093/bioinformatics/btv524 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aligning reads to a reference sequence is a fundamental step in numerous
bioinformatics pipelines. As a consequence, the sensitivity and precision of
the mapping tool, applied with certain parameters to certain data, can
critically affect the accuracy of produced results (e.g., in variant calling
applications). Therefore, there has been an increasing demand for methods for
comparing mappers and for measuring the effects of their parameters.
Read simulators combined with alignment evaluation tools provide the most
straightforward way to evaluate and compare mappers. Simulation of reads is
accompanied by information about their positions in the source genome. This
information is then used to evaluate alignments produced by the mapper.
Finally, reports containing statistics of successful read alignments are
created.
In the absence of standards for encoding read origins, every evaluation tool
has to be made explicitly compatible with the simulator used to generate reads.
To overcome this obstacle, we have created a generic format RNF (Read Naming
Format) for assigning read names with encoded information about original
positions.
Furthermore, we have developed an associated software package RNF containing
two principal components. MIShmash applies one of the popular read simulating
tools (among DwgSim, Art, Mason, CuReSim etc.) and transforms the generated
reads into RNF format. LAVEnder then evaluates a given read mapper using
simulated reads in RNF format. Special attention is paid to mapping qualities,
which serve for parametrization of ROC curves, and to evaluation of the effect
of read sample contamination.
| [
{
"created": "Thu, 2 Apr 2015 13:41:46 GMT",
"version": "v1"
}
] | 2016-03-17 | [
[
"Břinda",
"Karel",
""
],
[
"Boeva",
"Valentina",
""
],
[
"Kucherov",
"Gregory",
""
]
] | Aligning reads to a reference sequence is a fundamental step in numerous bioinformatics pipelines. As a consequence, the sensitivity and precision of the mapping tool, applied with certain parameters to certain data, can critically affect the accuracy of produced results (e.g., in variant calling applications). Therefore, there has been an increasing demand for methods for comparing mappers and for measuring the effects of their parameters. Read simulators combined with alignment evaluation tools provide the most straightforward way to evaluate and compare mappers. Simulation of reads is accompanied by information about their positions in the source genome. This information is then used to evaluate alignments produced by the mapper. Finally, reports containing statistics of successful read alignments are created. In the absence of standards for encoding read origins, every evaluation tool has to be made explicitly compatible with the simulator used to generate reads. To overcome this obstacle, we have created a generic format RNF (Read Naming Format) for assigning read names with encoded information about original positions. Furthermore, we have developed an associated software package RNF containing two principal components. MIShmash applies one of the popular read simulating tools (among DwgSim, Art, Mason, CuReSim etc.) and transforms the generated reads into RNF format. LAVEnder then evaluates a given read mapper using simulated reads in RNF format. Special attention is paid to mapping qualities, which serve for parametrization of ROC curves, and to evaluation of the effect of read sample contamination. |
2205.08054 | Yoichi Watanabe | Yoichi Watanabe, A. Biswas, K. Rangarajan, G. Rath, and N. Gopishankar | Classification of anatomic structures in head and neck by CT-based
radiomics | null | null | null | null | q-bio.QM eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background and Purpose: Radiomics features are used to identify disease types
and predict therapy outcomes. However, how the radiomics features are different
among different anatomical structures has never been investigated. Hence, we
analyzed the radiomics features of 22 anatomical structures in the head and
neck area in CT images. Furthermore, we studied whether CT radiomics can
classify anatomical structures of the head and neck using unsupervised
machine-learning techniques. Materials and methods: We obtained IMRT/VMAT
treatment planning data from 36 patients treated for head and neck cancers in a
single institution. There were 1357 contours of more than 22 anatomical
structures drawn on planning CTs. We calculated 174 radiomics features using
the SIBEX program. First, we tested whether the radiomics features of
anatomical structures were unique enough to classify all contours into 22
groups. We then developed a two-stage clustering technique to classify 22
anatomic structures into sub-groups with similar physiological or biological
characteristics. Results: The heatmap of 174 radiomics features of 22
anatomical structures showed a distinct difference among tumors and other
healthy structures. Radiomics features have allowed us to identify the eyes,
lens, submandibular, pituitary glands, and thyroids with over 90% accuracy. The
two-stage clustering of 22 structures resulted in six subgroups, which shared
common characteristics such as fatty and bony tissues. Conclusions: We have
shown that anatomical structures in head and neck tumors have distinguishable
radiomics features. We could observe similarities of features among subgroups
of the structures. The results suggest that CT radiomics can help distinguish
the biological characteristics of head and neck lesions.
| [
{
"created": "Tue, 17 May 2022 01:59:23 GMT",
"version": "v1"
}
] | 2022-05-18 | [
[
"Watanabe",
"Yoichi",
""
],
[
"Biswas",
"A.",
""
],
[
"Rangarajan",
"K.",
""
],
[
"Rath",
"G.",
""
],
[
"Gopishankar",
"N.",
""
]
] | Background and Purpose: Radiomics features are used to identify disease types and predict therapy outcomes. However, how the radiomics features are different among different anatomical structures has never been investigated. Hence, we analyzed the radiomics features of 22 anatomical structures in the head and neck area in CT images. Furthermore, we studied whether CT radiomics can classify anatomical structures of the head and neck using unsupervised machine-learning techniques. Materials and methods: We obtained IMRT/VMAT treatment planning data from 36 patients treated for head and neck cancers in a single institution. There were 1357 contours of more than 22 anatomical structures drawn on planning CTs. We calculated 174 radiomics features using the SIBEX program. First, we tested whether the radiomics features of anatomical structures were unique enough to classify all contours into 22 groups. We then developed a two-stage clustering technique to classify 22 anatomic structures into sub-groups with similar physiological or biological characteristics. Results: The heatmap of 174 radiomics features of 22 anatomical structures showed a distinct difference among tumors and other healthy structures. Radiomics features have allowed us to identify the eyes, lens, submandibular, pituitary glands, and thyroids with over 90% accuracy. The two-stage clustering of 22 structures resulted in six subgroups, which shared common characteristics such as fatty and bony tissues. Conclusions: We have shown that anatomical structures in head and neck tumors have distinguishable radiomics features. We could observe similarities of features among subgroups of the structures. The results suggest that CT radiomics can help distinguish the biological characteristics of head and neck lesions. |
2107.14139 | Sheshank Shankar | Chirag Samal, Kasia Jakimowicz, Krishnendu Dasgupta, Aniket
Vashishtha, Francisco O., Arunakiry Natarajan, Haris Nazir, Alluri Siddhartha
Varma, Tejal Dahake, Amitesh Anand Pandey, Ishaan Singh, John Sangyeob Kim,
Mehrab Singh Gill, Saurish Srivastava, Orna Mukhopadhyay, Parth Patwa, Qamil
Mirza, Sualeha Irshad, Sheshank Shankar, Rohan Iyer, Rohan Sukumaran, Ashley
Mehra, Anshuman Sharma, Abhishek Singh, Maurizio Arseni, Sethuraman T V,
Saras Agrawal, Vivek Sharma, and Ramesh Raskar | Vaccination Worldwide: Strategies, Distribution and Challenges | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Coronavirus 2019 (Covid-19) pandemic caused by the SARS-CoV-2 virus
represents an unprecedented crisis for our planet. It is a bane of the
über-connected world that we live in that this virus has affected almost all
countries and caused mortality and economic upheaval at a scale whose effects
are going to be felt for generations to come. While we can all be buoyed at the
pace at which vaccines have been developed and brought to market, there are
still challenges ahead for all countries to get their populations vaccinated
equitably and effectively. This paper provides an overview of ongoing
immunization efforts in various countries. In this early draft, we have
identified a few key factors that we use to review different countries' current
COVID-19 immunization strategies and their strengths and draw conclusions so
that policymakers worldwide can learn from them. Our paper focuses on processes
related to vaccine approval, allocation and prioritization, distribution
strategies, population to vaccine ratio, vaccination governance, accessibility
and use of digital solutions, and government policies. The statistics and
numbers are dated as per the draft date [June 24th, 2021].
| [
{
"created": "Wed, 21 Jul 2021 07:32:18 GMT",
"version": "v1"
}
] | 2021-07-30 | [
[
"Samal",
"Chirag",
""
],
[
"Jakimowicz",
"Kasia",
""
],
[
"Dasgupta",
"Krishnendu",
""
],
[
"Vashishtha",
"Aniket",
""
],
[
"O.",
"Francisco",
""
],
[
"Natarajan",
"Arunakiry",
""
],
[
"Nazir",
"Haris",
""
],
[
"Varma",
"Alluri Siddhartha",
""
],
[
"Dahake",
"Tejal",
""
],
[
"Pandey",
"Amitesh Anand",
""
],
[
"Singh",
"Ishaan",
""
],
[
"Kim",
"John Sangyeob",
""
],
[
"Gill",
"Mehrab Singh",
""
],
[
"Srivastava",
"Saurish",
""
],
[
"Mukhopadhyay",
"Orna",
""
],
[
"Patwa",
"Parth",
""
],
[
"Mirza",
"Qamil",
""
],
[
"Irshad",
"Sualeha",
""
],
[
"Shankar",
"Sheshank",
""
],
[
"Iyer",
"Rohan",
""
],
[
"Sukumaran",
"Rohan",
""
],
[
"Mehra",
"Ashley",
""
],
[
"Sharma",
"Anshuman",
""
],
[
"Singh",
"Abhishek",
""
],
[
"Arseni",
"Maurizio",
""
],
[
"T",
"Sethuraman",
"V"
],
[
"Agrawal",
"Saras",
""
],
[
"Sharma",
"Vivek",
""
],
[
"Raskar",
"Ramesh",
""
]
] | The Coronavirus 2019 (Covid-19) pandemic caused by the SARS-CoV-2 virus represents an unprecedented crisis for our planet. It is a bane of the "uber connected" world that we live in that this virus has affected almost all countries and caused mortality and economic upheaval at a scale whose effects are going to be felt for generations to come. While we can all be buoyed at the pace at which vaccines have been developed and brought to market, there are still challenges ahead for all countries to get their populations vaccinated equitably and effectively. This paper provides an overview of ongoing immunization efforts in various countries. In this early draft, we have identified a few key factors that we use to review different countries' current COVID-19 immunization strategies and their strengths and draw conclusions so that policymakers worldwide can learn from them. Our paper focuses on processes related to vaccine approval, allocation and prioritization, distribution strategies, population to vaccine ratio, vaccination governance, accessibility and use of digital solutions, and government policies. The statistics and numbers are dated as per the draft date [June 24th, 2021]. |
1703.09347 | Keith Hayton | Keith Hayton, Dimitrios Moirogiannis, Marcelo Magnasco | Adaptive Scales of Spatial Integration and Response Latencies in a
Critically-Balanced Model of the Primary Visual Cortex | null | null | 10.1371/journal.pone.0196566 | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain processes visual inputs having structure over a large range of
spatial scales. The precise mechanisms or algorithms used by the brain to
achieve this feat are largely unknown and an open problem in visual
neuroscience. In particular, the spatial extent in visual space over which
primary visual cortex (V1) performs evidence integration has been shown to
change as a function of contrast and other visual parameters, thus adapting
scale in visual space in an input-dependent manner. We demonstrate that a
simple dynamical mechanism---dynamical criticality---can simultaneously account
for the well-documented input-dependence characteristics of three properties of
V1: scales of integration in visuotopic space, extents of lateral integration
on the cortical surface, and response latencies.
| [
{
"created": "Mon, 27 Mar 2017 23:42:54 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2017 21:46:24 GMT",
"version": "v2"
},
{
"created": "Wed, 21 Mar 2018 11:35:37 GMT",
"version": "v3"
}
] | 2018-07-04 | [
[
"Hayton",
"Keith",
""
],
[
"Moirogiannis",
"Dimitrios",
""
],
[
"Magnasco",
"Marcelo",
""
]
] | The brain processes visual inputs having structure over a large range of spatial scales. The precise mechanisms or algorithms used by the brain to achieve this feat are largely unknown and an open problem in visual neuroscience. In particular, the spatial extent in visual space over which primary visual cortex (V1) performs evidence integration has been shown to change as a function of contrast and other visual parameters, thus adapting scale in visual space in an input-dependent manner. We demonstrate that a simple dynamical mechanism---dynamical criticality---can simultaneously account for the well-documented input-dependence characteristics of three properties of V1: scales of integration in visuotopic space, extents of lateral integration on the cortical surface, and response latencies. |
2306.03218 | George Stepaniants | Marie Breeur, George Stepaniants, Pekka Keski-Rahkonen, Philippe
Rigollet, and Vivian Viallon | Optimal transport for automatic alignment of untargeted metabolomic data | 47 pages, 16 figures | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Untargeted metabolomic profiling through liquid chromatography-mass
spectrometry (LC-MS) measures a vast array of metabolites within biospecimens,
advancing drug development, disease diagnosis, and risk prediction. However,
the low throughput of LC-MS poses a major challenge for biomarker discovery,
annotation, and experimental comparison, necessitating the merging of multiple
datasets. Current data pooling methods encounter practical limitations due to
their vulnerability to data variations and hyperparameter dependence. Here we
introduce GromovMatcher, a flexible and user-friendly algorithm that
automatically combines LC-MS datasets using optimal transport. By capitalizing
on feature intensity correlation structures, GromovMatcher delivers superior
alignment accuracy and robustness compared to existing approaches. This
algorithm scales to thousands of features requiring minimal hyperparameter
tuning. Manually curated datasets for validating alignment algorithms are
limited in the field of untargeted metabolomics, and hence we develop a dataset
split procedure to generate pairs of validation datasets to test the alignments
produced by GromovMatcher and other methods. Applying our method to
experimental patient studies of liver and pancreatic cancer, we discover shared
metabolic features related to patient alcohol intake, demonstrating how
GromovMatcher facilitates the search for biomarkers associated with lifestyle
risk factors linked to several cancer types.
| [
{
"created": "Mon, 5 Jun 2023 20:08:19 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Sep 2023 17:36:51 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Feb 2024 17:15:52 GMT",
"version": "v3"
},
{
"created": "Fri, 24 May 2024 13:16:49 GMT",
"version": "v4"
}
] | 2024-05-27 | [
[
"Breeur",
"Marie",
""
],
[
"Stepaniants",
"George",
""
],
[
"Keski-Rahkonen",
"Pekka",
""
],
[
"Rigollet",
"Philippe",
""
],
[
"Viallon",
"Vivian",
""
]
] | Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and hyperparameter dependence. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches. This algorithm scales to thousands of features requiring minimal hyperparameter tuning. Manually curated datasets for validating alignment algorithms are limited in the field of untargeted metabolomics, and hence we develop a dataset split procedure to generate pairs of validation datasets to test the alignments produced by GromovMatcher and other methods. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types. |
1304.3146 | Kieran Smallbone | Kieran Smallbone | Standardized network reconstruction of CHO cell metabolism | arXiv admin note: substantial text overlap with arXiv:1304.2960 | null | null | null | q-bio.MN | http://creativecommons.org/licenses/publicdomain/ | We have created a genome-scale network reconstruction of Chinese hamster
ovary (CHO) cell metabolism. Existing reconstructions were improved in terms of
annotation standards, to facilitate their subsequent use in dynamic modelling.
The resultant network is available from ChoNet (http://cho.sf.net/).
| [
{
"created": "Tue, 9 Apr 2013 09:09:16 GMT",
"version": "v1"
}
] | 2013-04-12 | [
[
"Smallbone",
"Kieran",
""
]
] | We have created a genome-scale network reconstruction of Chinese hamster ovary (CHO) cell metabolism. Existing reconstructions were improved in terms of annotation standards, to facilitate their subsequent use in dynamic modelling. The resultant network is available from ChoNet (http://cho.sf.net/). |
0802.2620 | Kasper Peeters | Kasper Peeters and Anne Taormina | Dynamics of icosahedral viruses: what does Viral Tiling Theory teach us? | 10 pages, contribution to the proceedings of the `Second Mathematical
Virology Workshop', Edinburgh (6-10 August 2007) | Computational and Mathematical Methods in Medicine, 9(03-04),
2008, 211 - 220. | 10.1080/17486700802168270 | SPIN-08/09, ITP-UU-08/09, DCPT/08/09 | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a top-down approach to the study of the dynamics of icosahedral
virus capsids, in which each protein is approximated by a point mass. Although
this represents a rather crude coarse-graining, we argue that it highlights
several generic features of vibrational spectra which have been overlooked so
far. We furthermore discuss the consequences of approximate inversion symmetry
as well as the role played by Viral Tiling Theory in the study of virus capsid
vibrations.
| [
{
"created": "Tue, 19 Feb 2008 09:59:35 GMT",
"version": "v1"
}
] | 2008-08-20 | [
[
"Peeters",
"Kasper",
""
],
[
"Taormina",
"Anne",
""
]
] | We present a top-down approach to the study of the dynamics of icosahedral virus capsids, in which each protein is approximated by a point mass. Although this represents a rather crude coarse-graining, we argue that it highlights several generic features of vibrational spectra which have been overlooked so far. We furthermore discuss the consequences of approximate inversion symmetry as well as the role played by Viral Tiling Theory in the study of virus capsid vibrations. |
2302.06677 | Yena Han | Yena Han, Tomaso Poggio, Brian Cheung | System identification of neural systems: If we got it right, would we
know? | null | null | null | null | q-bio.NC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Artificial neural networks are being proposed as models of parts of the
brain. The networks are compared to recordings of biological neurons, and good
performance in reproducing neural responses is considered to support the
model's validity. A key question is how much this system identification
approach tells us about brain computation. Does it validate one model
architecture over another? We evaluate the most commonly used comparison
techniques, such as a linear encoding model and centered kernel alignment, to
correctly identify a model by replacing brain recordings with known ground
truth models. System identification performance is quite variable; it also
depends significantly on factors independent of the ground truth architecture,
such as stimuli images. In addition, we show the limitations of using
functional similarity scores in identifying higher-level architectural motifs.
| [
{
"created": "Mon, 13 Feb 2023 20:32:37 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Aug 2023 21:37:02 GMT",
"version": "v2"
}
] | 2023-09-01 | [
[
"Han",
"Yena",
""
],
[
"Poggio",
"Tomaso",
""
],
[
"Cheung",
"Brian",
""
]
] | Artificial neural networks are being proposed as models of parts of the brain. The networks are compared to recordings of biological neurons, and good performance in reproducing neural responses is considered to support the model's validity. A key question is how much this system identification approach tells us about brain computation. Does it validate one model architecture over another? We evaluate the most commonly used comparison techniques, such as a linear encoding model and centered kernel alignment, to correctly identify a model by replacing brain recordings with known ground truth models. System identification performance is quite variable; it also depends significantly on factors independent of the ground truth architecture, such as stimuli images. In addition, we show the limitations of using functional similarity scores in identifying higher-level architectural motifs. |
2407.16584 | Adam Hospital Gasch | Rommie Amaro, Johan {\AA}qvist, Ivet Bahar, Federica Battistini, Adam
Bellaiche, Daniel Beltran, Philip C. Biggin, Massimiliano Bonomi, Gregory R.
Bowman, Richard Bryce, Giovanni Bussi, Paolo Carloni, David Case, Andrea
Cavalli, Chie-En A. Chang, Thomas E. Cheatham III, Margaret S. Cheung, Cris
Chipot, Lillian T. Chong, Preeti Choudhary, Cecilia Clementi, Rosana
Collepardo-Guevara, Peter Coveney, T. Daniel Crawford, Matteo Dal Peraro,
Bert de Groot, Lucie Delemotte, Marco De Vivo, Jonathan Essex, Franca
Fraternali, Jiali Gao, Josep Llu\'is Gelp\'i, Francesco Luigi Gervasio,
Fernando Danilo Gonzalez-Nilo, Helmut Grubm\"uller, Marina Guenza, Horacio V.
Guzman, Sarah Harris, Teresa Head-Gordon, Rigoberto Hernandez, Adam Hospital,
Niu Huang, Xuhui Huang, Gerhard Hummer, Javier Iglesias-Fern\'andez, Jan H.
Jensen, Shantenu Jha, Wanting Jiao, Shina Caroline Lynn Kamerlin, Syma
Khalid, Charles Laughton, Michael Levitt, Vittorio Limongelli, Erik Lindahl,
Kersten Lindorff-Larsen, Sharon Loverde, Magnus Lundborg, Yun Lina Luo,
Francisco Javier Luque, Charlotte I. Lynch, Alexander MacKerell, Alessandra
Magistrato, Siewert J. Marrink, Hugh Martin, J. Andrew McCammon, Kenneth
Merz, Vicent Moliner, Adrian Mulholland, Sohail Murad, Athi N. Naganathan,
Shikha Nangia, Frank Noe, Agnes Noy, Julianna Ol\'ah, Megan O'Mara, Mary Jo
Ondrechen, Jos\'e N. Onuchic, Alexey Onufriev, Silvia Osuna, Anna R.
Panchenko, Sergio Pantano, Michele Parrinello, Alberto Perez, Tomas
Perez-Acle, Juan R. Perilla, B. Montgomery Pettitt, Adriana Pietropalo,
Jean-Philip Piquemal, Adolfo Poma, Matej Praprotnik, Maria J. Ramos, Pengyu
Ren, Nathalie Reuter, Adrian Roitberg, Edina Rosta, Carme Rovira, Benoit
Roux, Ursula R\"othlisberger, Karissa Y. Sanbonmatsu, Tamar Schlick, Alexey
K. Shaytan, Carlos Simmerling, Jeremy C. Smith, Yuji Sugita, Katarzyna
\'Swiderek, Makoto Taiji, Peng Tao, Julian Tirado-Rives, Inaki Tun\'on, Marc
W. Van Der Kamp, David Van der Spoel, Sameer Velankar, Gregory A. Voth,
Rebecca Wade, Ariel Warshel, Valerie Vaissier Welborn, Stacey Wetmore, Chung
F. Wong, Lee-Wei Yang, Martin Zacharias, Modesto Orozco | The need to implement FAIR principles in biomolecular simulations | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This letter illustrates the opinion of the molecular dynamics (MD) community
on the need to adopt a new FAIR paradigm for the use of molecular simulations.
It highlights the necessity of a collaborative effort to create, establish, and
sustain a database that allows findability, accessibility, interoperability,
and reusability of molecular dynamics simulation data. Such a development would
democratize the field and significantly improve the impact of MD simulations on
life science research. This will transform our working paradigm, pushing the
field to a new frontier. We invite you to support our initiative at the MDDB
community (https://mddbr.eu/community/).
| [
{
"created": "Tue, 23 Jul 2024 15:38:39 GMT",
"version": "v1"
}
] | 2024-07-24 | [
[
"Amaro",
"Rommie",
""
],
[
"Åqvist",
"Johan",
""
],
[
"Bahar",
"Ivet",
""
],
[
"Battistini",
"Federica",
""
],
[
"Bellaiche",
"Adam",
""
],
[
"Beltran",
"Daniel",
""
],
[
"Biggin",
"Philip C.",
""
],
[
"Bonomi",
"Massimiliano",
""
],
[
"Bowman",
"Gregory R.",
""
],
[
"Bryce",
"Richard",
""
],
[
"Bussi",
"Giovanni",
""
],
[
"Carloni",
"Paolo",
""
],
[
"Case",
"David",
""
],
[
"Cavalli",
"Andrea",
""
],
[
"Chang",
"Chie-En A.",
""
],
[
"Cheatham",
"Thomas E.",
"III"
],
[
"Cheung",
"Margaret S.",
""
],
[
"Chipot",
"Cris",
""
],
[
"Chong",
"Lillian T.",
""
],
[
"Choudhary",
"Preeti",
""
],
[
"Clementi",
"Cecilia",
""
],
[
"Collepardo-Guevara",
"Rosana",
""
],
[
"Coveney",
"Peter",
""
],
[
"Crawford",
"T. Daniel",
""
],
[
"Peraro",
"Matteo Dal",
""
],
[
"de Groot",
"Bert",
""
],
[
"Delemotte",
"Lucie",
""
],
[
"De Vivo",
"Marco",
""
],
[
"Essex",
"Jonathan",
""
],
[
"Fraternali",
"Franca",
""
],
[
"Gao",
"Jiali",
""
],
[
"Gelpí",
"Josep Lluís",
""
],
[
"Gervasio",
"Francesco Luigi",
""
],
[
"Gonzalez-Nilo",
"Fernando Danilo",
""
],
[
"Grubmüller",
"Helmut",
""
],
[
"Guenza",
"Marina",
""
],
[
"Guzman",
"Horacio V.",
""
],
[
"Harris",
"Sarah",
""
],
[
"Head-Gordon",
"Teresa",
""
],
[
"Hernandez",
"Rigoberto",
""
],
[
"Hospital",
"Adam",
""
],
[
"Huang",
"Niu",
""
],
[
"Huang",
"Xuhui",
""
],
[
"Hummer",
"Gerhard",
""
],
[
"Iglesias-Fernández",
"Javier",
""
],
[
"Jensen",
"Jan H.",
""
],
[
"Jha",
"Shantenu",
""
],
[
"Jiao",
"Wanting",
""
],
[
"Kamerlin",
"Shina Caroline Lynn",
""
],
[
"Khalid",
"Syma",
""
],
[
"Laughton",
"Charles",
""
],
[
"Levitt",
"Michael",
""
],
[
"Limongelli",
"Vittorio",
""
],
[
"Lindahl",
"Erik",
""
],
[
"Lindorff-Larsen",
"Kersten",
""
],
[
"Loverde",
"Sharon",
""
],
[
"Lundborg",
"Magnus",
""
],
[
"Luo",
"Yun Lina",
""
],
[
"Luque",
"Francisco Javier",
""
],
[
"Lynch",
"Charlotte I.",
""
],
[
"MacKerell",
"Alexander",
""
],
[
"Magistrato",
"Alessandra",
""
],
[
"Marrink",
"Siewert J.",
""
],
[
"Martin",
"Hugh",
""
],
[
"McCammon",
"J. Andrew",
""
],
[
"Merz",
"Kenneth",
""
],
[
"Moliner",
"Vicent",
""
],
[
"Mulholland",
"Adrian",
""
],
[
"Murad",
"Sohail",
""
],
[
"Naganathan",
"Athi N.",
""
],
[
"Nangia",
"Shikha",
""
],
[
"Noe",
"Frank",
""
],
[
"Noy",
"Agnes",
""
],
[
"Oláh",
"Julianna",
""
],
[
"O'Mara",
"Megan",
""
],
[
"Ondrechen",
"Mary Jo",
""
],
[
"Onuchic",
"José N.",
""
],
[
"Onufriev",
"Alexey",
""
],
[
"Osuna",
"Silvia",
""
],
[
"Panchenko",
"Anna R.",
""
],
[
"Pantano",
"Sergio",
""
],
[
"Parrinello",
"Michele",
""
],
[
"Perez",
"Alberto",
""
],
[
"Perez-Acle",
"Tomas",
""
],
[
"Perilla",
"Juan R.",
""
],
[
"Pettitt",
"B. Montgomery",
""
],
[
"Pietropalo",
"Adriana",
""
],
[
"Piquemal",
"Jean-Philip",
""
],
[
"Poma",
"Adolfo",
""
],
[
"Praprotnik",
"Matej",
""
],
[
"Ramos",
"Maria J.",
""
],
[
"Ren",
"Pengyu",
""
],
[
"Reuter",
"Nathalie",
""
],
[
"Roitberg",
"Adrian",
""
],
[
"Rosta",
"Edina",
""
],
[
"Rovira",
"Carme",
""
],
[
"Roux",
"Benoit",
""
],
[
"Röthlisberger",
"Ursula",
""
],
[
"Sanbonmatsu",
"Karissa Y.",
""
],
[
"Schlick",
"Tamar",
""
],
[
"Shaytan",
"Alexey K.",
""
],
[
"Simmerling",
"Carlos",
""
],
[
"Smith",
"Jeremy C.",
""
],
[
"Sugita",
"Yuji",
""
],
[
"Świderek",
"Katarzyna",
""
],
[
"Taiji",
"Makoto",
""
],
[
"Tao",
"Peng",
""
],
[
"Tirado-Rives",
"Julian",
""
],
[
"Tunón",
"Inaki",
""
],
[
"Van Der Kamp",
"Marc W.",
""
],
[
"Van der Spoel",
"David",
""
],
[
"Velankar",
"Sameer",
""
],
[
"Voth",
"Gregory A.",
""
],
[
"Wade",
"Rebecca",
""
],
[
"Warshel",
"Ariel",
""
],
[
"Welborn",
"Valerie Vaissier",
""
],
[
"Wetmore",
"Stacey",
""
],
[
"Wong",
"Chung F.",
""
],
[
"Yang",
"Lee-Wei",
""
],
[
"Zacharias",
"Martin",
""
],
[
"Orozco",
"Modesto",
""
]
] | This letter illustrates the opinion of the molecular dynamics (MD) community on the need to adopt a new FAIR paradigm for the use of molecular simulations. It highlights the necessity of a collaborative effort to create, establish, and sustain a database that allows findability, accessibility, interoperability, and reusability of molecular dynamics simulation data. Such a development would democratize the field and significantly improve the impact of MD simulations on life science research. This will transform our working paradigm, pushing the field to a new frontier. We invite you to support our initiative at the MDDB community (https://mddbr.eu/community/). |
1707.00759 | Dante Chialvo | Ignacio Cifre, Mahdi Zarepour, Silvina G Horovitz, Sergio Cannas,
Dante R Chialvo | On why a few points suffice to describe spatiotemporal large-scale brain
dynamics | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A heuristic signal processing scheme recently introduced shows how brain
signals can be efficiently represented by a sparse spatiotemporal point
process. The approach has already been validated for different relevant
conditions, demonstrating that it preserves and compresses a surprisingly
large fraction of the signal information. In this paper the conditions for
such compression to succeed are investigated, as well as the underlying
reasons for such good performance. The results show that the key lies in the
correlation properties of the time series under consideration. It is found
that signals with long range correlations are particularly suitable for this
type of compression, where inflection points contain most of the information.
Since this type of correlation is ubiquitous in signals throughout nature,
including music, weather patterns, biological signals, etc., we expect this
type of approach to be a useful tool for their analysis.
| [
{
"created": "Mon, 3 Jul 2017 21:12:33 GMT",
"version": "v1"
}
] | 2017-07-05 | [
[
"Cifre",
"Ignacio",
""
],
[
"Zarepour",
"Mahdi",
""
],
[
"Horovitz",
"Silvina G",
""
],
[
"Cannas",
"Sergio",
""
],
[
"Chialvo",
"Dante R",
""
]
] | A heuristic signal processing scheme recently introduced shows how brain signals can be efficiently represented by a sparse spatiotemporal point process. The approach has already been validated for different relevant conditions, demonstrating that it preserves and compresses a surprisingly large fraction of the signal information. In this paper the conditions for such compression to succeed are investigated, as well as the underlying reasons for such good performance. The results show that the key lies in the correlation properties of the time series under consideration. It is found that signals with long range correlations are particularly suitable for this type of compression, where inflection points contain most of the information. Since this type of correlation is ubiquitous in signals throughout nature, including music, weather patterns, biological signals, etc., we expect this type of approach to be a useful tool for their analysis. |
1406.3284 | Charles Cadieu | Charles F. Cadieu, Ha Hong, Daniel L. K. Yamins, Nicolas Pinto, Diego
Ardila, Ethan A. Solomon, Najib J. Majaj, James J. DiCarlo | Deep Neural Networks Rival the Representation of Primate IT Cortex for
Core Visual Object Recognition | 35 pages, 12 figures, extends and expands upon arXiv:1301.3530 | null | 10.1371/journal.pcbi.1003963 | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primate visual system achieves remarkable visual object recognition
performance even in brief presentations and under changes to object exemplar,
geometric transformations, and background variation (a.k.a. core visual object
recognition). This remarkable performance is mediated by the representation
formed in inferior temporal (IT) cortex. In parallel, recent advances in
machine learning have led to ever higher performing models of object
recognition using artificial deep neural networks (DNNs). It remains unclear,
however, whether the representational performance of DNNs rivals that of the
brain. To accurately produce such a comparison, a major difficulty has been a
unifying metric that accounts for experimental limitations such as the amount
of noise, the number of neural recording sites, and the number of trials, and
computational limitations such as the complexity of the decoding classifier and
the number of classifier training examples. In this work we perform a direct
comparison that corrects for these experimental limitations and computational
considerations. As part of our methodology, we propose an extension of "kernel
analysis" that measures the generalization accuracy as a function of
representational complexity. Our evaluations show that, unlike previous
bio-inspired models, the latest DNNs rival the representational performance of
IT cortex on this visual object recognition task. Furthermore, we show that
models that perform well on measures of representational performance also
perform well on measures of representational similarity to IT and on measures
of predicting individual IT multi-unit responses. Whether these DNNs rely on
computational mechanisms similar to the primate visual system is yet to be
determined, but, unlike all previous bio-inspired models, that possibility
cannot be ruled out merely on representational performance grounds.
| [
{
"created": "Thu, 12 Jun 2014 16:38:07 GMT",
"version": "v1"
}
] | 2015-06-19 | [
[
"Cadieu",
"Charles F.",
""
],
[
"Hong",
"Ha",
""
],
[
"Yamins",
"Daniel L. K.",
""
],
[
"Pinto",
"Nicolas",
""
],
[
"Ardila",
"Diego",
""
],
[
"Solomon",
"Ethan A.",
""
],
[
"Majaj",
"Najib J.",
""
],
[
"DiCarlo",
"James J.",
""
]
] | The primate visual system achieves remarkable visual object recognition performance even in brief presentations and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations such as the complexity of the decoding classifier and the number of classifier training examples. In this work we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. |
0908.2290 | David Saakian | David B. Saakian, Jose F. Fontanari | The optimization and shock waves in evolution dynamics | 6 pages | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the optimal dynamics in infinite population evolution models
with a general symmetric fitness landscape. The search for optimal evolution
trajectories is complicated by sharp transitions (like shock waves) in the
evolution dynamics with smooth fitness landscapes, which exist even in the
case of the popular quadratic fitness. We found exact analytical solutions
for the discontinuous dynamics in the large genome length limit.
We found the optimal mutation rates for a fixed fitness landscape. The
single peak fitness landscape gives the fastest dynamics to send the vast
majority of the population from the initial sequence to the neighborhood of
the final sequence.
| [
{
"created": "Mon, 17 Aug 2009 06:31:51 GMT",
"version": "v1"
}
] | 2009-08-18 | [
[
"Saakian",
"David B.",
""
],
[
"Fontanari",
"Jose F.",
""
]
] | We consider the optimal dynamics in infinite population evolution models with a general symmetric fitness landscape. The search for optimal evolution trajectories is complicated by sharp transitions (like shock waves) in the evolution dynamics with smooth fitness landscapes, which exist even in the case of the popular quadratic fitness. We found exact analytical solutions for the discontinuous dynamics in the large genome length limit. We found the optimal mutation rates for a fixed fitness landscape. The single peak fitness landscape gives the fastest dynamics to send the vast majority of the population from the initial sequence to the neighborhood of the final sequence. |
1311.1910 | Jan P. Radomski Dr. | Jan P. Radomski, Piotr P{\l}o\'nski, W{\l}odzimierz Zag\'orski-Ostoja | The hemagglutinin mutation E391K of pandemic 2009 influenza revisited | 23 pages with figures (5 in main text, 3 in supplementary materials) | null | 10.1016/j.ympev.2013.08.020 | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic analyses based on small to moderately sized sets of sequential
data lead to overestimating mutation rates in influenza hemagglutinin (HA) by
at least an order of magnitude. Two major underlying reasons are incomplete
lineage sorting, and the possible absence from the analyzed sequence set of
some key ancestors. Additionally, during neighbor joining tree reconstruction
each mutation is considered equally important, regardless of its nature. Here
we have implemented a heuristic method optimizing site-dependent factors that
weight 1st, 2nd, and 3rd codon position mutations differently, allowing
incorrectly attributed sub-clades to be extricated. A least squares regression
analysis of the distribution of frequencies of all mutations observed on a
partially disentangled tree, for a large set of 3243 unique HA sequences and
along all nucleotide positions, was performed both for all mutations and for
the non-equivalent amino acid mutations: both cases demonstrate almost flat
gradients, with a very slight downward slope towards the 3'-end positions. The
mean mutation rates per sequence per year were 3.83*10^-4 for all mutations,
and 9.64*10^-5 for the non-equivalent ones.
| [
{
"created": "Fri, 8 Nov 2013 09:30:09 GMT",
"version": "v1"
}
] | 2013-11-11 | [
[
"Radomski",
"Jan P.",
""
],
[
"Płoński",
"Piotr",
""
],
[
"Zagórski-Ostoja",
"Włodzimierz",
""
]
] | Phylogenetic analyses based on small to moderately sized sets of sequential data lead to overestimating mutation rates in influenza hemagglutinin (HA) by at least an order of magnitude. Two major underlying reasons are incomplete lineage sorting, and the possible absence from the analyzed sequence set of some key ancestors. Additionally, during neighbor joining tree reconstruction each mutation is considered equally important, regardless of its nature. Here we have implemented a heuristic method optimizing site-dependent factors that weight 1st, 2nd, and 3rd codon position mutations differently, allowing incorrectly attributed sub-clades to be extricated. A least squares regression analysis of the distribution of frequencies of all mutations observed on a partially disentangled tree, for a large set of 3243 unique HA sequences and along all nucleotide positions, was performed both for all mutations and for the non-equivalent amino acid mutations: both cases demonstrate almost flat gradients, with a very slight downward slope towards the 3'-end positions. The mean mutation rates per sequence per year were 3.83*10^-4 for all mutations, and 9.64*10^-5 for the non-equivalent ones. |
q-bio/0408023 | Hiroshi Fujisaki | Hiroshi Fujisaki, Lintao Bu, John E. Straub | Probing vibrational energy relaxation in proteins using normal modes | 20 pages, 8 figures, to appear in "Normal Mode Analysis: Theory and
Applications to Biological and Chemical Systems" edited by Q. Cui and I.
Bahar | null | null | null | q-bio.BM | null | Vibrational energy relaxation (VER) of a selected mode in cytochrome c
(hemeprotein) in vacuum is studied using two theoretical approaches: One is the
equilibrium simulation approach with quantum correction factors, and the other
is the reduced model approach which describes the protein as an ensemble of
normal modes coupled with nonlinear coupling elements. Both methods result in
estimates of VER time (sub ps) for a CD stretching mode in the protein at room
temperature, that are in accord with the experimental data of Romesberg's
group. The applicability of the two methods is examined through a discussion of
the validity of Fermi's golden rule on which the two methods are based.
| [
{
"created": "Thu, 26 Aug 2004 23:46:58 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Sep 2004 23:31:10 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Fujisaki",
"Hiroshi",
""
],
[
"Bu",
"Lintao",
""
],
[
"Straub",
"John E.",
""
]
] | Vibrational energy relaxation (VER) of a selected mode in cytochrome c (hemeprotein) in vacuum is studied using two theoretical approaches: One is the equilibrium simulation approach with quantum correction factors, and the other is the reduced model approach which describes the protein as an ensemble of normal modes coupled with nonlinear coupling elements. Both methods result in estimates of VER time (sub ps) for a CD stretching mode in the protein at room temperature, that are in accord with the experimental data of Romesberg's group. The applicability of the two methods is examined through a discussion of the validity of Fermi's golden rule on which the two methods are based. |
2401.12126 | Gabriele D'Angella | Gabriele d'Angella and Christian Hennig | Approaches to biological species delimitation based on genetic and
spatial dissimilarity | Paper of 26 pages with 6 figures; appendix of 19 pages with 17
figures. February 2024 update: tiny notation edit, results unchanged. April
2024 update: additional simulation results and plots; introduction and
description of the methodologies edited; broader appendix with new charts.
June 2024 update: Minor edits in methods description | null | null | null | q-bio.PE stat.AP stat.ME | http://creativecommons.org/licenses/by/4.0/ | The delimitation of biological species, i.e., deciding which individuals
belong to the same species and whether and how many different species are
represented in a data set, is key to the conservation of biodiversity. Much
existing work uses only genetic data for species delimitation, often employing
some kind of cluster analysis. This can be misleading, because geographically
distant groups of individuals can be genetically quite different even if they
belong to the same species. We investigate the problem of testing whether two
potentially separated groups of individuals can belong to a single species or
not based on genetic and spatial data. Existing methods such as the partial
Mantel test and jackknife-based distance-distance regression are considered.
New approaches, i.e., an adaptation of a mixed effects model, a bootstrap
approach, and a jackknife version of partial Mantel, are proposed. All these
methods address the issue that distance data violate the independence
assumption for standard inference regarding correlation and regression; a
standard linear regression is also considered. The approaches are compared on
simulated meta-populations generated with SLiM and GSpace - two software
packages that can simulate spatially-explicit genetic data at an individual
level. Simulations show that the new jackknife version of the partial Mantel
test provides a good compromise between power and respecting the nominal type I
error rate. Mixed-effects models have larger power than jackknife-based
methods, but tend to display type I error rates slightly above the significance
level. An application on brassy ringlets concludes the paper.
| [
{
"created": "Mon, 22 Jan 2024 17:08:59 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Feb 2024 14:23:15 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Apr 2024 15:12:36 GMT",
"version": "v3"
},
{
"created": "Mon, 3 Jun 2024 11:22:44 GMT",
"version": "v4"
}
] | 2024-06-04 | [
[
"d'Angella",
"Gabriele",
""
],
[
"Hennig",
"Christian",
""
]
] | The delimitation of biological species, i.e., deciding which individuals belong to the same species and whether and how many different species are represented in a data set, is key to the conservation of biodiversity. Much existing work uses only genetic data for species delimitation, often employing some kind of cluster analysis. This can be misleading, because geographically distant groups of individuals can be genetically quite different even if they belong to the same species. We investigate the problem of testing whether two potentially separated groups of individuals can belong to a single species or not based on genetic and spatial data. Existing methods such as the partial Mantel test and jackknife-based distance-distance regression are considered. New approaches, i.e., an adaptation of a mixed effects model, a bootstrap approach, and a jackknife version of partial Mantel, are proposed. All these methods address the issue that distance data violate the independence assumption for standard inference regarding correlation and regression; a standard linear regression is also considered. The approaches are compared on simulated meta-populations generated with SLiM and GSpace - two software packages that can simulate spatially-explicit genetic data at an individual level. Simulations show that the new jackknife version of the partial Mantel test provides a good compromise between power and respecting the nominal type I error rate. Mixed-effects models have larger power than jackknife-based methods, but tend to display type I error rates slightly above the significance level. An application on brassy ringlets concludes the paper. |
2407.04525 | Alejandro Rodriguez-Garcia | Alejandro Rodriguez-Garcia, Jie Mei and Srikanth Ramaswamy | Enhancing learning in artificial neural networks through cellular
heterogeneity and neuromodulatory signaling | 34 pages, 4 figures, 3 boxes | null | null | null | q-bio.NC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent progress in artificial intelligence (AI) has been driven by insights
from neuroscience, particularly with the development of artificial neural
networks (ANNs). This has significantly enhanced the replication of complex
cognitive tasks such as vision and natural language processing. Despite these
advances, ANNs struggle with continual learning, adaptable knowledge transfer,
robustness, and resource efficiency - capabilities that biological systems
handle seamlessly. Specifically, ANNs often overlook the functional and
morphological diversity of the brain, hindering their computational
capabilities. Furthermore, incorporating cell-type specific neuromodulatory
effects into ANNs with neuronal heterogeneity could enable learning at two
spatial scales: spiking behavior at the neuronal level, and synaptic plasticity
at the circuit level, thereby potentially enhancing their learning abilities.
In this article, we summarize recent bio-inspired models, learning rules and
architectures and propose a biologically-informed framework for enhancing ANNs.
Our proposed dual-framework approach highlights the potential of spiking neural
networks (SNNs) for emulating diverse spiking behaviors and dendritic
compartments to simulate morphological and functional diversity of neuronal
computations. Finally, we outline how the proposed approach integrates
brain-inspired compartmental models and task-driven SNNs, balances
bioinspiration and complexity, and provides scalable solutions for pressing AI
challenges, such as continual learning, adaptability, robustness, and
resource-efficiency.
| [
{
"created": "Fri, 5 Jul 2024 14:11:28 GMT",
"version": "v1"
}
] | 2024-07-08 | [
[
"Rodriguez-Garcia",
"Alejandro",
""
],
[
"Mei",
"Jie",
""
],
[
"Ramaswamy",
"Srikanth",
""
]
] | Recent progress in artificial intelligence (AI) has been driven by insights from neuroscience, particularly with the development of artificial neural networks (ANNs). This has significantly enhanced the replication of complex cognitive tasks such as vision and natural language processing. Despite these advances, ANNs struggle with continual learning, adaptable knowledge transfer, robustness, and resource efficiency - capabilities that biological systems handle seamlessly. Specifically, ANNs often overlook the functional and morphological diversity of the brain, hindering their computational capabilities. Furthermore, incorporating cell-type specific neuromodulatory effects into ANNs with neuronal heterogeneity could enable learning at two spatial scales: spiking behavior at the neuronal level, and synaptic plasticity at the circuit level, thereby potentially enhancing their learning abilities. In this article, we summarize recent bio-inspired models, learning rules and architectures and propose a biologically-informed framework for enhancing ANNs. Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors and dendritic compartments to simulate morphological and functional diversity of neuronal computations. Finally, we outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balances bioinspiration and complexity, and provides scalable solutions for pressing AI challenges, such as continual learning, adaptability, robustness, and resource-efficiency. |
2010.14703 | Stefany Moreno-G\'amez | Stefany Moreno-G\'amez, Alma Dal Co, Simon van Vliet, Martin Ackermann | Microfluidics for single-cell study of antibiotic tolerance and
persistence induced by nutrient limitation | null | null | null | null | q-bio.QM q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Nutrient limitation is one of the most common triggers of antibiotic
tolerance and persistence. Here, we present two microfluidic setups to study
how spatial and temporal variation in nutrient availability leads to increased
survival of bacteria exposed to antibiotics. The first setup is designed to mimic the
growth dynamics of bacteria in spatially structured populations (e.g. biofilms)
and can be used to study how spatial gradients in nutrient availability,
created by the collective metabolic activity of a population, increase
antibiotic tolerance. The second setup captures the dynamics of
feast-and-famine cycles that bacteria recurrently encounter in nature, and can
be used to study how phenotypic heterogeneity in growth resumption after
starvation increases survival of clonal bacterial populations. In both setups,
the growth rates and metabolic activity of bacteria can be measured at the
single-cell level. This is useful to build a mechanistic understanding of how
spatiotemporal variation in nutrient availability triggers bacteria to enter
phenotypic states that increase their tolerance to antibiotics.
| [
{
"created": "Wed, 28 Oct 2020 02:19:38 GMT",
"version": "v1"
}
] | 2020-10-29 | [
[
"Moreno-Gámez",
"Stefany",
""
],
[
"Co",
"Alma Dal",
""
],
[
"van Vliet",
"Simon",
""
],
[
"Ackermann",
"Martin",
""
]
] | Nutrient limitation is one of the most common triggers of antibiotic tolerance and persistence. Here, we present two microfluidic setups to study how spatial and temporal variation in nutrient availability leads to increased survival of bacteria exposed to antibiotics. The first setup is designed to mimic the growth dynamics of bacteria in spatially structured populations (e.g. biofilms) and can be used to study how spatial gradients in nutrient availability, created by the collective metabolic activity of a population, increase antibiotic tolerance. The second setup captures the dynamics of feast-and-famine cycles that bacteria recurrently encounter in nature, and can be used to study how phenotypic heterogeneity in growth resumption after starvation increases survival of clonal bacterial populations. In both setups, the growth rates and metabolic activity of bacteria can be measured at the single-cell level. This is useful to build a mechanistic understanding of how spatiotemporal variation in nutrient availability triggers bacteria to enter phenotypic states that increase their tolerance to antibiotics. |
1001.0446 | Benjamin Torben-Nielsen | Benjamin Torben-Nielsen and Marylka Uusisaari and Klaus M. Stiefel | A novel method for determining the phase-response curves of neurons
based on minimizing spike-time prediction error | PDFLatex 7 A4 pages, 4 figures. New method to estimate the neuronal
phase-response curve | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regular firing neurons can be seen as oscillators. The phase-response curve
(PRC) describes how such neurons will respond to small excitatory
perturbations. Knowledge of the PRC is important as it is associated with the
excitability type of neurons and their capability to synchronize in networks.
In this work we present a novel method to estimate the PRC from experimental
data. We assume that a continuous noise signal can be discretized into
independent perturbations at evenly spaced phases and predict the next spike
based on these independent perturbations. The difference between the next
spike time predicted at every discretized phase and the actual next spike time
is used as the error signal to optimize the PRC. We test our method on model
data and experimentally obtained data and find that the newly developed method
is a robust and reliable method for the estimation of PRCs from experimental
data.
| [
{
"created": "Mon, 4 Jan 2010 10:54:46 GMT",
"version": "v1"
}
] | 2010-01-05 | [
[
"Torben-Nielsen",
"Benjamin",
""
],
[
"Uusisaari",
"Marylka",
""
],
[
"Stiefel",
"Klaus M.",
""
]
] | Regular firing neurons can be seen as oscillators. The phase-response curve (PRC) describes how such neurons will respond to small excitatory perturbations. Knowledge of the PRC is important as it is associated with the excitability type of neurons and their capability to synchronize in networks. In this work we present a novel method to estimate the PRC from experimental data. We assume that a continuous noise signal can be discretized into independent perturbations at evenly spaced phases and predict the next spike based on these independent perturbations. The difference between the next spike time predicted at every discretized phase and the actual next spike time is used as the error signal to optimize the PRC. We test our method on model data and experimentally obtained data and find that the newly developed method is a robust and reliable method for the estimation of PRCs from experimental data. |
1710.09118 | Chang Sub Kim | Chang Sub Kim | Recognition Dynamics in the Brain under the Free Energy Principle | 34 pages, 5 figures; Revised; Figure added | Neural Computation 30, 2616-2659 (2018);
https://doi.org/10.1162/neco_a_01115 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formulate the computational processes of perception in the framework of
the principle of least action by postulating the theoretical action as a time
integral of the free energy in the brain sciences. The free energy principle is
accordingly rephrased as follows: on autopoietic grounds, all viable organisms
attempt to minimize the sensory uncertainty about the unpredictable environment
over a temporal horizon. By varying the informational action, we derive the
brain's recognition dynamics (RD) which conducts Bayesian filtering of the
external causes from noisy sensory inputs. Consequently, we effectively cast
the gradient-descent scheme of minimizing the free energy into Hamiltonian
mechanics by addressing only positions and momenta of the organisms'
representations of the causal environment. To manifest the utility of our
theory, we show how the RD may be implemented in a neuronally based biophysical
model at a single-cell level and subsequently in a coarse-grained, hierarchical
architecture of the brain. We also present formal solutions to the RD for a
model brain in the linear regime and analyze the perceptual trajectories around
attractors in neural state space.
| [
{
"created": "Wed, 25 Oct 2017 08:34:02 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2017 06:46:02 GMT",
"version": "v2"
},
{
"created": "Sat, 20 Jan 2018 06:21:29 GMT",
"version": "v3"
}
] | 2019-02-21 | [
[
"Kim",
"Chang Sub",
""
]
] | We formulate the computational processes of perception in the framework of the principle of least action by postulating the theoretical action as a time integral of the free energy in the brain sciences. The free energy principle is accordingly rephrased as follows: on autopoietic grounds, all viable organisms attempt to minimize the sensory uncertainty about the unpredictable environment over a temporal horizon. By varying the informational action, we derive the brain's recognition dynamics (RD) which conducts Bayesian filtering of the external causes from noisy sensory inputs. Consequently, we effectively cast the gradient-descent scheme of minimizing the free energy into Hamiltonian mechanics by addressing only positions and momenta of the organisms' representations of the causal environment. To manifest the utility of our theory, we show how the RD may be implemented in a neuronally based biophysical model at a single-cell level and subsequently in a coarse-grained, hierarchical architecture of the brain. We also present formal solutions to the RD for a model brain in the linear regime and analyze the perceptual trajectories around attractors in neural state space. |
0806.4161 | Erez Persi | Erez Persi | Two Potential Mechanisms of Spatial Attention in Early Visual Areas | 15 pages, 4 figures, part of this work was presented in the 34th SFN
annual meeting (2004) | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate theoretically the effect of spatial attention on the
contrast-response function (CRF) and orientation-tuning curves in early visual
areas. We look at a model of a hypercolumn developed recently (Persi et al.,
2008), that accounts for both the contrast response and tuning properties in
the primary visual cortex, and extend it to two visual areas.
The effect of spatial attention is studied in a model of two inter-connected
visual areas, under two hypotheses that do not necessarily contradict each
other. The first hypothesis is that attention alters inter-areal feedback
synaptic strength, as has been proposed by many previous studies. A second,
new hypothesis is that attention effectively alters single-neuron input-output
properties. We show that with both mechanisms it is possible to achieve
attentional effects similar to those observed in experiments, namely
contrast-gain and response-gain effects, while keeping the orientation-tuning
curve width approximately contrast-invariant and attention-invariant.
Nevertheless, some differences occur and are discussed. We propose a simple
test on existing data based on the second hypothesis.
| [
{
"created": "Wed, 25 Jun 2008 17:47:32 GMT",
"version": "v1"
}
] | 2008-06-26 | [
[
"Persi",
"Erez",
""
]
] | We investigate theoretically the effect of spatial attention on the contrast-response function (CRF) and orientation-tuning curves in early visual areas. We look at a model of a hypercolumn developed recently (Persi et al., 2008), that accounts for both the contrast response and tuning properties in the primary visual cortex, and extend it to two visual areas. The effect of spatial attention is studied in a model of two inter-connected visual areas, under two hypotheses that do not necessarily contradict each other. The first hypothesis is that attention alters inter-areal feedback synaptic strength, as has been proposed by many previous studies. A second, new hypothesis is that attention effectively alters single-neuron input-output properties. We show that with both mechanisms it is possible to achieve attentional effects similar to those observed in experiments, namely contrast-gain and response-gain effects, while keeping the orientation-tuning curve width approximately contrast-invariant and attention-invariant. Nevertheless, some differences occur and are discussed. We propose a simple test on existing data based on the second hypothesis. |
1602.03710 | Matteo Cavaliere | Matteo Cavaliere, Guoli Yang, Vincent Danos, Vasilis Dakos | Detecting the Collapse of Cooperation in Evolving Networks | null | Scientific Reports, 6, 30845, 2016 | 10.1038/srep30845 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sustainability of structured biological, social, economic and ecological
communities is often determined by the outcome of social conflicts between
cooperative and selfish individuals (cheaters). Cheaters avoid the cost of
contributing to the community and can occasionally spread in the population
leading to the complete collapse of cooperation. Although such a collapse often
unfolds unexpectedly bearing the traits of a critical transition, it is unclear
whether one can detect the rising risk of cheater invasions and the loss of
cooperation in an evolving community. Here, we combine dynamical networks and
evolutionary game theory to study the abrupt loss of cooperation as a critical
transition. We estimate the risk of collapse of cooperation after the
introduction of a single cheater under gradually changing conditions. We
observe a systematic increase in the average time it takes for cheaters to be
eliminated from the community as the risk of collapse increases. We detect this
risk based on changes in community structure and composition. Nonetheless,
reliable detection depends on the mechanism that governs how cheaters evolve in
the community. Our results suggest possible avenues for detecting the loss of
cooperation in evolving communities.
| [
{
"created": "Thu, 11 Feb 2016 12:54:22 GMT",
"version": "v1"
}
] | 2016-09-23 | [
[
"Cavaliere",
"Matteo",
""
],
[
"Yang",
"Guoli",
""
],
[
"Danos",
"Vincent",
""
],
[
"Dakos",
"Vasilis",
""
]
] | The sustainability of structured biological, social, economic and ecological communities is often determined by the outcome of social conflicts between cooperative and selfish individuals (cheaters). Cheaters avoid the cost of contributing to the community and can occasionally spread in the population leading to the complete collapse of cooperation. Although such a collapse often unfolds unexpectedly bearing the traits of a critical transition, it is unclear whether one can detect the rising risk of cheater invasions and the loss of cooperation in an evolving community. Here, we combine dynamical networks and evolutionary game theory to study the abrupt loss of cooperation as a critical transition. We estimate the risk of collapse of cooperation after the introduction of a single cheater under gradually changing conditions. We observe a systematic increase in the average time it takes for cheaters to be eliminated from the community as the risk of collapse increases. We detect this risk based on changes in community structure and composition. Nonetheless, reliable detection depends on the mechanism that governs how cheaters evolve in the community. Our results suggest possible avenues for detecting the loss of cooperation in evolving communities. |
1501.06101 | Andrei Zinovyev Dr. | Urszula Czerwinska, Laurence Calzone, Emmanuel Barillot, Andrei
Zinovyev | DeDaL: Cytoscape 3.0 app for producing and morphing data-driven and
structure-driven network layouts | null | null | null | null | q-bio.QM q-bio.GN q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Visualization and analysis of molecular profiling data together with
biological networks can provide new mechanistic insights into
biological functions. Currently, high-throughput data are usually visualized on
top of predefined network layouts which are not always adapted to a given data
analysis task. We developed a Cytoscape app that allows users to construct
biological network layouts based on the data from molecular profiles imported
as values of node attributes. DeDaL is a Cytoscape 3.0 app which uses linear
and non-linear algorithms of dimension reduction to produce data-driven network
layouts based on multidimensional data (typically gene expression). DeDaL
implements several data pre-processing and layout post-processing steps such as
continuous morphing between two arbitrary network layouts and aligning one
network layout with respect to another one by rotating and mirroring. Combining
these possibilities facilitates creating insightful network layouts
representing both structural network features and the correlation patterns in
multivariate data. DeDaL is the first method allowing the construction of
biological network layouts from high-throughput data. DeDaL is freely
available for download, together with a step-by-step tutorial, at
http://bioinfo-out.curie.fr/projects/dedal/.
| [
{
"created": "Sun, 25 Jan 2015 00:31:19 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Feb 2015 10:47:39 GMT",
"version": "v2"
}
] | 2015-02-03 | [
[
"Czerwinska",
"Urszula",
""
],
[
"Calzone",
"Laurence",
""
],
[
"Barillot",
"Emmanuel",
""
],
[
"Zinovyev",
"Andrei",
""
]
] | Visualization and analysis of molecular profiling data together with biological networks can provide new mechanistic insights into biological functions. Currently, high-throughput data are usually visualized on top of predefined network layouts which are not always adapted to a given data analysis task. We developed a Cytoscape app that allows users to construct biological network layouts based on the data from molecular profiles imported as values of node attributes. DeDaL is a Cytoscape 3.0 app which uses linear and non-linear algorithms of dimension reduction to produce data-driven network layouts based on multidimensional data (typically gene expression). DeDaL implements several data pre-processing and layout post-processing steps such as continuous morphing between two arbitrary network layouts and aligning one network layout with respect to another one by rotating and mirroring. Combining these possibilities facilitates creating insightful network layouts representing both structural network features and the correlation patterns in multivariate data. DeDaL is the first method allowing the construction of biological network layouts from high-throughput data. DeDaL is freely available for download, together with a step-by-step tutorial, at http://bioinfo-out.curie.fr/projects/dedal/. |
1809.10613 | Leah B. Shaw | Adrienna Bingham, Leah B. Shaw | Intervention Strategies for Epidemics: Does Ignoring Time Delay Lead to
Incorrect Predictions? | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our paper investigates distributions of exposed and infectious time periods
in an epidemic model and how applying a disease control strategy affects the
model's accuracy. While ordinary differential equations are widely used for
their simplicity, they incorporate an exponential distribution for time spent
exposed or infectious. This allows for a high probability of unrealistically
short exposed and infectious time periods. We propose that caution must be
taken when applying intervention methods to basic models in order to avoid
inaccurate predictions. Delay differential equations, which use a delta
distribution for exposed and infectious periods, can provide better realism but
are more difficult to use and analyze. We introduce a multi-infected
compartment model to interpolate between an ODE model with exponential
distributions and a DDE model with delta distributions in order to investigate
the effect these distributions have on the dynamics of the system when an
intervention method is also included. Using steady state stability and
bifurcation analysis, this paper considers when simpler infectious disease
models can be used versus when more realistic time periods must be
incorporated. We find that the placement of control measures on subpopulations
and the length of the time delay impact the accuracy of the simpler models.
| [
{
"created": "Thu, 27 Sep 2018 16:24:48 GMT",
"version": "v1"
}
] | 2018-09-28 | [
[
"Bingham",
"Adrienna",
""
],
[
"Shaw",
"Leah B.",
""
]
] | Our paper investigates distributions of exposed and infectious time periods in an epidemic model and how applying a disease control strategy affects the model's accuracy. While ordinary differential equations are widely used for their simplicity, they incorporate an exponential distribution for time spent exposed or infectious. This allows for a high probability of unrealistically short exposed and infectious time periods. We propose that caution must be taken when applying intervention methods to basic models in order to avoid inaccurate predictions. Delay differential equations, which use a delta distribution for exposed and infectious periods, can provide better realism but are more difficult to use and analyze. We introduce a multi-infected compartment model to interpolate between an ODE model with exponential distributions and a DDE model with delta distributions in order to investigate the effect these distributions have on the dynamics of the system when an intervention method is also included. Using steady state stability and bifurcation analysis, this paper considers when simpler infectious disease models can be used versus when more realistic time periods must be incorporated. We find that the placement of control measures on subpopulations and the length of the time delay impact the accuracy of the simpler models. |
2406.06143 | Azenet Lopez | Azenet Lopez and Carlos Montemayor | The Integrated Information Theory needs Attention | 23 pages (including references), 6 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The Integrated Information Theory (IIT) might be our current best bet at a
scientific explanation of phenomenal consciousness. IIT focuses on the
distinctively subjective and phenomenological aspects of conscious experience.
Currently, it offers the fundaments of a formal account, but future
developments shall explain the qualitative structures of every possible
conscious experience. But this ambitious project is hindered by one fundamental
limitation. IIT fails to acknowledge the crucial roles of attention in
generating phenomenally conscious experience and shaping its contents. Here, we
argue that IIT urgently needs an account of attention. Without this account,
IIT cannot explain important informational differences between different kinds
of experiences. Furthermore, though some IIT proponents famously endorse a
double dissociation between consciousness and attention, close analysis reveals
that such a dissociation is in fact incompatible with IIT. Notably, the issues
we raise for IIT will likely arise for many internalist theories of conscious
contents in philosophy, especially theories with primitivist inclinations. Our
arguments also extend to the recently popularized structuralist approaches.
Overall, our discussion highlights how considerations about attention are
indispensable for scientific as well as philosophical theorizing about
conscious experience.
| [
{
"created": "Mon, 10 Jun 2024 10:01:19 GMT",
"version": "v1"
}
] | 2024-06-11 | [
[
"Lopez",
"Azenet",
""
],
[
"Montemayor",
"Carlos",
""
]
] | The Integrated Information Theory (IIT) might be our current best bet at a scientific explanation of phenomenal consciousness. IIT focuses on the distinctively subjective and phenomenological aspects of conscious experience. Currently, it offers the fundaments of a formal account, but future developments shall explain the qualitative structures of every possible conscious experience. But this ambitious project is hindered by one fundamental limitation. IIT fails to acknowledge the crucial roles of attention in generating phenomenally conscious experience and shaping its contents. Here, we argue that IIT urgently needs an account of attention. Without this account, IIT cannot explain important informational differences between different kinds of experiences. Furthermore, though some IIT proponents famously endorse a double dissociation between consciousness and attention, close analysis reveals that such a dissociation is in fact incompatible with IIT. Notably, the issues we raise for IIT will likely arise for many internalist theories of conscious contents in philosophy, especially theories with primitivist inclinations. Our arguments also extend to the recently popularized structuralist approaches. Overall, our discussion highlights how considerations about attention are indispensable for scientific as well as philosophical theorizing about conscious experience. |
2209.15171 | Zhuoran Qiao | Zhuoran Qiao, Weili Nie, Arash Vahdat, Thomas F. Miller III, Anima
Anandkumar | State-specific protein-ligand complex structure prediction with a
multi-scale deep generative model | 19 pages, 5 figures, 1 table & Supplementary Information (18 pages, 2
figures, 7 tables, 12 algorithms); supersedes an earlier version
arXiv:2209.15171v1 presented at the NeurIPS 2022 MLSB workshop as a
contributed talk | null | null | null | q-bio.QM cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The binding complexes formed by proteins and small molecule ligands are
ubiquitous and critical to life. Despite recent advancements in protein
structure prediction, existing algorithms are so far unable to systematically
predict the binding ligand structures along with their regulatory effects on
protein folding. To address this discrepancy, we present NeuralPLexer, a
computational approach that can directly predict protein-ligand complex
structures solely using protein sequence and ligand molecular graph inputs.
NeuralPLexer adopts a deep generative model to sample the 3D structures of the
binding complex and their conformational changes at an atomistic resolution.
The model is based on a diffusion process that incorporates essential
biophysical constraints and a multi-scale geometric deep learning system to
iteratively sample residue-level contact maps and all heavy-atom coordinates in
a hierarchical manner. NeuralPLexer achieves state-of-the-art performance
compared to all existing methods on benchmarks for both protein-ligand blind
docking and flexible binding site structure recovery. Moreover, owing to its
specificity in sampling both ligand-free-state and ligand-bound-state
ensembles, NeuralPLexer consistently outperforms AlphaFold2 in terms of global
protein structure accuracy on both representative structure pairs with large
conformational changes (average TM-score=0.93) and recently determined
ligand-binding proteins (average TM-score=0.89). Case studies reveal that the
predicted conformational variations are consistent with structure determination
experiments for important targets, including human KRAS$^\textrm{G12C}$,
ketol-acid reductoisomerase, and purine GPCRs. Our study suggests that a
data-driven approach can capture the structural cooperativity between proteins
and small molecules, showing promise in accelerating the design of enzymes,
drug molecules, and beyond.
| [
{
"created": "Fri, 30 Sep 2022 01:46:38 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Apr 2023 19:40:00 GMT",
"version": "v2"
}
] | 2023-04-21 | [
[
"Qiao",
"Zhuoran",
""
],
[
"Nie",
"Weili",
""
],
[
"Vahdat",
"Arash",
""
],
[
"Miller",
"Thomas F.",
"III"
],
[
"Anandkumar",
"Anima",
""
]
] | The binding complexes formed by proteins and small molecule ligands are ubiquitous and critical to life. Despite recent advancements in protein structure prediction, existing algorithms are so far unable to systematically predict the binding ligand structures along with their regulatory effects on protein folding. To address this discrepancy, we present NeuralPLexer, a computational approach that can directly predict protein-ligand complex structures solely using protein sequence and ligand molecular graph inputs. NeuralPLexer adopts a deep generative model to sample the 3D structures of the binding complex and their conformational changes at an atomistic resolution. The model is based on a diffusion process that incorporates essential biophysical constraints and a multi-scale geometric deep learning system to iteratively sample residue-level contact maps and all heavy-atom coordinates in a hierarchical manner. NeuralPLexer achieves state-of-the-art performance compared to all existing methods on benchmarks for both protein-ligand blind docking and flexible binding site structure recovery. Moreover, owing to its specificity in sampling both ligand-free-state and ligand-bound-state ensembles, NeuralPLexer consistently outperforms AlphaFold2 in terms of global protein structure accuracy on both representative structure pairs with large conformational changes (average TM-score=0.93) and recently determined ligand-binding proteins (average TM-score=0.89). Case studies reveal that the predicted conformational variations are consistent with structure determination experiments for important targets, including human KRAS$^\textrm{G12C}$, ketol-acid reductoisomerase, and purine GPCRs. Our study suggests that a data-driven approach can capture the structural cooperativity between proteins and small molecules, showing promise in accelerating the design of enzymes, drug molecules, and beyond. |
1604.02233 | Leena Salmela | Leena Salmela, Riku Walve, Eric Rivals and Esko Ukkonen | Accurate selfcorrection of errors in long reads using de Bruijn graphs | paper accepted at the RECOMB-Seq 2016 | Bioinformatics, Volume 33, Issue 6, 15 March 2017, Pages 799--806 | 10.1093/bioinformatics/btw321 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New long read sequencing technologies, like PacBio SMRT and Oxford NanoPore,
can produce sequencing reads up to 50,000 bp long but with an error rate of at
least 15%. Reducing the error rate is necessary for subsequent utilisation of
the reads in, e.g., de novo genome assembly. The error correction problem has
been tackled either by aligning the long reads against each other or by a
hybrid approach that uses the more accurate short reads produced by second
generation sequencing technologies to correct the long reads. We present an
error correction method that uses long reads only. The method consists of two
phases: first we use an iterative alignment-free correction method based on de
Bruijn graphs with increasing length of k-mers, and second, the corrected reads
are further polished using long-distance dependencies that are found using
multiple alignments. According to our experiments the proposed method is the
most accurate one relying on long reads only for read sets with high coverage.
Furthermore, when the coverage of the read set is at least 75x, the throughput
of the new method is at least 20% higher. LoRMA is freely available at
http://www.cs.helsinki.fi/u/lmsalmel/LoRMA/.
| [
{
"created": "Fri, 8 Apr 2016 06:02:43 GMT",
"version": "v1"
}
] | 2021-11-18 | [
[
"Salmela",
"Leena",
""
],
[
"Walve",
"Riku",
""
],
[
"Rivals",
"Eric",
""
],
[
"Ukkonen",
"Esko",
""
]
] | New long read sequencing technologies, like PacBio SMRT and Oxford NanoPore, can produce sequencing reads up to 50,000 bp long but with an error rate of at least 15%. Reducing the error rate is necessary for subsequent utilisation of the reads in, e.g., de novo genome assembly. The error correction problem has been tackled either by aligning the long reads against each other or by a hybrid approach that uses the more accurate short reads produced by second generation sequencing technologies to correct the long reads. We present an error correction method that uses long reads only. The method consists of two phases: first we use an iterative alignment-free correction method based on de Bruijn graphs with increasing length of k-mers, and second, the corrected reads are further polished using long-distance dependencies that are found using multiple alignments. According to our experiments the proposed method is the most accurate one relying on long reads only for read sets with high coverage. Furthermore, when the coverage of the read set is at least 75x, the throughput of the new method is at least 20% higher. LoRMA is freely available at http://www.cs.helsinki.fi/u/lmsalmel/LoRMA/. |
1111.2769 | Domenico Fuoco | Domenico Fuoco | Classification Framework and Structure-Activity-Relationship (SAR) of
Tetracycline-Structure-Based Drugs | 13 pages, 3 figures, 2 schemes, 1 table;
http://www.mdpi.com/2079-6382/1/1/1 | null | 10.3390/antibiotics1010001 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By studying the literature about Tetracyclines (TCs), it becomes clearly
evident that TCs are very dynamic molecules. In some cases, their
structure-activity-relationship (SAR) is known, especially against bacteria,
while against other targets, it is virtually unknown. In other diverse
fields of research, such as neurology, oncology and virology, the utility and
activity of the tetracyclines are being discovered and are also emerging as new
technological fronts. The first aim of this paper is to classify the compounds
already used in therapy and to prepare the schematic structure in which to
include the next generation of TCs. A further aim of this work is to introduce
a new framework for the classification of old and new TCs, using a medicinal
chemistry approach to the structure of those drugs. A fully documented
Structure-Activity-Relationship (SAR) is presented with the analysis data of
antibacterial and nonantibacterial (antifungal, antiviral and anticancer)
tetracyclines. Lipophilicity of functional groups and conformations
interchangeably are determining rules in biological activities of TCs.
| [
{
"created": "Fri, 11 Nov 2011 15:08:04 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Apr 2012 17:10:26 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Jun 2012 15:10:29 GMT",
"version": "v3"
}
] | 2012-06-13 | [
[
"Fuoco",
"Domenico",
""
]
] | By studying the literature about Tetracyclines (TCs), it becomes clearly evident that TCs are very dynamic molecules. In some cases, their structure-activity-relationship (SAR) is known, especially against bacteria, while against other targets, it is virtually unknown. In other diverse fields of research, such as neurology, oncology and virology, the utility and activity of the tetracyclines are being discovered and are also emerging as new technological fronts. The first aim of this paper is to classify the compounds already used in therapy and to prepare the schematic structure in which to include the next generation of TCs. A further aim of this work is to introduce a new framework for the classification of old and new TCs, using a medicinal chemistry approach to the structure of those drugs. A fully documented Structure-Activity-Relationship (SAR) is presented with the analysis data of antibacterial and nonantibacterial (antifungal, antiviral and anticancer) tetracyclines. Lipophilicity of functional groups and conformations interchangeably are determining rules in biological activities of TCs. |
1007.3565 | Subhadip Raychaudhuri | Subhadip Raychaudhuri, Joanna Skommer, Kristen Henty, Nigel Birch,
Thomas Brittain | Neuroglobin protects nerve cells from apoptosis by inhibiting the
intrinsic pathway of cell death | 11 pages | Apoptosis 15:401-411 (2010) | 10.1007/s10495-009-0436-5 | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past few years, overwhelming evidence has accrued that a high level of
expression of the protein neuroglobin protects neurons in vitro, in animal
models, and in humans, against cell death associated with hypoxic and amyloid
insult. However, until now, the exact mechanism of neuroglobin's protective
action has not been determined. Using cell biology and biochemical approaches
we demonstrate that neuroglobin inhibits the intrinsic pathway of apoptosis in
vitro and intervenes in activation of pro-caspase 9 by interaction with
cytochrome c. Using systems level information of the apoptotic signalling
reactions we have developed a quantitative model of neuroglobin inhibition of
apoptosis, which simulates neuroglobin blocking of apoptosome formation at a
single cell level. Furthermore, this model allows us to explore the effect of
neuroglobin in conditions not easily accessible to experimental study. We found
that the protection of neurons by neuroglobin is very concentration sensitive.
The impact of neuroglobin may arise from both its binding to cytochrome c and
its subsequent redox reaction, although the binding alone is sufficient to
block pro-caspase 9 activation. These data provide an explanation for the action
of neuroglobin in the protection of nerve cells from unwanted apoptosis.
| [
{
"created": "Wed, 21 Jul 2010 06:26:54 GMT",
"version": "v1"
}
] | 2010-07-22 | [
[
"Raychaudhuri",
"Subhadip",
""
],
[
"Skommer",
"Joanna",
""
],
[
"Henty",
"Kristen",
""
],
[
"Birch",
"Nigel",
""
],
[
"Brittain",
"Thomas",
""
]
] | In the past few years, overwhelming evidence has accrued that a high level of expression of the protein neuroglobin protects neurons in vitro, in animal models, and in humans, against cell death associated with hypoxic and amyloid insult. However, until now, the exact mechanism of neuroglobin's protective action has not been determined. Using cell biology and biochemical approaches we demonstrate that neuroglobin inhibits the intrinsic pathway of apoptosis in vitro and intervenes in activation of pro-caspase 9 by interaction with cytochrome c. Using systems level information of the apoptotic signalling reactions we have developed a quantitative model of neuroglobin inhibition of apoptosis, which simulates neuroglobin blocking of apoptosome formation at a single cell level. Furthermore, this model allows us to explore the effect of neuroglobin in conditions not easily accessible to experimental study. We found that the protection of neurons by neuroglobin is very concentration sensitive. The impact of neuroglobin may arise from both its binding to cytochrome c and its subsequent redox reaction, although the binding alone is sufficient to block pro-caspase 9 activation. These data provide an explanation for the action of neuroglobin in the protection of nerve cells from unwanted apoptosis. |
2204.04609 | Anzhelika Koldaeva | Paula Villa Martin, Anzhelika Koldaeva and Simone Pigolotti | Coalescent dynamics of planktonic communities | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Planktonic communities are extremely diverse and include a vast number of
rare species. The dynamics of these rare species is best described by
individual-based models. However, individual-based approaches to planktonic
diversity face substantial difficulties, due to the large number of individuals
required to make realistic predictions. In this paper, we study the diversity of
planktonic communities by means of a spatial coalescence model that
incorporates transport by oceanic currents. As a main advantage, our approach
requires simulating a number of individuals equal to the size of the sample one
is interested in, rather than the size of the entire community. By theoretical
analysis and simulations, we explore the conditions upon which our coalescence
model is equivalent to individual-based dynamics. As an application, we use our
model to predict the impact of chaotic advection by oceanic currents on
biodiversity. We conclude that the coalescent approach makes it possible to simulate
marine microbial communities much more efficiently than with individual-based
models.
| [
{
"created": "Sun, 10 Apr 2022 05:40:20 GMT",
"version": "v1"
}
] | 2022-04-12 | [
[
"Martin",
"Paula Villa",
""
],
[
"Koldaeva",
"Anzhelika",
""
],
[
"Pigolotti",
"Simone",
""
]
] | Planktonic communities are extremely diverse and include a vast number of rare species. The dynamics of these rare species is best described by individual-based models. However, individual-based approaches to planktonic diversity face substantial difficulties, due to the large number of individuals required to make realistic predictions. In this paper, we study the diversity of planktonic communities by means of a spatial coalescence model that incorporates transport by oceanic currents. As a main advantage, our approach requires simulating a number of individuals equal to the size of the sample one is interested in, rather than the size of the entire community. By theoretical analysis and simulations, we explore the conditions upon which our coalescence model is equivalent to individual-based dynamics. As an application, we use our model to predict the impact of chaotic advection by oceanic currents on biodiversity. We conclude that the coalescent approach makes it possible to simulate marine microbial communities much more efficiently than with individual-based models. |
1909.03992 | Muhammad Afzal | Muhammad Afzal, Jinsoo Park, Ghulam Destgeer, Husnain Ahmed, Syed Atif
Iqrar, Sanghee Kim, Sunghyun Kang, Anas Alazzam, Tae-Sung Yoon, and Hyung Jin
Sung | Acoustomicrofluidic separation of tardigrades from raw cultures for
sample preparation | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tardigrades are microscopic animals widely known for their survival
capabilities under extreme conditions. They are the focus of current research
in the fields of taxonomy, biogeography, genomics, proteomics, development,
space biology, evolution, and ecology. Tardigrades, such as Hypsibius
exemplaris, are being advocated as a next-generation model organism for genomic
and developmental studies. The raw culture of H. exemplaris usually contains
tardigrades themselves, their eggs, and algal food and feces. Experimentation
with tardigrades often requires the demanding and laborious separation of
tardigrades from raw samples to prepare pure and contamination-free tardigrade
samples. In this paper, we propose a two-step acousto-microfluidic separation
method to isolate tardigrades from raw samples. In the first step, a passive
microfluidic filter composed of an array of traps is used to remove large algal
clusters in the raw sample. In the second step, a surface acoustic wave-based
active microfluidic separation device is used to continuously deflect
tardigrades from their original streamlines inside the microchannel and thus
selectively isolate them from algae and eggs. The experimental results
demonstrated the efficient tardigrade separation with a recovery rate of 96%
and an algae impurity of 4% on average in a continuous, contactless, automated,
rapid, biocompatible manner.
| [
{
"created": "Mon, 9 Sep 2019 17:04:41 GMT",
"version": "v1"
}
] | 2019-09-10 | [
[
"Afzal",
"Muhammad",
""
],
[
"Park",
"Jinsoo",
""
],
[
"Destgeer",
"Ghulam",
""
],
[
"Ahmed",
"Husnain",
""
],
[
"Iqrar",
"Syed Atif",
""
],
[
"Kim",
"Sanghee",
""
],
[
"Kang",
"Sunghyun",
""
],
[
"Alazzam",
"Anas",
""
],
[
"Yoon",
"Tae-Sung",
""
],
[
"Sung",
"Hyung Jin",
""
]
] | Tardigrades are microscopic animals widely known for their survival capabilities under extreme conditions. They are the focus of current research in the fields of taxonomy, biogeography, genomics, proteomics, development, space biology, evolution, and ecology. Tardigrades, such as Hypsibius exemplaris, are being advocated as a next-generation model organism for genomic and developmental studies. The raw culture of H. exemplaris usually contains tardigrades themselves, their eggs, and algal food and feces. Experimentation with tardigrades often requires the demanding and laborious separation of tardigrades from raw samples to prepare pure and contamination-free tardigrade samples. In this paper, we propose a two-step acousto-microfluidic separation method to isolate tardigrades from raw samples. In the first step, a passive microfluidic filter composed of an array of traps is used to remove large algal clusters in the raw sample. In the second step, a surface acoustic wave-based active microfluidic separation device is used to continuously deflect tardigrades from their original streamlines inside the microchannel and thus selectively isolate them from algae and eggs. The experimental results demonstrated the efficient tardigrade separation with a recovery rate of 96% and an algae impurity of 4% on average in a continuous, contactless, automated, rapid, biocompatible manner. |
2110.11347 | Junhao Wen | Junhao Wen, Cynthia H.Y. Fu, Duygu Tosun, Yogasudha Veturi, Zhijian
Yang, Ahmed Abdulkadir, Elizabeth Mamourian, Dhivya Srinivasan, Jingxuan Bao,
Guray Erus, Haochang Shou, Mohamad Habes, Jimit Doshi, Erdem Varol, Scott R
Mackin, Aristeidis Sotiras, Yong Fan, Andrew J. Saykin, Yvette I. Sheline, Li
Shen, Marylyn D. Ritchie, David A. Wolk, Marilyn Albert, Susan M. Resnick,
Christos Davatzikos | Multidimensional representations in late-life depression: convergence in
neuroimaging, cognition, clinical symptomatology and genetics | null | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Late-life depression (LLD) is characterized by considerable heterogeneity in
clinical manifestation. Unraveling such heterogeneity would aid in elucidating
etiological mechanisms and pave the road to precision and individualized
medicine. We sought to delineate, cross-sectionally and longitudinally,
disease-related heterogeneity in LLD linked to neuroanatomy, cognitive
functioning, clinical symptomatology, and genetic profiles. Multimodal data
from a multicentre sample (N=996) were analyzed. A semi-supervised clustering
method (HYDRA) was applied to regional grey matter (GM) brain volumes to derive
dimensional representations. Two dimensions were identified, which accounted
for the LLD-related heterogeneity in voxel-wise GM maps, white matter (WM)
fractional anisotropy (FA), neurocognitive functioning, clinical phenotype, and
genetics. Dimension one (Dim1) demonstrated relatively preserved brain anatomy
without WM disruptions relative to healthy controls. In contrast, dimension two
(Dim2) showed widespread brain atrophy and WM integrity disruptions, along with
cognitive impairment and higher depression severity. Moreover, one de novo
independent genetic variant (rs13120336) was significantly associated with Dim
1 but not with Dim 2. Notably, the two dimensions demonstrated significant
SNP-based heritability of 18-27% within the general population (N=12,518 in
UKBB). Lastly, in a subset of individuals having longitudinal measurements,
Dim2 demonstrated a more rapid longitudinal decrease in GM and brain age, and
was more likely to progress to Alzheimers disease, compared to Dim1 (N=1,413
participants and 7,225 scans from ADNI, BLSA, and BIOCARD datasets).
| [
{
"created": "Wed, 20 Oct 2021 22:43:44 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Oct 2021 12:16:46 GMT",
"version": "v2"
}
] | 2021-10-26 | [
[
"Wen",
"Junhao",
""
],
[
"Fu",
"Cynthia H. Y.",
""
],
[
"Tosun",
"Duygu",
""
],
[
"Veturi",
"Yogasudha",
""
],
[
"Yang",
"Zhijian",
""
],
[
"Abdulkadir",
"Ahmed",
""
],
[
"Mamourian",
"Elizabeth",
""
],
[
"Srinivasan",
"Dhivya",
""
],
[
"Bao",
"Jingxuan",
""
],
[
"Erus",
"Guray",
""
],
[
"Shou",
"Haochang",
""
],
[
"Habes",
"Mohamad",
""
],
[
"Doshi",
"Jimit",
""
],
[
"Varol",
"Erdem",
""
],
[
"Mackin",
"Scott R",
""
],
[
"Sotiras",
"Aristeidis",
""
],
[
"Fan",
"Yong",
""
],
[
"Saykin",
"Andrew J.",
""
],
[
"Sheline",
"Yvette I.",
""
],
[
"Shen",
"Li",
""
],
[
"Ritchie",
"Marylyn D.",
""
],
[
"Wolk",
"David A.",
""
],
[
"Albert",
"Marilyn",
""
],
[
"Resnick",
"Susan M.",
""
],
[
"Davatzikos",
"Christos",
""
]
] | Late-life depression (LLD) is characterized by considerable heterogeneity in clinical manifestation. Unraveling such heterogeneity would aid in elucidating etiological mechanisms and pave the road to precision and individualized medicine. We sought to delineate, cross-sectionally and longitudinally, disease-related heterogeneity in LLD linked to neuroanatomy, cognitive functioning, clinical symptomatology, and genetic profiles. Multimodal data from a multicentre sample (N=996) were analyzed. A semi-supervised clustering method (HYDRA) was applied to regional grey matter (GM) brain volumes to derive dimensional representations. Two dimensions were identified, which accounted for the LLD-related heterogeneity in voxel-wise GM maps, white matter (WM) fractional anisotropy (FA), neurocognitive functioning, clinical phenotype, and genetics. Dimension one (Dim1) demonstrated relatively preserved brain anatomy without WM disruptions relative to healthy controls. In contrast, dimension two (Dim2) showed widespread brain atrophy and WM integrity disruptions, along with cognitive impairment and higher depression severity. Moreover, one de novo independent genetic variant (rs13120336) was significantly associated with Dim 1 but not with Dim 2. Notably, the two dimensions demonstrated significant SNP-based heritability of 18-27% within the general population (N=12,518 in UKBB). Lastly, in a subset of individuals having longitudinal measurements, Dim2 demonstrated a more rapid longitudinal decrease in GM and brain age, and was more likely to progress to Alzheimers disease, compared to Dim1 (N=1,413 participants and 7,225 scans from ADNI, BLSA, and BIOCARD datasets). |
1309.5033 | Shuji Kaieda | Shuji Kaieda, Barbara Setlow, Peter Setlow, Bertil Halle | Mobility of core water in Bacillus subtilis spores by $^2$H NMR | 11 pages, 5 figures | Biophys. J. 105, 2016--2023 (2013) | 10.1016/j.bpj.2013.09.022 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacterial spores in a metabolically dormant state can survive long periods
without nutrients under extreme environmental conditions. The molecular basis
of spore dormancy is not well understood, but the distribution and physical
state of water within the spore is thought to play an important role. Two
scenarios have been proposed for the spore's core region, containing the DNA
and most enzymes. In the gel scenario, the core is a structured macromolecular
framework permeated by mobile water. In the glass scenario, the entire core,
including the water, is an amorphous solid and the quenched molecular diffusion
accounts for the spore's dormancy and thermal stability. Here, we use $^2$H
magnetic relaxation dispersion to selectively monitor water mobility in the
core of Bacillus subtilis spores in the presence and absence of core Mn$^{2+}$
ions. We also report and analyze the solid-state $^2$H NMR spectrum from these
spores. Our NMR data clearly support the gel scenario with highly mobile core
water (~ 25 ps average rotational correlation time). Furthermore, we find that
the large depot of manganese in the core is nearly anhydrous, with merely 1.7 %
on average of the maximum sixfold water coordination.
| [
{
"created": "Thu, 19 Sep 2013 15:59:01 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Oct 2013 19:39:39 GMT",
"version": "v2"
},
{
"created": "Wed, 9 Oct 2013 16:35:47 GMT",
"version": "v3"
},
{
"created": "Sat, 12 Oct 2013 21:54:00 GMT",
"version": "v4"
},
{
"created": "Thu, 7 Nov 2013 12:55:34 GMT",
"version": "v5"
}
] | 2013-11-08 | [
[
"Kaieda",
"Shuji",
""
],
[
"Setlow",
"Barbara",
""
],
[
"Setlow",
"Peter",
""
],
[
"Halle",
"Bertil",
""
]
] | Bacterial spores in a metabolically dormant state can survive long periods without nutrients under extreme environmental conditions. The molecular basis of spore dormancy is not well understood, but the distribution and physical state of water within the spore is thought to play an important role. Two scenarios have been proposed for the spore's core region, containing the DNA and most enzymes. In the gel scenario, the core is a structured macromolecular framework permeated by mobile water. In the glass scenario, the entire core, including the water, is an amorphous solid and the quenched molecular diffusion accounts for the spore's dormancy and thermal stability. Here, we use $^2$H magnetic relaxation dispersion to selectively monitor water mobility in the core of Bacillus subtilis spores in the presence and absence of core Mn$^{2+}$ ions. We also report and analyze the solid-state $^2$H NMR spectrum from these spores. Our NMR data clearly support the gel scenario with highly mobile core water (~ 25 ps average rotational correlation time). Furthermore, we find that the large depot of manganese in the core is nearly anhydrous, with merely 1.7 % on average of the maximum sixfold water coordination. |
2002.12316 | Karunia Putra Wijaya | Karunia Putra Wijaya, Joseph P\'aez Ch\'avez, Dipo Aldila | An epidemic model highlighting humane social awareness and vector-host
lifespan ratio variation | null | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many vector-borne disease epidemic models neglect the fact that in modern
human civilization, social awareness as well as self-defence systems are
overwhelming against advanced propagation of the disease. News is becoming
more effortlessly accessible through social media and mobile apps, while
apparatuses for disease prevention are inclined to be more abundant and
affordable. Here we study a simple host-vector model in which media-triggered
social awareness and seasonality in vector breeding are taken into account.
There appears a certain threshold indicating the alarming outbreak: the number
of infective human individuals above which the self-defence system for the
susceptible subpopulation is actuated. A model where the infection rate
revolves in the likelihood of poverty, reluctancy, tiresomeness, perceiving the
disease as being easily curable, absence of medical access, and overwhelming
hungrier vectors is proposed. Further discoveries are made from undertaking
disparate time scales between human and vector population dynamics. The
resulting slow-fast system discloses notable dynamics in which solution
trajectories confine to the slow manifold and critical manifold, before finally
ending up at equilibria. How coinciding the slow manifold with the critical
manifold enhances periodic forcing is also studied. The finding on hysteresis
loops gives insight into how defining the alarming outbreak critically perturbs the
basic reproductive number, which later helps keep the incidence cycle on small
magnitudes.
| [
{
"created": "Thu, 27 Feb 2020 18:37:58 GMT",
"version": "v1"
}
] | 2020-02-28 | [
[
"Wijaya",
"Karunia Putra",
""
],
[
"Chávez",
"Joseph Páez",
""
],
[
"Aldila",
"Dipo",
""
]
] | Many vector-borne disease epidemic models neglect the fact that in modern human civilization, social awareness as well as self-defence systems are overwhelming against advanced propagation of the disease. News is becoming more effortlessly accessible through social media and mobile apps, while apparatuses for disease prevention are inclined to be more abundant and affordable. Here we study a simple host-vector model in which media-triggered social awareness and seasonality in vector breeding are taken into account. There appears a certain threshold indicating the alarming outbreak: the number of infective human individuals above which the self-defence system for the susceptible subpopulation is actuated. A model where the infection rate revolves in the likelihood of poverty, reluctancy, tiresomeness, perceiving the disease as being easily curable, absence of medical access, and overwhelming hungrier vectors is proposed. Further discoveries are made from undertaking disparate time scales between human and vector population dynamics. The resulting slow-fast system discloses notable dynamics in which solution trajectories confine to the slow manifold and critical manifold, before finally ending up at equilibria. How coinciding the slow manifold with the critical manifold enhances periodic forcing is also studied. The finding on hysteresis loops gives insight into how defining the alarming outbreak critically perturbs the basic reproductive number, which later helps keep the incidence cycle on small magnitudes. |
1611.07989 | Jie Lin | Jie Lin, Ariel Amir | The effects of stochasticity at the single-cell level and cell size
control on the population growth | Cell Systems, 2017 | null | null | null | q-bio.PE cond-mat.dis-nn cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Establishing a quantitative connection between the population growth rate and
the generation times of single cells is a prerequisite for understanding
evolutionary dynamics of microbes. However, existing theories fail to account
for the experimentally observed correlations between mother-daughter generation
times that are unavoidable when cell size is controlled for - which is
essentially always the case. Here, we study population-level growth in the
presence of cell size control and corroborate our theory using experimental
measurements of single-cell growth rates. We derive a closed formula for the
population growth rate and demonstrate that it only depends on the single-cell
growth rate variability, not other sources of stochasticity. Our work provides
an evolutionary rationale for the narrow growth rate distributions often
observed in nature: when single-cell growth rates are less variable but have a
fixed mean, the population will exhibit an enhanced population growth rate, as
long as the correlations between the mother and daughter cells' growth rates
are not too strong.
| [
{
"created": "Wed, 23 Nov 2016 21:00:44 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Sep 2017 00:44:26 GMT",
"version": "v2"
}
] | 2017-09-19 | [
[
"Lin",
"Jie",
""
],
[
"Amir",
"Ariel",
""
]
] | Establishing a quantitative connection between the population growth rate and the generation times of single cells is a prerequisite for understanding evolutionary dynamics of microbes. However, existing theories fail to account for the experimentally observed correlations between mother-daughter generation times that are unavoidable when cell size is controlled for - which is essentially always the case. Here, we study population-level growth in the presence of cell size control and corroborate our theory using experimental measurements of single-cell growth rates. We derive a closed formula for the population growth rate and demonstrate that it only depends on the single-cell growth rate variability, not other sources of stochasticity. Our work provides an evolutionary rationale for the narrow growth rate distributions often observed in nature: when single-cell growth rates are less variable but have a fixed mean, the population will exhibit an enhanced population growth rate, as long as the correlations between the mother and daughter cells' growth rates are not too strong. |
1112.6424 | Peter Waddell | Peter J. Waddell, Jorge Ramos and Xi Tan | Homo denisova, Correspondence Spectral Analysis, Finite Sites Reticulate
Hierarchical Coalescent Models and the Ron Jeremy Hypothesis | 43 pages, 9 figures, 9 tables | null | null | null | q-bio.PE q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article shows how to fit reticulate finite and infinite sites sequence
spectra to aligned data from five modern human genomes (San, Yoruba, French,
Han and Papuan) plus two archaic humans (Denisovan and Neanderthal), to better
infer demographic parameters. These include interbreeding between distinct
lineages. Major improvements in the fit of the sequence spectrum are made with
successively more complicated models. Findings include some evidence of a male
biased gene flow from the Denisova lineage to Papuan ancestors and possibly
even more archaic gene flow. It is unclear if there is evidence for more than
one Neanderthal interbreeding, as the evidence suggesting this largely
disappears when a finite sites model is fitted.
| [
{
"created": "Thu, 29 Dec 2011 20:50:13 GMT",
"version": "v1"
}
] | 2011-12-30 | [
[
"Waddell",
"Peter J.",
""
],
[
"Ramos",
"Jorge",
""
],
[
"Tan",
"Xi",
""
]
] | This article shows how to fit reticulate finite and infinite sites sequence spectra to aligned data from five modern human genomes (San, Yoruba, French, Han and Papuan) plus two archaic humans (Denisovan and Neanderthal), to better infer demographic parameters. These include interbreeding between distinct lineages. Major improvements in the fit of the sequence spectrum are made with successively more complicated models. Findings include some evidence of a male biased gene flow from the Denisova lineage to Papuan ancestors and possibly even more archaic gene flow. It is unclear if there is evidence for more than one Neanderthal interbreeding, as the evidence suggesting this largely disappears when a finite sites model is fitted. |
q-bio/0412020 | Marco Cosentino Lagomarsino | M. Cosentino Lagomarsino, P. Jona, B. Bassetti | The Logic Backbone of a Transcription Network | 11 pages, 4 figures final | Phys Rev Lett. 2005 Oct 7;95(15): | null | null | q-bio.MN cond-mat.stat-mech physics.bio-ph | null | A great part of the effort in the study of coarse grained models of
transcription networks is directed to the analysis of their dynamical features.
In this letter, we consider the \emph{equilibrium} properties of such systems,
showing that the logic backbone underlying all dynamic descriptions has the
structure of a computational optimization problem. It involves variables, which
correspond to gene expression levels, and constraints, which describe the
effect of \emph{cis-}regulatory signal integration functions. In the simple
paradigmatic case of Boolean variables and signal integration functions, we
derive and discuss phase diagrams. Notably, the model exhibits a connectivity
transition between a regime of simple, but uncertain, gene control, to a regime
of complex combinatorial control.
| [
{
"created": "Fri, 10 Dec 2004 15:38:04 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Dec 2004 09:43:05 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Dec 2005 15:26:12 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Lagomarsino",
"M. Cosentino",
""
],
[
"Jona",
"P.",
""
],
[
"Bassetti",
"B.",
""
]
] | A great part of the effort in the study of coarse grained models of transcription networks is directed to the analysis of their dynamical features. In this letter, we consider the \emph{equilibrium} properties of such systems, showing that the logic backbone underlying all dynamic descriptions has the structure of a computational optimization problem. It involves variables, which correspond to gene expression levels, and constraints, which describe the effect of \emph{cis-}regulatory signal integration functions. In the simple paradigmatic case of Boolean variables and signal integration functions, we derive and discuss phase diagrams. Notably, the model exhibits a connectivity transition between a regime of simple, but uncertain, gene control, to a regime of complex combinatorial control. |
2312.01923 | Ruslan Mukhamadiarov | Ruslan Mukhamadiarov, Matteo Ciarchi, Fabrizio Olmeda, Steffen Rulands | Clonal dynamics of surface-driven growing tissues | null | null | null | null | q-bio.QM physics.bio-ph q-bio.CB q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | The self-organization of cells into complex tissues relies on a tight
coordination of cell behavior. Identifying the cellular processes driving
tissue growth is key to understanding the emergence of tissue forms and
devising targeted therapies for aberrant growth, such as in cancer. Inferring
the mode of tissue growth, whether it is driven by cells on the surface or
cells in the bulk, is possible in cell culture experiments, but difficult in
most tissues in living organisms (in vivo). Genetic tracing experiments, where
a subset of cells is labeled with inheritable markers have become important
experimental tools to study cell fate in vivo. Here, we show that the mode of
tissue growth is reflected in the size distribution of the progeny of marked
cells. To this end, we derive the clone-size distributions using analytical
calculations in the limit of negligible cell migration and cell death, and we
test our predictions with an agent-based stochastic sampling technique. We show
that for surface-driven growth the clone-size distribution takes a
characteristic power-law form with an exponent determined by fluctuations of
the tissue surface. Our results show how the mode of tissue growth can be
inferred from genetic tracing experiments.
| [
{
"created": "Mon, 4 Dec 2023 14:30:44 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2023 19:36:13 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Mar 2024 12:23:33 GMT",
"version": "v3"
}
] | 2024-03-27 | [
[
"Mukhamadiarov",
"Ruslan",
""
],
[
"Ciarchi",
"Matteo",
""
],
[
"Olmeda",
"Fabrizio",
""
],
[
"Rulands",
"Steffen",
""
]
] | The self-organization of cells into complex tissues relies on a tight coordination of cell behavior. Identifying the cellular processes driving tissue growth is key to understanding the emergence of tissue forms and devising targeted therapies for aberrant growth, such as in cancer. Inferring the mode of tissue growth, whether it is driven by cells on the surface or cells in the bulk, is possible in cell culture experiments, but difficult in most tissues in living organisms (in vivo). Genetic tracing experiments, where a subset of cells is labeled with inheritable markers have become important experimental tools to study cell fate in vivo. Here, we show that the mode of tissue growth is reflected in the size distribution of the progeny of marked cells. To this end, we derive the clone-size distributions using analytical calculations in the limit of negligible cell migration and cell death, and we test our predictions with an agent-based stochastic sampling technique. We show that for surface-driven growth the clone-size distribution takes a characteristic power-law form with an exponent determined by fluctuations of the tissue surface. Our results show how the mode of tissue growth can be inferred from genetic tracing experiments. |
1109.3268 | Simone Linz | Celine Scornavacca, Simone Linz, and Benjamin Albrecht | A first step towards computing all hybridization networks for two rooted
binary phylogenetic trees | 21 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, considerable effort has been put into developing fast algorithms to
reconstruct a rooted phylogenetic network that explains two rooted phylogenetic
trees and has a minimum number of hybridization vertices. With the standard
approach to tackle this problem being combinatorial, the reconstructed network
is rarely unique. From a biological point of view, it is therefore of
importance to not only compute one network, but all possible networks. In this
paper, we make a first step towards approaching this goal by presenting the
first algorithm---called allMAAFs---that calculates all
maximum-acyclic-agreement forests for two rooted binary phylogenetic trees on
the same set of taxa.
| [
{
"created": "Thu, 15 Sep 2011 06:29:37 GMT",
"version": "v1"
}
] | 2011-09-16 | [
[
"Scornavacca",
"Celine",
""
],
[
"Linz",
"Simone",
""
],
[
"Albrecht",
"Benjamin",
""
]
] | Recently, considerable effort has been put into developing fast algorithms to reconstruct a rooted phylogenetic network that explains two rooted phylogenetic trees and has a minimum number of hybridization vertices. With the standard approach to tackle this problem being combinatorial, the reconstructed network is rarely unique. From a biological point of view, it is therefore of importance to not only compute one network, but all possible networks. In this paper, we make a first step towards approaching this goal by presenting the first algorithm---called allMAAFs---that calculates all maximum-acyclic-agreement forests for two rooted binary phylogenetic trees on the same set of taxa. |
q-bio/0409009 | Eugene Korotkov V. | E. V. Korotkov | Enzyme as a thermal resonance pump | 2 pages, 1 figure | null | null | null | q-bio.BM | null | We found latent periodicity of 150 protein families now. We suppose that
latent periodicity can determine a spectrum of resonance oscillations in
proteins.
| [
{
"created": "Mon, 6 Sep 2004 20:35:55 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Korotkov",
"E. V.",
""
]
] | We found latent periodicity of 150 protein families now. We suppose that latent periodicity can determine a spectrum of resonance oscillations in proteins. |
1105.6069 | Armando G. M. Neves | Armando G. M. Neves | Interbreeding conditions for explaining Neandertal DNA in living humans:
the nonneutral case | Submitted to BIOMAT 2011, 19 pages, 7 figures | null | null | null | q-bio.PE math-ph math.MP physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider here an extension of a previous work by Neves and Serva, still
unpublished, which estimates the amount of interbreeding between anatomically
modern Africans and Neandertals necessary for explaining the experimental fact
that 1 to 4% of the DNA in non-African living humans is of Neandertal origin.
In that work we considered that Africans and Neandertals had the same fitness
(neutral hypothesis) and Neandertal extinction was thus an event of fortune. In
this work we consider that Africans had larger fitnesses. We show results for
four values for the fitness difference: 1%, 5%, 10% and 20% and compare them
with the corresponding neutral results. Some technical differences with respect
to the neutral case appear. We conclude that even with 1% fitness difference
Neandertals extinction comes up in too small a time, so the neutral model looks
more suitable for explaining the known data on occupation of some caves in
Israel for a very long time, alternately by Africans and Neandertals.
| [
{
"created": "Mon, 30 May 2011 19:00:06 GMT",
"version": "v1"
}
] | 2011-05-31 | [
[
"Neves",
"Armando G. M.",
""
]
] | We consider here an extension of a previous work by Neves and Serva, still unpublished, which estimates the amount of interbreeding between anatomically modern Africans and Neandertals necessary for explaining the experimental fact that 1 to 4% of the DNA in non-African living humans is of Neandertal origin. In that work we considered that Africans and Neandertals had the same fitness (neutral hypothesis) and Neandertal extinction was thus an event of fortune. In this work we consider that Africans had larger fitnesses. We show results for four values for the fitness difference: 1%, 5%, 10% and 20% and compare them with the corresponding neutral results. Some technical differences with respect to the neutral case appear. We conclude that even with 1% fitness difference Neandertals extinction comes up in too small a time, so the neutral model looks more suitable for explaining the known data on occupation of some caves in Israel for a very long time, alternately by Africans and Neandertals. |
1305.0366 | Kunihiko Kaneko | Kunihiko Kaneko | Evolution of Robustness and Plasticity under Environmental Fluctuation:
Formulation in terms of Phenotypic Variances | 23 pages 11 figures | J. Stat. Phys. 148 (2012) 686-704 | 10.1007/s10955-012-0563-1 | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The characterization of plasticity, robustness, and evolvability, an
important issue in biology, is studied in terms of phenotypic fluctuations. By
numerically evolving gene regulatory networks, the proportionality between the
phenotypic variances of epigenetic and genetic origins is confirmed. The former
is given by the variance of the phenotypic fluctuation due to noise in the
developmental process; and the latter, by the variance of the phenotypic
fluctuation due to genetic mutation. The relationship suggests a link between
robustness to noise and to mutation, since robustness can be defined by the
sharpness of the distribution of the phenotype. Next, the proportionality
between the variances is demonstrated to also hold over expressions of
different genes (phenotypic traits) when the system acquires robustness through
the evolution. Then, evolution under environmental variation is numerically
investigated and it is found that both the adaptability to a novel environment
and the robustness are made compatible when a certain degree of phenotypic
fluctuations exists due to noise. The highest adaptability is achieved at a
certain noise level at which the gene expression dynamics are near the critical
state to lose the robustness. Based on our results, we revisit Waddington's
canalization and genetic assimilation with regard to the two types of
phenotypic fluctuations.
| [
{
"created": "Thu, 2 May 2013 08:21:47 GMT",
"version": "v1"
}
] | 2015-06-15 | [
[
"Kaneko",
"Kunihiko",
""
]
] | The characterization of plasticity, robustness, and evolvability, an important issue in biology, is studied in terms of phenotypic fluctuations. By numerically evolving gene regulatory networks, the proportionality between the phenotypic variances of epigenetic and genetic origins is confirmed. The former is given by the variance of the phenotypic fluctuation due to noise in the developmental process; and the latter, by the variance of the phenotypic fluctuation due to genetic mutation. The relationship suggests a link between robustness to noise and to mutation, since robustness can be defined by the sharpness of the distribution of the phenotype. Next, the proportionality between the variances is demonstrated to also hold over expressions of different genes (phenotypic traits) when the system acquires robustness through the evolution. Then, evolution under environmental variation is numerically investigated and it is found that both the adaptability to a novel environment and the robustness are made compatible when a certain degree of phenotypic fluctuations exists due to noise. The highest adaptability is achieved at a certain noise level at which the gene expression dynamics are near the critical state to lose the robustness. Based on our results, we revisit Waddington's canalization and genetic assimilation with regard to the two types of phenotypic fluctuations. |
2110.07328 | Patrick Dondl | Patrick Dondl and Marius Zeinhofer | A parameter study on optimal scaffolds in a simple model for bone
regeneration | 9 pages, 5 figures, 1 table | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple model for scaffold aided bone regeneration. In this
model, only macroscopic quantities, e.g., locally averaged osteoblast
densities, are considered. This allows for use of this model in an optimization
algorithm, whose outcome is an optimal scaffold porosity distribution. This
optimal scaffold naturally depends on the choice of parameters in the model,
and we provide a parameter study with a particular focus on patients with
reduced bone regeneration or reduced vascularization capacity.
| [
{
"created": "Thu, 14 Oct 2021 13:03:03 GMT",
"version": "v1"
}
] | 2021-10-15 | [
[
"Dondl",
"Patrick",
""
],
[
"Zeinhofer",
"Marius",
""
]
] | We propose a simple model for scaffold aided bone regeneration. In this model, only macroscopic quantities, e.g., locally averaged osteoblast densities, are considered. This allows for use of this model in an optimization algorithm, whose outcome is an optimal scaffold porosity distribution. This optimal scaffold naturally depends on the choice of parameters in the model, and we provide a parameter study with a particular focus on patients with reduced bone regeneration or reduced vascularization capacity. |
1707.05711 | Wai-Tong Louis Fan | Wai-Tong Louis Fan, Sebastien Roch | Statistically consistent and computationally efficient inference of
ancestral DNA sequences in the TKF91 model under dense taxon sampling | Title modified, 31 pages, 2 Figures and 1 table | null | null | null | q-bio.PE cs.CE math.PR math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In evolutionary biology, the speciation history of living organisms is
represented graphically by a phylogeny, that is, a rooted tree whose leaves
correspond to current species and branchings indicate past speciation events.
Phylogenies are commonly estimated from molecular sequences, such as DNA
sequences, collected from the species of interest. At a high level, the idea
behind this inference is simple: the further apart in the Tree of Life are two
species, the greater is the number of mutations to have accumulated in their
genomes since their most recent common ancestor. In order to obtain accurate
estimates in phylogenetic analyses, it is standard practice to employ
statistical approaches based on stochastic models of sequence evolution on a
tree. For tractability, such models necessarily make simplifying assumptions
about the evolutionary mechanisms involved. In particular, commonly omitted are
insertions and deletions of nucleotides -- also known as indels.
Properly accounting for indels in statistical phylogenetic analyses remains a
major challenge in computational evolutionary biology. Here we consider the
problem of reconstructing ancestral sequences on a known phylogeny in a model
of sequence evolution incorporating nucleotide substitutions, insertions and
deletions, specifically the classical TKF91 process. We focus on the case of
dense phylogenies of bounded height, which we refer to as the taxon-rich
setting, where statistical consistency is achievable. We give the first
polynomial-time ancestral reconstruction algorithm with provable guarantees
under constant rates of mutation. Our algorithm succeeds when the phylogeny
satisfies the "big bang" condition, a necessary and sufficient condition for
statistical consistency in this context.
| [
{
"created": "Tue, 18 Jul 2017 15:55:57 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Jul 2017 14:50:49 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Aug 2019 01:42:25 GMT",
"version": "v3"
}
] | 2019-08-02 | [
[
"Fan",
"Wai-Tong Louis",
""
],
[
"Roch",
"Sebastien",
""
]
] | In evolutionary biology, the speciation history of living organisms is represented graphically by a phylogeny, that is, a rooted tree whose leaves correspond to current species and branchings indicate past speciation events. Phylogenies are commonly estimated from molecular sequences, such as DNA sequences, collected from the species of interest. At a high level, the idea behind this inference is simple: the further apart in the Tree of Life are two species, the greater is the number of mutations to have accumulated in their genomes since their most recent common ancestor. In order to obtain accurate estimates in phylogenetic analyses, it is standard practice to employ statistical approaches based on stochastic models of sequence evolution on a tree. For tractability, such models necessarily make simplifying assumptions about the evolutionary mechanisms involved. In particular, commonly omitted are insertions and deletions of nucleotides -- also known as indels. Properly accounting for indels in statistical phylogenetic analyses remains a major challenge in computational evolutionary biology. Here we consider the problem of reconstructing ancestral sequences on a known phylogeny in a model of sequence evolution incorporating nucleotide substitutions, insertions and deletions, specifically the classical TKF91 process. We focus on the case of dense phylogenies of bounded height, which we refer to as the taxon-rich setting, where statistical consistency is achievable. We give the first polynomial-time ancestral reconstruction algorithm with provable guarantees under constant rates of mutation. Our algorithm succeeds when the phylogeny satisfies the "big bang" condition, a necessary and sufficient condition for statistical consistency in this context. |
2403.10402 | Jeffrey Herrmann | Jeffrey W. Herrmann, Hongjie Liu, and Donald K. Milton | Modeling the Spread of COVID-19 in University Communities | 26 pages | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Mathematical and simulation models are often used to predict the spread of a
disease and estimate the impact of public health interventions, and many such
models have been developed and used during the COVID-19 pandemic. This paper
describes a study that systematically compared models for a university
community, which has a much smaller but more connected population than a state
or nation. We developed a stochastic agent-based model, a deterministic
compartment model, and a model based on ordinary differential equations. All
three models represented the disease progression with the same
susceptible-exposed-infectious-recovered (SEIR) model. We created a baseline
scenario for a population of 14,000 students and faculty and eleven other
scenarios for combinations of interventions such as regular testing, contact
tracing, quarantine, isolation, moving courses online, mask wearing, improving
ventilation, and vaccination. We used parameter values from other
epidemiological studies and incorporated data about COVID-19 testing in College
Park, Maryland, but the study was designed to compare modeling approaches to
each other using a synthetic population. For each scenario we used the models
to estimate the number of persons who become infected over a semester of 119
days. We evaluated the models by comparing their predictions and evaluating
their parsimony and computational effort. The agent-based model (ABM) and the
deterministic compartment model (DCM) had similar results with cyclic flow of
persons to and from quarantine, but the model based on ordinary differential
equations failed to capture these dynamics. The ABM's computation time was much
greater than the other two models' computation time. The DCM captured some of
the dynamics that were present in the ABM's predictions and, like those from
the ABM, clearly showed the importance of testing and moving classes on-line.
| [
{
"created": "Fri, 15 Mar 2024 15:36:14 GMT",
"version": "v1"
}
] | 2024-03-18 | [
[
"Herrmann",
"Jeffrey W.",
""
],
[
"Liu",
"Hongjie",
""
],
[
"Milton",
"Donald K.",
""
]
] | Mathematical and simulation models are often used to predict the spread of a disease and estimate the impact of public health interventions, and many such models have been developed and used during the COVID-19 pandemic. This paper describes a study that systematically compared models for a university community, which has a much smaller but more connected population than a state or nation. We developed a stochastic agent-based model, a deterministic compartment model, and a model based on ordinary differential equations. All three models represented the disease progression with the same susceptible-exposed-infectious-recovered (SEIR) model. We created a baseline scenario for a population of 14,000 students and faculty and eleven other scenarios for combinations of interventions such as regular testing, contact tracing, quarantine, isolation, moving courses online, mask wearing, improving ventilation, and vaccination. We used parameter values from other epidemiological studies and incorporated data about COVID-19 testing in College Park, Maryland, but the study was designed to compare modeling approaches to each other using a synthetic population. For each scenario we used the models to estimate the number of persons who become infected over a semester of 119 days. We evaluated the models by comparing their predictions and evaluating their parsimony and computational effort. The agent-based model (ABM) and the deterministic compartment model (DCM) had similar results with cyclic flow of persons to and from quarantine, but the model based on ordinary differential equations failed to capture these dynamics. The ABM's computation time was much greater than the other two models' computation time. The DCM captured some of the dynamics that were present in the ABM's predictions and, like those from the ABM, clearly showed the importance of testing and moving classes on-line. |
1307.4118 | Yaniv Brandvain | Yaniv Brandvain, Tanja Slotte, Khaled Hazzouri, Stephen Wright and
Graham Coop | Genomic identification of founding haplotypes reveals the history of the
selfing species Capsella rubella | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The shift from outcrossing to self-fertilization is among the most common
transitions in plants. Until recently, however, a genome-wide view of this
transition has been obscured by a dearth of appropriate data and the lack of
appropriate population genomic methods to interpret such data. Here, we present
novel analyses detailing the origin of the selfing species, Capsella rubella,
which recently split from its outcrossing sister, Capsella grandiflora. Due to
the recency of the split, most variation within C. rubella is found within C.
grandiflora. We can therefore identify genomic regions where two C. rubella
individuals have inherited the same or different segments of ancestral
diversity (i.e. founding haplotypes) present in C. rubella's founder(s). Based
on this analysis, we show that C. rubella was founded by multiple individuals
drawn from a diverse ancestral population closely related to extant C.
grandiflora, that drift and selection have rapidly homogenized most of this
ancestral variation since C. rubella's founding, and that little novel
variation has accumulated within this time. Despite the extensive loss of
ancestral variation, the approximately 25% of the genome for which two C.
rubella individuals have inherited different founding haplotypes makes up
roughly 90% of the genetic variation between them. To extend these findings, we
develop a coalescent model that utilizes the inferred frequency of founding
haplotypes and variation within founding haplotypes to estimate that C. rubella
was founded by a potentially large number of individuals 50-100 kya, and has
subsequently experienced a 20X reduction in its effective population size. As
population genomic data from an increasing number of outcrossing/selfing pairs
are generated, analyses like this here will facilitate a fine-scaled view of
the evolutionary and demographic impact of the transition to
self-fertilization.
| [
{
"created": "Mon, 15 Jul 2013 22:18:00 GMT",
"version": "v1"
}
] | 2013-07-17 | [
[
"Brandvain",
"Yaniv",
""
],
[
"Slotte",
"Tanja",
""
],
[
"Hazzouri",
"Khaled",
""
],
[
"Wright",
"Stephen",
""
],
[
"Coop",
"Graham",
""
]
] | The shift from outcrossing to self-fertilization is among the most common transitions in plants. Until recently, however, a genome-wide view of this transition has been obscured by a dearth of appropriate data and the lack of appropriate population genomic methods to interpret such data. Here, we present novel analyses detailing the origin of the selfing species, Capsella rubella, which recently split from its outcrossing sister, Capsella grandiflora. Due to the recency of the split, most variation within C. rubella is found within C. grandiflora. We can therefore identify genomic regions where two C. rubella individuals have inherited the same or different segments of ancestral diversity (i.e. founding haplotypes) present in C. rubella's founder(s). Based on this analysis, we show that C. rubella was founded by multiple individuals drawn from a diverse ancestral population closely related to extant C. grandiflora, that drift and selection have rapidly homogenized most of this ancestral variation since C. rubella's founding, and that little novel variation has accumulated within this time. Despite the extensive loss of ancestral variation, the approximately 25% of the genome for which two C. rubella individuals have inherited different founding haplotypes makes up roughly 90% of the genetic variation between them. To extend these findings, we develop a coalescent model that utilizes the inferred frequency of founding haplotypes and variation within founding haplotypes to estimate that C. rubella was founded by a potentially large number of individuals 50-100 kya, and has subsequently experienced a 20X reduction in its effective population size. As population genomic data from an increasing number of outcrossing/selfing pairs are generated, analyses like this here will facilitate a fine-scaled view of the evolutionary and demographic impact of the transition to self-fertilization. |
2210.02730 | Thomas Widiez | Nathana\"el Jacquier (RDP), Thomas Widiez (RDP) | Absent daddy, but important father | null | Nature Plants, Nature Publishing Group, 2021, 7 (12), pp.1544-1545 | 10.1038/s41477-021-01030-9 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixing maternal and paternal genomes is the base of plant sexual
reproduction, but some so-called 'haploid inducer lines' lead to the formation
of seeds bearing well-developed embryos with solely the maternal genome. A
recent study adds a new piece to the puzzle of this enigmatic in planta haploid
embryo induction process.
| [
{
"created": "Thu, 6 Oct 2022 07:41:22 GMT",
"version": "v1"
}
] | 2022-10-07 | [
[
"Jacquier",
"Nathanaël",
"",
"RDP"
],
[
"Widiez",
"Thomas",
"",
"RDP"
]
] | Mixing maternal and paternal genomes is the base of plant sexual reproduction, but some so-called 'haploid inducer lines' lead to the formation of seeds bearing well-developed embryos with solely the maternal genome. A recent study adds a new piece to the puzzle of this enigmatic in planta haploid embryo induction process. |
1808.04196 | Anirban Das | Anirban Das and Anna Levina | Critical neuronal models with relaxed timescales separation | null | Phys. Rev. X 9, 021062 (2019) | 10.1103/PhysRevX.9.021062 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Power laws in nature are considered to be signatures of complexity. The
theory of self-organized criticality (SOC) was proposed to explain their
origins. A longstanding principle of SOC is the \emph{separation of timescales}
axiom. It dictates that external input is delivered to the system at a much
slower rate compared to the timescale of internal dynamics. The statistics of
neural avalanches in the brain was demonstrated to follow a power law,
indicating closeness to a critical state. Moreover, criticality was shown to be
a beneficial state for various computations, leading to the hypothesis that the
brain is a SOC system. However, for neuronal systems that are constantly
bombarded by incoming signals, the separation of timescales assumption is
unnatural. Recently it was experimentally demonstrated that a proper correction
of the avalanche detection algorithm to account for the increased drive during
task performance leads to a change of the power-law exponent from $1.5$ to
approximately $1.3$, but there is so far no theoretical explanation for this
change. Here we investigate the importance of timescales separation, by partly
abandoning it in various models. We achieve it by allowing for external input
during the avalanche, without compromising the separation of avalanches. We
develop an analytic treatment and provide numerical simulations of a simple
neuronal model.
| [
{
"created": "Tue, 17 Jul 2018 06:47:39 GMT",
"version": "v1"
}
] | 2019-07-03 | [
[
"Das",
"Anirban",
""
],
[
"Levina",
"Anna",
""
]
] | Power laws in nature are considered to be signatures of complexity. The theory of self-organized criticality (SOC) was proposed to explain their origins. A longstanding principle of SOC is the \emph{separation of timescales} axiom. It dictates that external input is delivered to the system at a much slower rate compared to the timescale of internal dynamics. The statistics of neural avalanches in the brain was demonstrated to follow a power law, indicating closeness to a critical state. Moreover, criticality was shown to be a beneficial state for various computations, leading to the hypothesis that the brain is a SOC system. However, for neuronal systems that are constantly bombarded by incoming signals, the separation of timescales assumption is unnatural. Recently it was experimentally demonstrated that a proper correction of the avalanche detection algorithm to account for the increased drive during task performance leads to a change of the power-law exponent from $1.5$ to approximately $1.3$, but there is so far no theoretical explanation for this change. Here we investigate the importance of timescales separation, by partly abandoning it in various models. We achieve it by allowing for external input during the avalanche, without compromising the separation of avalanches. We develop an analytic treatment and provide numerical simulations of a simple neuronal model. |
2401.02624 | Mi Jin Lee | Mi Jin Lee, Sudo Yi, Deok-Sun Lee | Correlation-enhanced viable core in metabolic networks | 8 pages, 4 figures | null | null | null | q-bio.MN physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cellular ingredient concentrations can be stabilized by adjusting generation
and consumption rates through multiple pathways. To explore the portion of
cellular metabolism equipped with multiple pathways, we categorize individual
metabolic reactions and compounds as viable or inviable: A compound is viable
if processed by two or more reactions, and a reaction is viable if all of its
substrates and products are viable. Using this classification, we identify the
maximal subnetwork of viable nodes, referred to as the {\it viable core}, in
bipartite metabolic networks across thousands of species. The obtained viable
cores are remarkably larger than those in degree-preserving randomized
networks, while their broad degree distributions commonly enable the viable
cores to shrink gradually as reaction nodes are deleted. We demonstrate that
the positive degree-degree correlations of the empirical networks may underlie
the enlarged viable cores compared to the randomized networks. By investigating
the relation between degree and cross-species frequency of metabolic compounds
and reactions, we elucidate the evolutionary origin of the correlations.
| [
{
"created": "Fri, 5 Jan 2024 04:02:07 GMT",
"version": "v1"
}
] | 2024-01-08 | [
[
"Lee",
"Mi Jin",
""
],
[
"Yi",
"Sudo",
""
],
[
"Lee",
"Deok-Sun",
""
]
] | Cellular ingredient concentrations can be stabilized by adjusting generation and consumption rates through multiple pathways. To explore the portion of cellular metabolism equipped with multiple pathways, we categorize individual metabolic reactions and compounds as viable or inviable: A compound is viable if processed by two or more reactions, and a reaction is viable if all of its substrates and products are viable. Using this classification, we identify the maximal subnetwork of viable nodes, referred to as the {\it viable core}, in bipartite metabolic networks across thousands of species. The obtained viable cores are remarkably larger than those in degree-preserving randomized networks, while their broad degree distributions commonly enable the viable cores to shrink gradually as reaction nodes are deleted. We demonstrate that the positive degree-degree correlations of the empirical networks may underlie the enlarged viable cores compared to the randomized networks. By investigating the relation between degree and cross-species frequency of metabolic compounds and reactions, we elucidate the evolutionary origin of the correlations. |
1706.08131 | Juan B Gutierrez | Elizabeth D. Trippe, Jacob B. Aguilar, Yi H. Yan, Mustafa V. Nural,
Jessica A. Brady, Juan B. Gutierrez | Introducing Data Primitives: Data Formats for the SKED Framework | 10 pages, 3 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The past few years have seen a tremendous increase in the size
and complexity of datasets. Scientific and clinical studies must incorporate
datasets that cross multiple spatial and temporal scales to describe a
particular phenomenon. The storage and accessibility of these heterogeneous
datasets in a way that is useful to researchers and yet extensible to new data
types is a major challenge.
Methods: In order to overcome these obstacles, we propose the use of data
primitives as a common currency between analytical methods. The four data
primitives we have identified are time series, text, annotated graph and
triangulated mesh, with associated metadata. Using only data primitives to
store data and as algorithm input, output, and intermediate results, promotes
interoperability, scalability, and reproducibility in scientific studies.
Results: Data primitives were used in a multi-omic, multi-scale systems
biology study of malaria infection in non-human primates to perform many types
of integrative analysis quickly and efficiently.
Conclusions: Using data primitives as a common currency for both data storage
and for cross talk between analytical methods enables the analysis of complex
multi-omic, multi-scale datasets in a reproducible modular fashion.
| [
{
"created": "Sun, 25 Jun 2017 15:54:30 GMT",
"version": "v1"
}
] | 2017-06-27 | [
[
"Trippe",
"Elizabeth D.",
""
],
[
"Aguilar",
"Jacob B.",
""
],
[
"Yan",
"Yi H.",
""
],
[
"Nural",
"Mustafa V.",
""
],
[
"Brady",
"Jessica A.",
""
],
[
"Gutierrez",
"Juan B.",
""
]
] | Background: The past few years have seen a tremendous increase in the size and complexity of datasets. Scientific and clinical studies must incorporate datasets that cross multiple spatial and temporal scales to describe a particular phenomenon. The storage and accessibility of these heterogeneous datasets in a way that is useful to researchers and yet extensible to new data types is a major challenge. Methods: In order to overcome these obstacles, we propose the use of data primitives as a common currency between analytical methods. The four data primitives we have identified are time series, text, annotated graph and triangulated mesh, with associated metadata. Using only data primitives to store data and as algorithm input, output, and intermediate results, promotes interoperability, scalability, and reproducibility in scientific studies. Results: Data primitives were used in a multi-omic, multi-scale systems biology study of malaria infection in non-human primates to perform many types of integrative analysis quickly and efficiently. Conclusions: Using data primitives as a common currency for both data storage and for cross talk between analytical methods enables the analysis of complex multi-omic, multi-scale datasets in a reproducible modular fashion. |
2104.01533 | Adrian Jones | Daoyu Zhang, Adrian Jones, Yuri Deigin, Karl Sirotkin, Alejandro Sousa | Unexpected novel Merbecovirus discoveries in agricultural sequencing
datasets from Wuhan, China | Supplementary information and data can be found in Zenodo datasets
doi: 10.5281/zenodo.4660981, doi: 10.5281/zenodo.4620604, doi:
10.5281/zenodo.4399248 | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | In this study we document the unexpected discovery of multiple coronaviruses
and a BSL-3 pathogen in agricultural cotton and rice sequencing datasets. In
particular, we have identified a novel HKU5-related Merbecovirus in a cotton
dataset sequenced by the Huazhong Agricultural University in 2017. We have also
found an infectious clone sequence containing a novel HKU4-related Merbecovirus
related to MERS coronavirus in a rice dataset sequenced by the Huazhong
Agricultural University in early 2020. Another HKU5-related Merbecovirus, as
well as Japanese encephalitis virus, were identified in a cotton dataset
sequenced by the Huazhong Agricultural University in 2018. An HKU3-related
Betacoronavirus was found in a Mus musculus sequencing dataset from the Wuhan
Institute of Virology in 2017. Finally, a SARS-WIV1-like Betacoronavirus was
found in a rice dataset sequenced by the Fujian Agriculture and Forestry
University in 2017. Using the contaminating reads we have extracted from the
above datasets, we were able to assemble complete genomes of two novel
coronaviruses which we disclose herein. In light of our findings, we raise
concerns about biosafety protocol breaches, as indicated by our discovery of
multiple dangerous human pathogens in agricultural sequencing laboratories in
Wuhan and Fouzou City, China.
| [
{
"created": "Sun, 4 Apr 2021 03:49:02 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Jun 2021 09:14:00 GMT",
"version": "v2"
}
] | 2021-06-08 | [
[
"Zhang",
"Daoyu",
""
],
[
"Jones",
"Adrian",
""
],
[
"Deigin",
"Yuri",
""
],
[
"Sirotkin",
"Karl",
""
],
[
"Sousa",
"Alejandro",
""
]
] | In this study we document the unexpected discovery of multiple coronaviruses and a BSL-3 pathogen in agricultural cotton and rice sequencing datasets. In particular, we have identified a novel HKU5-related Merbecovirus in a cotton dataset sequenced by the Huazhong Agricultural University in 2017. We have also found an infectious clone sequence containing a novel HKU4-related Merbecovirus related to MERS coronavirus in a rice dataset sequenced by the Huazhong Agricultural University in early 2020. Another HKU5-related Merbecovirus, as well as Japanese encephalitis virus, were identified in a cotton dataset sequenced by the Huazhong Agricultural University in 2018. An HKU3-related Betacoronavirus was found in a Mus musculus sequencing dataset from the Wuhan Institute of Virology in 2017. Finally, a SARS-WIV1-like Betacoronavirus was found in a rice dataset sequenced by the Fujian Agriculture and Forestry University in 2017. Using the contaminating reads we have extracted from the above datasets, we were able to assemble complete genomes of two novel coronaviruses which we disclose herein. In light of our findings, we raise concerns about biosafety protocol breaches, as indicated by our discovery of multiple dangerous human pathogens in agricultural sequencing laboratories in Wuhan and Fouzou City, China. |
0706.0076 | Hiroo Kenzaki | Hiroo Kenzaki, Macoto Kikuchi | Free-Energy Landscape of Kinesin by a Realistic Lattice Model | 15 pages, 4 figures | null | null | null | q-bio.BM | null | Structural fluctuations in the thermal equilibrium of the kinesin motor
domain are studied using a lattice protein model with Go interactions. By means
of the multi-self-overlap ensemble (MSOE) Monte Carlo method and the principal
component analysis (PCA), the free-energy landscape is obtained. It is shown
that kinesins have two subdomains that exhibit partial folding/unfolding at
functionally important regions: one is located around the nucleotide binding
site and the other includes the main microtubule binding site. These subdomains
are consistent with structural variability that was reported recently based on
experimentally-obtained structures. On the other hand, such large structural
fluctuations have not been captured by B-factor or normal mode analyses. Thus,
they are beyond the elastic regime, and it is essential to take into account
chain connectivity for studying the function of kinesins.
| [
{
"created": "Fri, 1 Jun 2007 06:45:42 GMT",
"version": "v1"
}
] | 2007-06-04 | [
[
"Kenzaki",
"Hiroo",
""
],
[
"Kikuchi",
"Macoto",
""
]
] | Structural fluctuations in the thermal equilibrium of the kinesin motor domain are studied using a lattice protein model with Go interactions. By means of the multi-self-overlap ensemble (MSOE) Monte Carlo method and the principal component analysis (PCA), the free-energy landscape is obtained. It is shown that kinesins have two subdomains that exhibit partial folding/unfolding at functionally important regions: one is located around the nucleotide binding site and the other includes the main microtubule binding site. These subdomains are consistent with structural variability that was reported recently based on experimentally-obtained structures. On the other hand, such large structural fluctuations have not been captured by B-factor or normal mode analyses. Thus, they are beyond the elastic regime, and it is essential to take into account chain connectivity for studying the function of kinesins. |
1110.3121 | Yoshiharu Maeno | Yoshiharu Maeno | Transient fluctuation of the prosperity of firms in a network economy | null | null | 10.1016/j.physa.2013.03.046 | null | q-bio.MN cs.CE physics.bio-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The transient fluctuation of the prosperity of firms in a network economy is
investigated with an abstract stochastic model. The model describes the profit
which firms make when they sell materials to a firm which produces a product
and the fixed cost expense to the firms to produce those materials and product.
The formulae for this model are parallel to those for population dynamics. The
swinging changes in the fluctuation in the transient state from the initial
growth to the final steady state are the consequence of a topology-dependent
time trial competition between the profitable interactions and expense. The
firm in a sparse random network economy is more likely to go bankrupt than
expected from the value of the limit of the fluctuation in the steady state,
and there is a risk of failing to reach by far the less fluctuating steady
state.
| [
{
"created": "Fri, 14 Oct 2011 04:36:15 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Jan 2012 05:17:28 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Apr 2012 14:01:47 GMT",
"version": "v3"
},
{
"created": "Wed, 1 Aug 2012 07:05:58 GMT",
"version": "v4"
},
{
"created": "Tue, 5 Feb 2013 07:26:17 GMT",
"version": "v5"
}
] | 2013-07-19 | [
[
"Maeno",
"Yoshiharu",
""
]
] | The transient fluctuation of the prosperity of firms in a network economy is investigated with an abstract stochastic model. The model describes the profit which firms make when they sell materials to a firm which produces a product and the fixed cost expense to the firms to produce those materials and product. The formulae for this model are parallel to those for population dynamics. The swinging changes in the fluctuation in the transient state from the initial growth to the final steady state are the consequence of a topology-dependent time trial competition between the profitable interactions and expense. The firm in a sparse random network economy is more likely to go bankrupt than expected from the value of the limit of the fluctuation in the steady state, and there is a risk of failing to reach by far the less fluctuating steady state. |
2211.00393 | Michael Elmalem Mr | Michael S. Elmalem, Hanna Moody, James K. Ruffle, Michel Thiebaut de
Schotten, Patrick Haggard, Beate Diehl, Parashkev Nachev, and Ashwani Jha | Focal and Connectomic Mapping of Transiently Disrupted Brain Function | null | null | 10.1038/s42003-023-04787-1 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The distributed nature of the neural substrate, and the difficulty of
establishing necessity from correlative data, combine to render the mapping of
brain function a far harder task than it seems. Methods capable of combining
connective anatomical information with focal disruption of function are needed
to disambiguate local from global neural dependence, and critical from merely
coincidental activity. Here we present a comprehensive framework for focal and
connective spatial inference based on sparse disruptive data, and demonstrate
its application in the context of transient direct electrical stimulation of
the human medial frontal wall during the pre-surgical evaluation of patients
with focal epilepsy. Our framework formalizes voxel-wise mass-univariate
inference on sparsely sampled data within the statistical parametric mapping
framework, encompassing the analysis of distributed maps defined by any
criterion of connectivity. Applied to the medial frontal wall, this transient
dysconnectome approach reveals marked discrepancies between local and
distributed associations of major categories of motor and sensory behaviour,
revealing differentiation by remote connectivity to which purely local analysis
is blind. Our framework enables disruptive mapping of the human brain based on
sparsely sampled data with minimal spatial assumptions, good statistical
efficiency, flexible model formulation, and explicit comparison of local and
distributed effects.
| [
{
"created": "Tue, 1 Nov 2022 11:34:10 GMT",
"version": "v1"
}
] | 2023-04-18 | [
[
"Elmalem",
"Michael S.",
""
],
[
"Moody",
"Hanna",
""
],
[
"Ruffle",
"James K.",
""
],
[
"de Schotten",
"Michel Thiebaut",
""
],
[
"Haggard",
"Patrick",
""
],
[
"Diehl",
"Beate",
""
],
[
"Nachev",
"Parashkev",
""
],
[
"Jha",
"Ashwani",
""
]
] | The distributed nature of the neural substrate, and the difficulty of establishing necessity from correlative data, combine to render the mapping of brain function a far harder task than it seems. Methods capable of combining connective anatomical information with focal disruption of function are needed to disambiguate local from global neural dependence, and critical from merely coincidental activity. Here we present a comprehensive framework for focal and connective spatial inference based on sparse disruptive data, and demonstrate its application in the context of transient direct electrical stimulation of the human medial frontal wall during the pre-surgical evaluation of patients with focal epilepsy. Our framework formalizes voxel-wise mass-univariate inference on sparsely sampled data within the statistical parametric mapping framework, encompassing the analysis of distributed maps defined by any criterion of connectivity. Applied to the medial frontal wall, this transient dysconnectome approach reveals marked discrepancies between local and distributed associations of major categories of motor and sensory behaviour, revealing differentiation by remote connectivity to which purely local analysis is blind. Our framework enables disruptive mapping of the human brain based on sparsely sampled data with minimal spatial assumptions, good statistical efficiency, flexible model formulation, and explicit comparison of local and distributed effects. |
q-bio/0511010 | Dietrich Stauffer | Stanislaw Cebrat, Andrzej Pekalski, Fabian Scharf | Monte Carlo simulations of the inside-intron recombination | 12 pages inc. 5 Figs., for Int. J. Mod. Phys. C 17, issue 4 (2006) | null | 10.1142/S0129183106008984 | null | q-bio.PE | null | Biological genomes are divided into coding and non-coding regions. Introns
are non-coding parts within genes, while the remaining non-coding parts are
intergenic sequences. To study the evolutionary significance of recombination
inside introns we have used two models based on the Monte Carlo method. In our
computer simulations we have implemented the internal structure of genes by
declaring the probability of recombination between exons. One situation when
inside-intron recombination is advantageous is recovering functional genes by
combining proper exons dispersed in the genetic pool of the population after a
long period without selection for the function of the gene. Populations have to
pass through the bottleneck, then. These events are rather rare and we have
expected that there should be other phenomena giving profits from the
inside-intron recombination. In fact we have found that inside-intron
recombination is advantageous only in the case when after recombination,
besides the recombinant forms, parental haplotypes are available and selection
is set already on gametes.
| [
{
"created": "Thu, 10 Nov 2005 12:45:55 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Cebrat",
"Stanislaw",
""
],
[
"Pekalski",
"Andrzej",
""
],
[
"Scharf",
"Fabian",
""
]
] | Biological genomes are divided into coding and non-coding regions. Introns are non-coding parts within genes, while the remaining non-coding parts are intergenic sequences. To study the evolutionary significance of recombination inside introns we have used two models based on the Monte Carlo method. In our computer simulations we have implemented the internal structure of genes by declaring the probability of recombination between exons. One situation when inside-intron recombination is advantageous is recovering functional genes by combining proper exons dispersed in the genetic pool of the population after a long period without selection for the function of the gene. Populations have to pass through the bottleneck, then. These events are rather rare and we have expected that there should be other phenomena giving profits from the inside-intron recombination. In fact we have found that inside-intron recombination is advantageous only in the case when after recombination, besides the recombinant forms, parental haplotypes are available and selection is set already on gametes. |
1409.2864 | Michael Lawrence | Michael Lawrence, Martin Morgan | Scalable Genomics with R and Bioconductor | Published in at http://dx.doi.org/10.1214/14-STS476 the Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org) | Statistical Science 2014, Vol. 29, No. 2, 214-226 | 10.1214/14-STS476 | IMS-STS-STS476 | q-bio.GN cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reviews strategies for solving problems encountered when analyzing
large genomic data sets and describes the implementation of those strategies in
R by packages from the Bioconductor project. We treat the scalable processing,
summarization and visualization of big genomic data. The general ideas are well
established and include restrictive queries, compression, iteration and
parallel computing. We demonstrate the strategies by applying Bioconductor
packages to the detection and analysis of genetic variants from a whole genome
sequencing experiment.
| [
{
"created": "Tue, 9 Sep 2014 10:47:37 GMT",
"version": "v1"
}
] | 2014-09-11 | [
[
"Lawrence",
"Michael",
""
],
[
"Morgan",
"Martin",
""
]
] | This paper reviews strategies for solving problems encountered when analyzing large genomic data sets and describes the implementation of those strategies in R by packages from the Bioconductor project. We treat the scalable processing, summarization and visualization of big genomic data. The general ideas are well established and include restrictive queries, compression, iteration and parallel computing. We demonstrate the strategies by applying Bioconductor packages to the detection and analysis of genetic variants from a whole genome sequencing experiment. |
0709.4344 | Bernat Corominas-Murtra BCM | Bernat Corominas-Murtra, Sergi Valverde and Ricard V. Sol\'e | Emergence of Scale-Free Syntax Networks | Revised version with new cites and new title. 10 pages, 9 figures.
Submitted to: Journal of Theoretical Biology | null | null | null | q-bio.NC | null | The evolution of human language allowed the efficient propagation of
nongenetic information, thus creating a new form of evolutionary change.
Language development in children offers the opportunity of exploring the
emergence of such a complex communication system and provides a window to
understanding the transition from protolanguage to language. Here we present
the first analysis of the emergence of syntax in terms of complex networks. A
previously unreported, sharp transition is shown to occur around two years of
age from a (pre-syntactic) tree-like structure to a scale-free, small world
syntax network. The nature of such transition supports the presence of an
innate component pervading the emergence of full syntax. This observation is
difficult to interpret in terms of any simple model of network growth, thus
suggesting that some internal, perhaps innate component was at work. We explore
this problem by using a minimal model that is able to capture several
statistical traits. Our results provide evidence for adaptive traits, but it
also indicates that some key features of syntax might actually correspond to
non-adaptive phenomena.
| [
{
"created": "Thu, 27 Sep 2007 09:45:36 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Oct 2007 10:30:14 GMT",
"version": "v2"
}
] | 2007-10-01 | [
[
"Corominas-Murtra",
"Bernat",
""
],
[
"Valverde",
"Sergi",
""
],
[
"Solé",
"Ricard V.",
""
]
] | The evolution of human language allowed the efficient propagation of nongenetic information, thus creating a new form of evolutionary change. Language development in children offers the opportunity of exploring the emergence of such a complex communication system and provides a window to understanding the transition from protolanguage to language. Here we present the first analysis of the emergence of syntax in terms of complex networks. A previously unreported, sharp transition is shown to occur around two years of age from a (pre-syntactic) tree-like structure to a scale-free, small world syntax network. The nature of such transition supports the presence of an innate component pervading the emergence of full syntax. This observation is difficult to interpret in terms of any simple model of network growth, thus suggesting that some internal, perhaps innate component was at work. We explore this problem by using a minimal model that is able to capture several statistical traits. Our results provide evidence for adaptive traits, but they also indicate that some key features of syntax might actually correspond to non-adaptive phenomena. |
1911.06775 | Rich Pang | Rich Pang | A crossover code for high-dimensional composition | Presented at NeurIPS 2019 Workshop on Context and Compositionality in
Biological and Artificial Neural Systems | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | We present a novel way to encode compositional information in
high-dimensional (HD) vectors. Inspired by chromosomal crossover, random HD
vectors are recursively interwoven, with a fraction of one vector's components
masked out and replaced by those from another using a context-dependent mask.
Unlike many HD computing schemes, "crossover" codes highly overlap with their
base elements' and sub-structures' codes without sacrificing relational
information, allowing fast element readout and decoding by greedy
reconstruction. Crossover is mathematically tractable and has several
properties desirable for robust, flexible representation.
| [
{
"created": "Fri, 15 Nov 2019 17:39:08 GMT",
"version": "v1"
}
] | 2019-11-18 | [
[
"Pang",
"Rich",
""
]
] | We present a novel way to encode compositional information in high-dimensional (HD) vectors. Inspired by chromosomal crossover, random HD vectors are recursively interwoven, with a fraction of one vector's components masked out and replaced by those from another using a context-dependent mask. Unlike many HD computing schemes, "crossover" codes highly overlap with their base elements' and sub-structures' codes without sacrificing relational information, allowing fast element readout and decoding by greedy reconstruction. Crossover is mathematically tractable and has several properties desirable for robust, flexible representation. |
2207.09044 | Mohammad Reza Yousefi | Aboozar Moradi, Mohammad Reza Yousefi | Effects of different tumors on the steady-state heat distribution in the
human eye using the 3D finite element method | 15 pages, 6 Figures, 5 Tables | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | In this paper, a three-dimensional finite element method is developed to
simulate the heat distribution in the human eye with different types of tumors
to understand the effect of tumors on heat distribution in the human eye. The
human eye is modeled as a composition of several homogeneous regions and the
physical and thermal properties of each region used in this study are more
accurate than the models used in previous studies. By considering the exact and
complicated geometry of all parts, the finite element method is a proper
solution for solving the heat equation inside the human eye. There are two
kinds of boundary conditions called the radiation condition and the Robin
condition. The radiation boundary condition is modeled as a Robin boundary
condition. For modeling eye tumors and their effect on heat distribution, we
need information about eye tumor properties such as heat conductivity, density,
specific heat, and so on. Owing to the lack of accurately reported information
about eye tumor properties, the properties of other types of tumors, such as
skin and bowel tumors, are used. Simulation results with different parameters of eye
tumors show the effect of eye tumors on heat distribution in the human eye.
| [
{
"created": "Tue, 19 Jul 2022 03:29:43 GMT",
"version": "v1"
}
] | 2022-07-20 | [
[
"Moradi",
"Aboozar",
""
],
[
"Yousefi",
"Mohammad Reza",
""
]
] | In this paper, a three-dimensional finite element method is developed to simulate the heat distribution in the human eye with different types of tumors to understand the effect of tumors on heat distribution in the human eye. The human eye is modeled as a composition of several homogeneous regions and the physical and thermal properties of each region used in this study are more accurate than the models used in previous studies. By considering the exact and complicated geometry of all parts, the finite element method is a proper solution for solving the heat equation inside the human eye. There are two kinds of boundary conditions called the radiation condition and the Robin condition. The radiation boundary condition is modeled as a Robin boundary condition. For modeling eye tumors and their effect on heat distribution, we need information about eye tumor properties such as heat conductivity, density, specific heat, and so on. Owing to the lack of accurately reported information about eye tumor properties, the properties of other types of tumors, such as skin and bowel tumors, are used. Simulation results with different parameters of eye tumors show the effect of eye tumors on heat distribution in the human eye. |
1707.03046 | Funda Yildirim | Funda Yildirim, Joana Carvalho, Frans W. Cornelissen | A second-order orientation-contrast stimulus for
population-receptive-field-based retinotopic mapping | Yildirim, F., et al., A second-order orientation-contrast stimulus
for population-receptive-field-based retinotopic mapping, NeuroImage (2017) | null | 10.1016/j.neuroimage.2017.06.073 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual field or retinotopic mapping is one of the most frequently used
paradigms in fMRI. It uses activity evoked by position-varying high luminance
contrast visual patterns presented throughout the visual field for determining
the spatial organization of cortical visual areas. While the advantage of using
high luminance contrast is that it tends to drive a wide range of neural
populations - thus resulting in high signal-to-noise BOLD responses - this may
also be a limitation, especially for approaches that attempt to squeeze more
information out of the BOLD response, such as population receptive field (pRF)
mapping. In that case, more selective stimulation of a subset of neurons -
despite reduced signals - could result in better characterization of pRF
properties. Here, we used a second-order stimulus based on local differences in
orientation texture - to which we refer as orientation contrast - to perform
retinotopic mapping. Participants in our experiment viewed arrays of Gabor
patches composed of a foreground (a bar) and a background. These could only be
distinguished on the basis of a difference in patch orientation. In our
analyses, we compare the pRF properties obtained using this new orientation
contrast-based retinotopy (OCR) to those obtained using classic luminance
contrast-based retinotopy (LCR). Specifically, in higher order cortical visual
areas such as LO, our novel approach resulted in non-trivial reductions in
estimated population receptive field size of around 30%. We discuss how OCR -
by limiting receptive field scatter and reducing BOLD displacement - may result
in more accurate pRF localization as well. We conclude that using our approach,
it is possible to selectively target particular neuronal populations, opening
the way to use pRF modeling to dissect the response properties of more
clearly-defined neuronal populations in different visual areas.
| [
{
"created": "Mon, 10 Jul 2017 20:12:22 GMT",
"version": "v1"
}
] | 2017-07-12 | [
[
"Yildirim",
"Funda",
""
],
[
"Carvalho",
"Joana",
""
],
[
"Cornelissen",
"Frans W.",
""
]
] | Visual field or retinotopic mapping is one of the most frequently used paradigms in fMRI. It uses activity evoked by position-varying high luminance contrast visual patterns presented throughout the visual field for determining the spatial organization of cortical visual areas. While the advantage of using high luminance contrast is that it tends to drive a wide range of neural populations - thus resulting in high signal-to-noise BOLD responses - this may also be a limitation, especially for approaches that attempt to squeeze more information out of the BOLD response, such as population receptive field (pRF) mapping. In that case, more selective stimulation of a subset of neurons - despite reduced signals - could result in better characterization of pRF properties. Here, we used a second-order stimulus based on local differences in orientation texture - to which we refer as orientation contrast - to perform retinotopic mapping. Participants in our experiment viewed arrays of Gabor patches composed of a foreground (a bar) and a background. These could only be distinguished on the basis of a difference in patch orientation. In our analyses, we compare the pRF properties obtained using this new orientation contrast-based retinotopy (OCR) to those obtained using classic luminance contrast-based retinotopy (LCR). Specifically, in higher order cortical visual areas such as LO, our novel approach resulted in non-trivial reductions in estimated population receptive field size of around 30%. We discuss how OCR - by limiting receptive field scatter and reducing BOLD displacement - may result in more accurate pRF localization as well. We conclude that using our approach, it is possible to selectively target particular neuronal populations, opening the way to use pRF modeling to dissect the response properties of more clearly-defined neuronal populations in different visual areas. |
1709.09391 | Niv DeMalach | Niv DeMalach and Ronen Kadmon | Seed mass diversity along resource gradients: the role of allometric
growth rate and size-asymmetric competition | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The large variation in seed mass among species inspired a vast array of
theoretical and empirical research attempting to explain this variation. So
far, seed mass variation was investigated by two classes of studies: one class
focuses on species varying in seed mass within communities, while the second
focuses on variation between communities, most often with respect to resource
gradients. Here, we develop a model capable of simultaneously explaining
variation in seed mass within and between communities. The model describes
resource competition (for both soil and light resources) in annual communities
and incorporates two fundamental aspects: light asymmetry (higher light
acquisition per unit biomass for larger individuals) and growth allometry
(negative dependency of relative growth rate on plant biomass). Results show
that both factors are critical in determining patterns of seed mass variation.
In general, growth allometry increases the reproductive success of small-seeded
species while light asymmetry increases the reproductive success of
large-seeded species. Increasing availability of soil resources increases light
competition, thereby increasing the reproductive success of large-seeded
species and ultimately the community (weighted) mean seed mass. An unexpected
prediction of the model is that maximum variation in community seed mass (a
measure of functional diversity) occurs under intermediate levels of soil
resources. Extensions of the model incorporating size-dependent seed survival
and disturbance also show patterns consistent with empirical observations.
These overall results suggest that the mechanisms captured by the model are
important in determining patterns of species and functional diversity.
| [
{
"created": "Wed, 27 Sep 2017 08:51:38 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2017 07:13:04 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Jul 2018 09:06:43 GMT",
"version": "v3"
}
] | 2018-07-10 | [
[
"DeMalach",
"Niv",
""
],
[
"Kadmon",
"Ronen",
""
]
] | The large variation in seed mass among species inspired a vast array of theoretical and empirical research attempting to explain this variation. So far, seed mass variation was investigated by two classes of studies: one class focuses on species varying in seed mass within communities, while the second focuses on variation between communities, most often with respect to resource gradients. Here, we develop a model capable of simultaneously explaining variation in seed mass within and between communities. The model describes resource competition (for both soil and light resources) in annual communities and incorporates two fundamental aspects: light asymmetry (higher light acquisition per unit biomass for larger individuals) and growth allometry (negative dependency of relative growth rate on plant biomass). Results show that both factors are critical in determining patterns of seed mass variation. In general, growth allometry increases the reproductive success of small-seeded species while light asymmetry increases the reproductive success of large-seeded species. Increasing availability of soil resources increases light competition, thereby increasing the reproductive success of large-seeded species and ultimately the community (weighted) mean seed mass. An unexpected prediction of the model is that maximum variation in community seed mass (a measure of functional diversity) occurs under intermediate levels of soil resources. Extensions of the model incorporating size-dependent seed survival and disturbance also show patterns consistent with empirical observations. These overall results suggest that the mechanisms captured by the model are important in determining patterns of species and functional diversity. |
2109.09887 | Seung Ki Baek | Yohsuke Murase, Minjae Kim, and Seung Ki Baek | Social norms in indirect reciprocity with ternary reputations | 18 pages, 6 figures | Sci. Rep. 12, 455 (2022) | 10.1038/s41598-021-04033-w | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indirect reciprocity is a key mechanism that promotes cooperation in social
dilemmas by means of reputation. Although it has been a common practice to
represent reputations by binary values, either `good' or `bad', such a
dichotomy is a crude approximation considering the complexity of reality. In
this work, we studied norms with three different reputations, i.e., `good',
`neutral', and `bad'. Through massive supercomputing for handling more than
thirty billion possibilities, we fully identified which norms achieve
cooperation and possess evolutionary stability against behavioural mutants. By
systematically categorizing all these norms according to their behaviours, we
found similarities and dissimilarities to their binary-reputation counterpart,
the leading eight. We obtained four rules that should be satisfied by the
successful norms, and the behaviour of the leading eight can be understood as a
special case of these rules. A couple of norms that show counter-intuitive
behaviours are also presented. We believe the findings are also useful for
designing successful norms with more general reputation systems.
| [
{
"created": "Tue, 21 Sep 2021 00:07:33 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Sep 2021 10:14:50 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jan 2022 03:40:30 GMT",
"version": "v3"
}
] | 2022-01-17 | [
[
"Murase",
"Yohsuke",
""
],
[
"Kim",
"Minjae",
""
],
[
"Baek",
"Seung Ki",
""
]
] | Indirect reciprocity is a key mechanism that promotes cooperation in social dilemmas by means of reputation. Although it has been a common practice to represent reputations by binary values, either `good' or `bad', such a dichotomy is a crude approximation considering the complexity of reality. In this work, we studied norms with three different reputations, i.e., `good', `neutral', and `bad'. Through massive supercomputing for handling more than thirty billion possibilities, we fully identified which norms achieve cooperation and possess evolutionary stability against behavioural mutants. By systematically categorizing all these norms according to their behaviours, we found similarities and dissimilarities to their binary-reputation counterpart, the leading eight. We obtained four rules that should be satisfied by the successful norms, and the behaviour of the leading eight can be understood as a special case of these rules. A couple of norms that show counter-intuitive behaviours are also presented. We believe the findings are also useful for designing successful norms with more general reputation systems. |
2108.09217 | Minoo Ashoori | Minoo Ashoori, Eugene M. Dempsey, Fiona B. McDonald, John M. O'Toole | Sparse-Denoising Methods for Extracting Desaturation Transients in
Cerebral Oxygenation Signals of Preterm Infants | null | null | null | null | q-bio.QM physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Preterm infants are at high risk of developing brain injury in the first days
of life as a consequence of poor cerebral oxygen delivery. Near-infrared
spectroscopy (NIRS) is an established technology developed to monitor regional
tissue oxygenation. Detailed waveform analysis of the cerebral NIRS signal
could improve the clinical utility of this method in accurately predicting
brain injury. Frequent transient cerebral oxygen desaturations are commonly
observed in extremely preterm infants, yet their clinical significance remains
unclear. The aim of this study was to examine and compare the performance of
two distinct approaches in isolating and extracting transient deflections
within NIRS signals. We optimized three different simultaneous low-pass
filtering and total variation denoising (LPF_TVD) methods and compared their
performance with a recently proposed method that uses singular-spectrum
analysis and the discrete cosine transform (SSA_DCT). Parameters for the
LPF_TVD methods were optimized over a grid search using synthetic NIRS-like
signals. The SSA_DCT method was modified with a post-processing procedure to
increase sparsity in the extracted components. Our analysis, using a synthetic
NIRS-like dataset, showed that a LPF_TVD method outperformed the modified
SSA_DCT method: median mean-squared error of 0.97 (95% CI: 0.86 to 1.07) was
lower for the LPF_TVD method compared to the modified SSA_DCT method of 1.48
(95% CI: 1.33 to 1.63), P<0.001. The dual low-pass filter and total variation
denoising methods are considerably more computationally efficient, by 3 to 4
orders of magnitude, than the SSA_DCT method. More research is needed to
examine the efficacy of these methods in extracting oxygen desaturation in real
NIRS signals.
| [
{
"created": "Fri, 20 Aug 2021 15:22:11 GMT",
"version": "v1"
}
] | 2021-08-23 | [
[
"Ashoori",
"Minoo",
""
],
[
"Dempsey",
"Eugene M.",
""
],
[
"McDonald",
"Fiona B.",
""
],
[
"O'Toole",
"John M.",
""
]
] | Preterm infants are at high risk of developing brain injury in the first days of life as a consequence of poor cerebral oxygen delivery. Near-infrared spectroscopy (NIRS) is an established technology developed to monitor regional tissue oxygenation. Detailed waveform analysis of the cerebral NIRS signal could improve the clinical utility of this method in accurately predicting brain injury. Frequent transient cerebral oxygen desaturations are commonly observed in extremely preterm infants, yet their clinical significance remains unclear. The aim of this study was to examine and compare the performance of two distinct approaches in isolating and extracting transient deflections within NIRS signals. We optimized three different simultaneous low-pass filtering and total variation denoising (LPF_TVD) methods and compared their performance with a recently proposed method that uses singular-spectrum analysis and the discrete cosine transform (SSA_DCT). Parameters for the LPF_TVD methods were optimized over a grid search using synthetic NIRS-like signals. The SSA_DCT method was modified with a post-processing procedure to increase sparsity in the extracted components. Our analysis, using a synthetic NIRS-like dataset, showed that a LPF_TVD method outperformed the modified SSA_DCT method: median mean-squared error of 0.97 (95% CI: 0.86 to 1.07) was lower for the LPF_TVD method compared to the modified SSA_DCT method of 1.48 (95% CI: 1.33 to 1.63), P<0.001. The dual low-pass filter and total variation denoising methods are considerably more computationally efficient, by 3 to 4 orders of magnitude, than the SSA_DCT method. More research is needed to examine the efficacy of these methods in extracting oxygen desaturation in real NIRS signals. |
1210.5299 | Nen Saito | Nen Saito, Shuji Ishihara, Kunihiko Kaneko | The Baldwin effect under multi-peaked fitness landscapes: Phenotypic
fluctuation accelerates evolutionary rate | 12 pages, 10 figures | Phys. Rev. E 87, 052701 (2013) | 10.1103/PhysRevE.87.052701 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phenotypic fluctuations and plasticity can generally affect the course of
evolution, a process known as the Baldwin effect. Several studies have recast
this effect and claimed that phenotypic plasticity accelerates evolutionary
rate (the Baldwin expediting effect); however, the validity of this claim is
still controversial. In this study, we investigate the evolutionary
population dynamics of a quantitative genetic model under a multi-peaked
fitness landscape, in order to evaluate the validity of the effect. We provide
analytical expressions for the evolutionary rate and average population
fitness. Our results indicate that under a multi-peaked fitness landscape,
phenotypic fluctuation always accelerates evolutionary rate, but it decreases
the average fitness. As an extreme case of the trade-off between the rate of
evolution and average fitness, phenotypic fluctuation is shown to accelerate
the error catastrophe, in which a population fails to sustain a high-fitness
peak. In the context of our findings, we discuss the role of phenotypic
plasticity in adaptive evolution.
| [
{
"created": "Fri, 19 Oct 2012 02:32:02 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2013 09:48:21 GMT",
"version": "v2"
}
] | 2013-05-30 | [
[
"Saito",
"Nen",
""
],
[
"Ishihara",
"Shuji",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | Phenotypic fluctuations and plasticity can generally affect the course of evolution, a process known as the Baldwin effect. Several studies have recast this effect and claimed that phenotypic plasticity accelerates evolutionary rate (the Baldwin expediting effect); however, the validity of this claim is still controversial. In this study, we investigate the evolutionary population dynamics of a quantitative genetic model under a multi-peaked fitness landscape, in order to evaluate the validity of the effect. We provide analytical expressions for the evolutionary rate and average population fitness. Our results indicate that under a multi-peaked fitness landscape, phenotypic fluctuation always accelerates evolutionary rate, but it decreases the average fitness. As an extreme case of the trade-off between the rate of evolution and average fitness, phenotypic fluctuation is shown to accelerate the error catastrophe, in which a population fails to sustain a high-fitness peak. In the context of our findings, we discuss the role of phenotypic plasticity in adaptive evolution. |
1411.3383 | Nobu C. Shirai | Nobu C. Shirai and Macoto Kikuchi | The interplay of intrinsic disorder and macromolecular crowding on
{\alpha}-synuclein fibril formation | 11 pages, 14 figures | J. Chem. Phys. 144, 055101 (2016) | 10.1063/1.4941054 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | {\alpha}-synuclein ({\alpha}-syn) is an intrinsically disordered protein
which is considered to be one of the causes of Parkinson's disease. This
protein forms amyloid fibrils when in a highly concentrated solution. The
fibril formation of {\alpha}-syn is induced not only by increases in
{\alpha}-syn concentration but also by macromolecular crowding. In order to
investigate the coupled effect of the intrinsic disorder of {\alpha}-syn and
macromolecular crowding, we construct a lattice gas model of {\alpha}-syn in
contact with a crowding agent reservoir based on statistical mechanics. The
main assumption is that {\alpha}-syn can be expressed as coarse-grained
particles with internal states coupled with effective volume; and disordered
states are modeled by larger particles with larger internal entropy than other
states. Thanks to the simplicity of the model, we can exactly calculate the
number of conformations of crowding agents, and this enables us to prove that
the original grand canonical ensemble with a crowding agent reservoir is
mathematically equivalent to a canonical ensemble without crowding agents. In
this expression, the effect of macromolecular crowding is absorbed in the
internal entropy of disordered states; it is clearly shown that the crowding
effect reduces the internal entropy. Based on Monte Carlo simulation, we
provide scenarios of crowding-induced fibril formation. We also discuss the
recent controversy over the existence of helically folded tetramers of
{\alpha}-syn, and suggest that macromolecular crowding is the key to resolving
the controversy.
| [
{
"created": "Wed, 12 Nov 2014 22:37:48 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jun 2015 10:12:41 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Jan 2016 12:32:12 GMT",
"version": "v3"
},
{
"created": "Sat, 6 Feb 2016 11:58:40 GMT",
"version": "v4"
}
] | 2016-02-09 | [
[
"Shirai",
"Nobu C.",
""
],
[
"Kikuchi",
"Macoto",
""
]
] | {\alpha}-synuclein ({\alpha}-syn) is an intrinsically disordered protein which is considered to be one of the causes of Parkinson's disease. This protein forms amyloid fibrils when in a highly concentrated solution. The fibril formation of {\alpha}-syn is induced not only by increases in {\alpha}-syn concentration but also by macromolecular crowding. In order to investigate the coupled effect of the intrinsic disorder of {\alpha}-syn and macromolecular crowding, we construct a lattice gas model of {\alpha}-syn in contact with a crowding agent reservoir based on statistical mechanics. The main assumption is that {\alpha}-syn can be expressed as coarse-grained particles with internal states coupled with effective volume; and disordered states are modeled by larger particles with larger internal entropy than other states. Thanks to the simplicity of the model, we can exactly calculate the number of conformations of crowding agents, and this enables us to prove that the original grand canonical ensemble with a crowding agent reservoir is mathematically equivalent to a canonical ensemble without crowding agents. In this expression, the effect of macromolecular crowding is absorbed in the internal entropy of disordered states; it is clearly shown that the crowding effect reduces the internal entropy. Based on Monte Carlo simulation, we provide scenarios of crowding-induced fibril formation. We also discuss the recent controversy over the existence of helically folded tetramers of {\alpha}-syn, and suggest that macromolecular crowding is the key to resolving the controversy. |
1910.01689 | Jordan Guerguiev | Jordan Guerguiev, Konrad P. Kording, Blake A. Richards | Spike-based causal inference for weight alignment | null | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | In artificial neural networks trained with gradient descent, the weights used
for processing stimuli are also used during backward passes to calculate
gradients. For the real brain to approximate gradients, gradient information
would have to be propagated separately, such that one set of synaptic weights
is used for processing and another set is used for backward passes. This
produces the so-called "weight transport problem" for biological models of
learning, where the backward weights used to calculate gradients need to mirror
the forward weights used to process stimuli. This weight transport problem has
been considered so hard that popular proposals for biological learning assume
that the backward weights are simply random, as in the feedback alignment
algorithm. However, such random weights do not appear to work well for large
networks. Here we show how the discontinuity introduced in a spiking system can
lead to a solution to this problem. The resulting algorithm is a special case
of an estimator used for causal inference in econometrics, regression
discontinuity design. We show empirically that this algorithm rapidly makes the
backward weights approximate the forward weights. As the backward weights
become correct, this improves learning performance over feedback alignment on
tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC. Our results demonstrate
that a simple learning rule in a spiking network can allow neurons to produce
the right backward connections and thus solve the weight transport problem.
| [
{
"created": "Thu, 3 Oct 2019 19:07:58 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Feb 2020 01:05:15 GMT",
"version": "v2"
}
] | 2020-02-04 | [
[
"Guerguiev",
"Jordan",
""
],
[
"Kording",
"Konrad P.",
""
],
[
"Richards",
"Blake A.",
""
]
] | In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem. |
2105.05221 | Andrea Raffo | Andrea Raffo, Ulderico Fugacci, Silvia Biasotti, Walter Rocchia,
Yonghuai Liu, Ekpo Otu, Reyer Zwiggelaar, David Hunter, Evangelia I.
Zacharaki, Eleftheria Psatha, Dimitrios Laskos, Gerasimos Arvanitis,
Konstantinos Moustakas, Tunde Aderinwale, Charles Christoffer, Woong-Hee
Shin, Daisuke Kihara, Andrea Giachetti, Huu-Nghia Nguyen, Tuan-Duy Nguyen,
Vinh-Thuyen Nguyen-Truong, Danh Le-Thanh, Hai-Dang Nguyen, Minh-Triet Tran | SHREC 2021: Retrieval and classification of protein surfaces equipped
with physical and chemical properties | null | Computers & Graphics 99 (2021) 1-21 | 10.1016/j.cag.2021.06.010 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the methods that have participated in the SHREC 2021
contest on retrieval and classification of protein surfaces on the basis of
their geometry and physicochemical properties. The goal of the contest is to
assess the capability of different computational approaches to identify
different conformations of the same protein, or the presence of common
sub-parts, starting from a set of molecular surfaces. We addressed two
problems: defining the similarity solely based on the surface geometry or with
the inclusion of physicochemical information, such as electrostatic potential,
amino acid hydrophobicity, and the presence of hydrogen bond donors and
acceptors. Retrieval and classification performances, with respect to the
single protein or the existence of common sub-sequences, are analysed according
to a number of information retrieval indicators.
| [
{
"created": "Tue, 11 May 2021 17:37:41 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Jun 2021 17:57:35 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Aug 2021 20:45:01 GMT",
"version": "v3"
},
{
"created": "Tue, 7 Sep 2021 09:08:58 GMT",
"version": "v4"
},
{
"created": "Sun, 17 Oct 2021 10:49:11 GMT",
"version": "v5"
}
] | 2021-10-19 | [
[
"Raffo",
"Andrea",
""
],
[
"Fugacci",
"Ulderico",
""
],
[
"Biasotti",
"Silvia",
""
],
[
"Rocchia",
"Walter",
""
],
[
"Liu",
"Yonghuai",
""
],
[
"Otu",
"Ekpo",
""
],
[
"Zwiggelaar",
"Reyer",
""
],
[
"Hunter",
"David",
""
],
[
"Zacharaki",
"Evangelia I.",
""
],
[
"Psatha",
"Eleftheria",
""
],
[
"Laskos",
"Dimitrios",
""
],
[
"Arvanitis",
"Gerasimos",
""
],
[
"Moustakas",
"Konstantinos",
""
],
[
"Aderinwale",
"Tunde",
""
],
[
"Christoffer",
"Charles",
""
],
[
"Shin",
"Woong-Hee",
""
],
[
"Kihara",
"Daisuke",
""
],
[
"Giachetti",
"Andrea",
""
],
[
"Nguyen",
"Huu-Nghia",
""
],
[
"Nguyen",
"Tuan-Duy",
""
],
[
"Nguyen-Truong",
"Vinh-Thuyen",
""
],
[
"Le-Thanh",
"Danh",
""
],
[
"Nguyen",
"Hai-Dang",
""
],
[
"Tran",
"Minh-Triet",
""
]
] | This paper presents the methods that have participated in the SHREC 2021 contest on retrieval and classification of protein surfaces on the basis of their geometry and physicochemical properties. The goal of the contest is to assess the capability of different computational approaches to identify different conformations of the same protein, or the presence of common sub-parts, starting from a set of molecular surfaces. We addressed two problems: defining the similarity solely based on the surface geometry or with the inclusion of physicochemical information, such as electrostatic potential, amino acid hydrophobicity, and the presence of hydrogen bond donors and acceptors. Retrieval and classification performances, with respect to the single protein or the existence of common sub-sequences, are analysed according to a number of information retrieval indicators. |
2310.15173 | Dora Hermes | Tal Pal Attia, Kay Robbins, S\'andor Beniczky, Jorge Bosch-Bayard,
Arnaud Delorme, Brian Nils Lundstrom, Christine Rogers, Stefan Rampp, Pedro
Valdes-Sosa, Dung Truong, Greg Worrell, Scott Makeig, Dora Hermes | Hierarchical Event Descriptor library schema for EEG data annotation | 22 pages, 5 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Standardizing terminology to describe electrophysiological events can improve
both clinical care and computational research. Sharing data enriched by such
standardized terminology can support advances in neuroscientific data
exploration, from single-subject to mega-analysis. Machine readability of
electrophysiological event annotations is essential for performing such
analyses efficiently across software tools and packages. Hierarchical Event
Descriptors (HED) provide a framework for describing events in neuroscience
experiments. HED library schemas extend the standard HED schema vocabulary to
include specialized vocabularies, such as standardized clinical terms for
electrophysiological events. The Standardized Computer-based Organized
Reporting of EEG (SCORE) defines terms for annotating EEG events, including
artifacts. This study makes SCORE machine-readable by incorporating it into a
HED library schema. We demonstrate the use of the HED-SCORE library schema to
annotate events in example EEG data stored in Brain Imaging Data Structure
(BIDS) format. Clinicians and researchers worldwide can now use the HED-SCORE
library schema to annotate and then compute on electrophysiological data
obtained from the human brain.
| [
{
"created": "Wed, 4 Oct 2023 13:51:08 GMT",
"version": "v1"
}
] | 2023-10-25 | [
[
"Attia",
"Tal Pal",
""
],
[
"Robbins",
"Kay",
""
],
[
"Beniczky",
"Sándor",
""
],
[
"Bosch-Bayard",
"Jorge",
""
],
[
"Delorme",
"Arnaud",
""
],
[
"Lundstrom",
"Brian Nils",
""
],
[
"Rogers",
"Christine",
""
],
[
"Rampp",
"Stefan",
""
],
[
"Valdes-Sosa",
"Pedro",
""
],
[
"Truong",
"Dung",
""
],
[
"Worrell",
"Greg",
""
],
[
"Makeig",
"Scott",
""
],
[
"Hermes",
"Dora",
""
]
] | Standardizing terminology to describe electrophysiological events can improve both clinical care and computational research. Sharing data enriched by such standardized terminology can support advances in neuroscientific data exploration, from single-subject to mega-analysis. Machine readability of electrophysiological event annotations is essential for performing such analyses efficiently across software tools and packages. Hierarchical Event Descriptors (HED) provide a framework for describing events in neuroscience experiments. HED library schemas extend the standard HED schema vocabulary to include specialized vocabularies, such as standardized clinical terms for electrophysiological events. The Standardized Computer-based Organized Reporting of EEG (SCORE) defines terms for annotating EEG events, including artifacts. This study makes SCORE machine-readable by incorporating it into a HED library schema. We demonstrate the use of the HED-SCORE library schema to annotate events in example EEG data stored in Brain Imaging Data Structure (BIDS) format. Clinicians and researchers worldwide can now use the HED-SCORE library schema to annotate and then compute on electrophysiological data obtained from the human brain. |
2101.09352 | Raphael Huser | Matheus B. Guerrero, Rapha\"el Huser and Hernando Ombao | Conex-Connect: Learning Patterns in Extremal Brain Connectivity From
Multi-Channel EEG Data | null | null | null | null | q-bio.NC stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epilepsy is a chronic neurological disorder affecting more than 50 million
people globally. An epileptic seizure acts like a temporary shock to the
neuronal system, disrupting normal electrical activity in the brain. Epilepsy
is frequently diagnosed with electroencephalograms (EEGs). Current methods
study the time-varying spectra and coherence but do not directly model changes
in extreme behavior. Thus, we propose a new approach to characterize brain
connectivity based on the joint tail behavior of the EEGs. Our proposed method,
the conditional extremal dependence for brain connectivity (Conex-Connect), is
a pioneering approach that links the association between extreme values of
higher oscillations at a reference channel with the other brain network
channels. Using the Conex-Connect method, we discover changes in the extremal
dependence driven by the activity at the foci of the epileptic seizure. Our
model-based approach reveals that, pre-seizure, the dependence is notably
stable for all channels when conditioning on extreme values of the focal
seizure area. Post-seizure, by contrast, the dependence between channels is
weaker, and dependence patterns are more "chaotic". Moreover, in terms of
spectral decomposition, we find that high values of the high-frequency
Gamma-band are the most relevant features to explain the conditional extremal
dependence of brain connectivity.
| [
{
"created": "Sun, 3 Jan 2021 18:53:05 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Guerrero",
"Matheus B.",
""
],
[
"Huser",
"Raphaël",
""
],
[
"Ombao",
"Hernando",
""
]
] | Epilepsy is a chronic neurological disorder affecting more than 50 million people globally. An epileptic seizure acts like a temporary shock to the neuronal system, disrupting normal electrical activity in the brain. Epilepsy is frequently diagnosed with electroencephalograms (EEGs). Current methods study the time-varying spectra and coherence but do not directly model changes in extreme behavior. Thus, we propose a new approach to characterize brain connectivity based on the joint tail behavior of the EEGs. Our proposed method, the conditional extremal dependence for brain connectivity (Conex-Connect), is a pioneering approach that links the association between extreme values of higher oscillations at a reference channel with the other brain network channels. Using the Conex-Connect method, we discover changes in the extremal dependence driven by the activity at the foci of the epileptic seizure. Our model-based approach reveals that, pre-seizure, the dependence is notably stable for all channels when conditioning on extreme values of the focal seizure area. Post-seizure, by contrast, the dependence between channels is weaker, and dependence patterns are more "chaotic". Moreover, in terms of spectral decomposition, we find that high values of the high-frequency Gamma-band are the most relevant features to explain the conditional extremal dependence of brain connectivity. |
2311.04307 | Maciej Nowak | Ewa Gudowska-Nowak and Maciej A. Nowak | Freeness in cognitive science | 11 pages, 2 figures. Mini-review dedicated to the Jubilee of
Professor Tadeusz Marek | Chapter in the monograph ISBN 978-83-233-5179-5 (2022) | null | null | q-bio.NC math-ph math.MP math.PR nlin.AO | http://creativecommons.org/licenses/by/4.0/ | In this mini-review, dedicated to the Jubilee of Professor Tadeusz Marek, we
highlight in a popular way the power of so-called free random variables
(hereafter FRV) calculus, viewed as a potential probability calculus for the
XXI century, in applications to the broad area of cognitive sciences. We
provide three examples: (i) inference of noisy signals from multivariate
correlation data from the brain; (ii) distinguished role of non-normality in
real neuronal models; (iii) applications to the field of deep learning in
artificial neural networks.
| [
{
"created": "Tue, 7 Nov 2023 19:24:39 GMT",
"version": "v1"
}
] | 2023-11-09 | [
[
"Gudowska-Nowak",
"Ewa",
""
],
[
"Nowak",
"Maciej A.",
""
]
] | In this mini-review, dedicated to the Jubilee of Professor Tadeusz Marek, we highlight in a popular way the power of so-called free random variables (hereafter FRV) calculus, viewed as a potential probability calculus for the XXI century, in applications to the broad area of cognitive sciences. We provide three examples: (i) inference of noisy signals from multivariate correlation data from the brain; (ii) distinguished role of non-normality in real neuronal models; (iii) applications to the field of deep learning in artificial neural networks. |
1504.07700 | Thorsten Pr\"ustel | Thorsten Pr\"ustel and Martin Meier-Schellersheim | Interplay of receptor memory and ligand rebinding | 13 pages | null | null | null | q-bio.QM q-bio.SC | http://creativecommons.org/licenses/publicdomain/ | Rapid rebinding of molecular interaction partners that are in close proximity
after dissociation leads to a dissociation and association kinetics that can
profoundly differ from predictions based on bulk reaction models. The cause of
this effect can be traced back to the non-Markovian character of the ligand's
rebinding time probability density function, reflecting the fact that, for a
certain time span, the ligand still 'remembers' the receptor it was bound to
previously. In this manuscript, we explore the consequences of the hypothesis
that initial binding and consecutive rebinding give rise to a bond lifetime
density that is non-Markovian as well. We study the combined effect of the two
non-Markovian waiting time probability densities and show that even for very
short times the decay of the fraction of occupied receptors deviates from an
exponential. For long times, dissociation is slower than an exponential and the
fate of the steady-state bound receptor fraction critically depends on the
extent of the deviation, relative to the rebinding time density, from the
Markovian limit: The population of occupied receptors may either decay
completely or assume a non-vanishing value for small and strong deviations,
respectively. Furthermore, we point out the important role played by fractional
calculus and demonstrate that the short- and long-time dynamics of the occupied
receptors can be naturally expressed as well as easily obtained in terms of
fractional differential equations involving the Riemann-Liouville derivative.
Our analysis shows that cells may exploit receptor memory as a mechanism to
dynamically widen the range of potential response-patterns to a given signal.
| [
{
"created": "Wed, 29 Apr 2015 01:59:13 GMT",
"version": "v1"
}
] | 2015-04-30 | [
[
"Prüstel",
"Thorsten",
""
],
[
"Meier-Schellersheim",
"Martin",
""
]
] | Rapid rebinding of molecular interaction partners that are in close proximity after dissociation leads to a dissociation and association kinetics that can profoundly differ from predictions based on bulk reaction models. The cause of this effect can be traced back to the non-Markovian character of the ligand's rebinding time probability density function, reflecting the fact that, for a certain time span, the ligand still 'remembers' the receptor it was bound to previously. In this manuscript, we explore the consequences of the hypothesis that initial binding and consecutive rebinding give rise to a bond lifetime density that is non-Markovian as well. We study the combined effect of the two non-Markovian waiting time probability densities and show that even for very short times the decay of the fraction of occupied receptors deviates from an exponential. For long times, dissociation is slower than an exponential and the fate of the steady-state bound receptor fraction critically depends on the extent of the deviation, relative to the rebinding time density, from the Markovian limit: The population of occupied receptors may either decay completely or assume a non-vanishing value for small and strong deviations, respectively. Furthermore, we point out the important role played by fractional calculus and demonstrate that the short- and long-time dynamics of the occupied receptors can be naturally expressed as well as easily obtained in terms of fractional differential equations involving the Riemann-Liouville derivative. Our analysis shows that cells may exploit receptor memory as a mechanism to dynamically widen the range of potential response-patterns to a given signal. |
2202.05497 | Gersende Fort | Patrice Abry (Phys-ENS), Gersende Fort (IMT), Barbara Pascal
(CRIStAL), Nelly Pustelnik (Phys-ENS) | Temporal evolution of the Covid19 pandemic reproduction number:
Estimations from proximal optimization to Monte Carlo sampling | null | null | null | null | q-bio.PE physics.soc-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monitoring the evolution of the Covid19 pandemic constitutes a critical step
in sanitary policy design. Yet, the assessment of the pandemic intensity within
the pandemic period remains a challenging task because of the limited quality
of data made available by public health authorities (missing data, outliers and
pseudoseasonalities, notably), that calls for cumbersome and ad-hoc
preprocessing (denoising) prior to estimation. Recently, the estimation of the
reproduction number, a measure of the pandemic intensity, was formulated as an
inverse problem, combining data-model fidelity and space-time regularity
constraints, solved by nonsmooth convex proximal minimizations. Though
promising, that formulation lacks robustness against the limited quality of the
Covid19 data and confidence assessment. The present work aims to address both
limitations: First, it discusses solutions to produce a robust assessment of
the pandemic intensity by accounting for the low quality of the data directly
within the inverse problem formulation. Second, exploiting a Bayesian
interpretation of the inverse problem formulation, it devises a Monte Carlo
sampling strategy, tailored to a nonsmooth log-concave a posteriori
distribution, to produce relevant credibility intervalbased estimates for the
Covid19 reproduction number. Clinical relevance Applied to daily counts of new
infections made publicly available by the Health Authorities for around 200
countries, the proposed procedures permit robust assessments of the time
evolution of the Covid19 pandemic intensity, updated automatically and on a
daily basis.
| [
{
"created": "Fri, 11 Feb 2022 08:15:42 GMT",
"version": "v1"
}
] | 2022-02-14 | [
[
"Abry",
"Patrice",
"",
"Phys-ENS"
],
[
"Fort",
"Gersende",
"",
"IMT"
],
[
"Pascal",
"Barbara",
"",
"CRIStAL"
],
[
"Pustelnik",
"Nelly",
"",
"Phys-ENS"
]
] | Monitoring the evolution of the Covid19 pandemic constitutes a critical step in sanitary policy design. Yet, the assessment of the pandemic intensity within the pandemic period remains a challenging task because of the limited quality of data made available by public health authorities (missing data, outliers and pseudoseasonalities, notably), that calls for cumbersome and ad-hoc preprocessing (denoising) prior to estimation. Recently, the estimation of the reproduction number, a measure of the pandemic intensity, was formulated as an inverse problem, combining data-model fidelity and space-time regularity constraints, solved by nonsmooth convex proximal minimizations. Though promising, that formulation lacks robustness against the limited quality of the Covid19 data and confidence assessment. The present work aims to address both limitations: First, it discusses solutions to produce a robust assessment of the pandemic intensity by accounting for the low quality of the data directly within the inverse problem formulation. Second, exploiting a Bayesian interpretation of the inverse problem formulation, it devises a Monte Carlo sampling strategy, tailored to a nonsmooth log-concave a posteriori distribution, to produce relevant credibility interval-based estimates for the Covid19 reproduction number. Clinical relevance: Applied to daily counts of new infections made publicly available by the Health Authorities for around 200 countries, the proposed procedures permit robust assessments of the time evolution of the Covid19 pandemic intensity, updated automatically and on a daily basis. |
1405.5513 | Seyed Aidin Sajedi | Fahimeh Abdollahi and Seyed Aidin Sajedi | Correlation of multiple sclerosis (MS) incidence trends with solar and
geomagnetic indices: time to revise the method of reporting MS
epidemiological data | Single PDF, 8 pages, 3 figures | Iranian Journal of Neurology 2014; 13(2):64-69 | null | null | q-bio.TO q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Recently, we introduced solar related geomagnetic disturbances
(GMD) as a potential environmental risk factor for multiple sclerosis (MS). The
aim of this study was to test probable correlation between solar activities and
GMD with long-term variations of MS incidence.
Methods: After a systematic review, we studied the association between
alterations in solar wind velocity (Vsw) and planetary A index (Ap, a GMD
index) with MS incidence in Tehran and western Greece, during the 23rd solar
cycle (1996-2008), by an ecological-correlational study.
Results: We found moderate to strong correlations among MS incidence of
Tehran with Vsw (Rs=0.665, p=0.013), with one year delay, and also with Ap
(Rs=0.864, p=0.001) with 2 year delay. There were very strong correlations
among MS incidence data of Greece with Vsw (R=0.906, p<0.001) and with Ap
(R=0.844, p=0.001), both with one year lag.
Conclusion: It is the first time that a hypothesis has introduced an
environmental factor that may describe MS incidence alterations; however, it
should be remembered that correlation does not necessarily mean the existence of
a causal relationship. The important message of these findings for researchers is
to provide MS incidence reports with higher resolution for consecutive years,
based on the time of disease onset and relapses, not just the time of
diagnosis. Then, it would be possible to further investigate the validity of
GMD hypothesis or any other probable environmental risk factors.
Keywords: Correlation analysis, Multiple sclerosis, Incidence, Geomagnetic
disturbance, Geomagnetic activity, Solar wind velocity, Environmental risk
factor.
| [
{
"created": "Sun, 18 May 2014 21:28:05 GMT",
"version": "v1"
}
] | 2014-05-22 | [
[
"Abdollahi",
"Fahimeh",
""
],
[
"Sajedi",
"Seyed Aidin",
""
]
] | Background: Recently, we introduced solar related geomagnetic disturbances (GMD) as a potential environmental risk factor for multiple sclerosis (MS). The aim of this study was to test probable correlation between solar activities and GMD with long-term variations of MS incidence. Methods: After a systematic review, we studied the association between alterations in solar wind velocity (Vsw) and planetary A index (Ap, a GMD index) with MS incidence in Tehran and western Greece, during the 23rd solar cycle (1996-2008), by an ecological-correlational study. Results: We found moderate to strong correlations among MS incidence of Tehran with Vsw (Rs=0.665, p=0.013), with one year delay, and also with Ap (Rs=0.864, p=0.001) with 2 year delay. There were very strong correlations among MS incidence data of Greece with Vsw (R=0.906, p<0.001) and with Ap (R=0.844, p=0.001), both with one year lag. Conclusion: It is the first time that a hypothesis has introduced an environmental factor that may describe MS incidence alterations; however, it should be remembered that correlation does not necessarily mean the existence of a causal relationship. The important message of these findings for researchers is to provide MS incidence reports with higher resolution for consecutive years, based on the time of disease onset and relapses, not just the time of diagnosis. Then, it would be possible to further investigate the validity of GMD hypothesis or any other probable environmental risk factors. Keywords: Correlation analysis, Multiple sclerosis, Incidence, Geomagnetic disturbance, Geomagnetic activity, Solar wind velocity, Environmental risk factor. |
1806.09334 | Hamed Heidari-Gorji | Hamed Heidari Gorji (1 and 2), Sajjad Zabbah (2), Reza Ebrahimpour (1
and 2) ((1) Faculty of Computer Engineering, Shahid Rajaee Teacher Training
University, Tehran, Iran (2) School of Cognitive Sciences, Institute for
Research in Fundamental Sciences, Tehran, Iran) | A temporal neural network model for object recognition using a
biologically plausible decision making layer | Version 2 contains more details about model. Comparisons with some
known deep neural networks have been included and are shown in figure 7. Text
was corrected and edited | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain can recognize different objects as ones that it has experienced before.
The recognition accuracy and its processing time depend on task properties such
as viewing condition, level of noise, etc. Recognition accuracy can be well
explained by different models. However, less attention has been paid to the
processing time, and the models that do address it are not biologically plausible. By
extracting features temporally as well as utilizing an accumulation to bound
decision making model, an object recognition model accounting for both
recognition time and accuracy is proposed. To temporally extract informative
features in support of possible classes of stimuli, a hierarchical spiking
neural network, called spiking HMAX is modified. In the decision making part of
the model the extracted information accumulates over time using accumulator
units. The input category is determined as soon as any of the accumulators
reaches a threshold, called decision bound. Results show that not only does the
model follow human accuracy in a psychophysics task better than the classic
spiking HMAX model, but also it predicts human response time in each choice.
Results provide enough evidence that temporal representation of features are
informative since they can improve the accuracy of a biologically plausible
decision maker over time. This is also in line with the well-known idea of
speed accuracy trade-off in decision making studies.
| [
{
"created": "Mon, 25 Jun 2018 09:04:28 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Nov 2018 18:24:50 GMT",
"version": "v2"
}
] | 2018-11-27 | [
[
"Gorji",
"Hamed Heidari",
"",
"1 and 2"
],
[
"Zabbah",
"Sajjad",
"",
"1\n and 2"
],
[
"Ebrahimpour",
"Reza",
"",
"1\n and 2"
]
] | The brain can recognize different objects as ones that it has experienced before. The recognition accuracy and its processing time depend on task properties such as viewing condition, level of noise, etc. Recognition accuracy can be well explained by different models. However, less attention has been paid to the processing time, and the models that do address it are not biologically plausible. By extracting features temporally as well as utilizing an accumulation to bound decision making model, an object recognition model accounting for both recognition time and accuracy is proposed. To temporally extract informative features in support of possible classes of stimuli, a hierarchical spiking neural network, called spiking HMAX is modified. In the decision making part of the model the extracted information accumulates over time using accumulator units. The input category is determined as soon as any of the accumulators reaches a threshold, called decision bound. Results show that not only does the model follow human accuracy in a psychophysics task better than the classic spiking HMAX model, but also it predicts human response time in each choice. Results provide enough evidence that temporal representation of features are informative since they can improve the accuracy of a biologically plausible decision maker over time. This is also in line with the well-known idea of speed accuracy trade-off in decision making studies. |
1807.07268 | Emi Tanaka | Emi Tanaka | Simple robust genomic prediction and outlier detection for a
multi-environmental field trial | null | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of plant breeding trials is often to identify germplasms that are
well adapted to target environments. These germplasms are identified through
genomic prediction from the analysis of multi-environmental field trial (MET)
using linear mixed models. The occurrence of outliers in MET is common and
known to adversely impact accuracy of genomic prediction yet the detection of
outliers, and subsequently its treatment, are often neglected. A number of
reasons stand for this - complex data such as MET give rise to distinct levels
of residuals and thus offer additional challenges to an outlier detection
method and many linear mixed model software are ill-equipped for robust
prediction. We present outlier detection methods using a holistic approach that
borrows the strength across trials. We furthermore evaluate a simple robust
genomic prediction that is applicable to any linear mixed model software. These
are demonstrated using simulation based on two real bread wheat yield METs with
a partially replicated design and an alpha lattice design.
| [
{
"created": "Thu, 19 Jul 2018 07:36:21 GMT",
"version": "v1"
}
] | 2018-07-20 | [
[
"Tanaka",
"Emi",
""
]
] | The aim of plant breeding trials is often to identify germplasms that are well adapted to target environments. These germplasms are identified through genomic prediction from the analysis of multi-environmental field trial (MET) using linear mixed models. The occurrence of outliers in MET is common and known to adversely impact accuracy of genomic prediction yet the detection of outliers, and subsequently its treatment, are often neglected. A number of reasons stand for this - complex data such as MET give rise to distinct levels of residuals and thus offer additional challenges to an outlier detection method and many linear mixed model software are ill-equipped for robust prediction. We present outlier detection methods using a holistic approach that borrows the strength across trials. We furthermore evaluate a simple robust genomic prediction that is applicable to any linear mixed model software. These are demonstrated using simulation based on two real bread wheat yield METs with a partially replicated design and an alpha lattice design. |
2112.05240 | Aydogan Ozcan | Bijie Bai, Hongda Wang, Yuzhu Li, Kevin de Haan, Francesco Colonnese,
Yujie Wan, Jingyi Zuo, Ngan B. Doan, Xiaoran Zhang, Yijie Zhang, Jingxi Li,
Wenjie Dong, Morgan Angus Darrow, Elham Kamangar, Han Sung Lee, Yair
Rivenson, Aydogan Ozcan | Label-free virtual HER2 immunohistochemical staining of breast tissue
using deep learning | 26 Pages, 5 Figures | BME Frontiers (2022) | 10.34133/2022/9786242 | null | q-bio.QM cs.LG eess.IV physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The immunohistochemical (IHC) staining of the human epidermal growth factor
receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis,
preclinical studies and diagnostic decisions, guiding cancer treatment and
investigation of pathogenesis. HER2 staining demands laborious tissue treatment
and chemical processing performed by a histotechnologist, which typically takes
one day to prepare in a laboratory, increasing analysis time and associated
costs. Here, we describe a deep learning-based virtual HER2 IHC staining method
using a conditional generative adversarial network that is trained to rapidly
transform autofluorescence microscopic images of unlabeled/label-free breast
tissue sections into bright-field equivalent microscopic images, matching the
standard HER2 IHC staining that is chemically performed on the same tissue
sections. The efficacy of this virtual HER2 staining framework was demonstrated
by quantitative analysis, in which three board-certified breast pathologists
blindly graded the HER2 scores of virtually stained and immunohistochemically
stained HER2 whole slide images (WSIs) to reveal that the HER2 scores
determined by inspecting virtual IHC images are as accurate as their
immunohistochemically stained counterparts. A second quantitative blinded study
performed by the same diagnosticians further revealed that the virtually
stained HER2 images exhibit a comparable staining quality in the level of
nuclear detail, membrane clearness, and absence of staining artifacts with
respect to their immunohistochemically stained counterparts. This virtual HER2
staining framework bypasses the costly, laborious, and time-consuming IHC
staining procedures in laboratory, and can be extended to other types of
biomarkers to accelerate the IHC tissue staining used in life sciences and
biomedical workflow.
| [
{
"created": "Wed, 8 Dec 2021 08:56:15 GMT",
"version": "v1"
}
] | 2022-09-02 | [
[
"Bai",
"Bijie",
""
],
[
"Wang",
"Hongda",
""
],
[
"Li",
"Yuzhu",
""
],
[
"de Haan",
"Kevin",
""
],
[
"Colonnese",
"Francesco",
""
],
[
"Wan",
"Yujie",
""
],
[
"Zuo",
"Jingyi",
""
],
[
"Doan",
"Ngan B.",
""
],
[
"Zhang",
"Xiaoran",
""
],
[
"Zhang",
"Yijie",
""
],
[
"Li",
"Jingxi",
""
],
[
"Dong",
"Wenjie",
""
],
[
"Darrow",
"Morgan Angus",
""
],
[
"Kamangar",
"Elham",
""
],
[
"Lee",
"Han Sung",
""
],
[
"Rivenson",
"Yair",
""
],
[
"Ozcan",
"Aydogan",
""
]
] | The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs) to reveal that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit a comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in laboratory, and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflow. |
q-bio/0510022 | Ayse Erzan | Yasemin Sengun and Ayse Erzan | Content based network model with duplication and divergence | 12 pages, 6 figures | null | 10.1016/j.physa.2006.02.045 | null | q-bio.MN | null | We construct a minimal content-based realization of the duplication and
divergence model of genomic networks introduced by Wagner [A. Wagner, Proc.
Natl. Acad. Sci. {\bf 91}, 4387 (1994)] and investigate the scaling properties
of the directed degree distribution and clustering coefficient. We find that
the content based network exhibits crossover between two scaling regimes, with
log-periodic oscillations for large degrees. These features are not present in
the original gene duplication model, but inherent in the content based model of
Balcan and Erzan. The scaling exponents $\gamma_1$ and $\gamma_2=\gamma_1-1/2$
of the Balcan-Erzan model turn out to be robust under duplication and point
mutations, but get modified in the presence of splitting and merging of
strings. The clustering coefficient as a function of the degree, $C(d)$, is
found, for the Balcan-Erzan model, to behave in a way qualitatively similar to
the out-degree distribution, however with a very small exponent $\alpha_1=
1-\gamma_1$ and an envelope for the oscillatory part, which is essentially
flat, thus $\alpha_2= 0$. Under duplication and mutations including splitting
and merging of strings, $C(d)$ is found to decay exponentially.
| [
{
"created": "Tue, 11 Oct 2005 20:57:52 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Sengun",
"Yasemin",
""
],
[
"Erzan",
"Ayse",
""
]
] | We construct a minimal content-based realization of the duplication and divergence model of genomic networks introduced by Wagner [A. Wagner, Proc. Natl. Acad. Sci. {\bf 91}, 4387 (1994)] and investigate the scaling properties of the directed degree distribution and clustering coefficient. We find that the content based network exhibits crossover between two scaling regimes, with log-periodic oscillations for large degrees. These features are not present in the original gene duplication model, but inherent in the content based model of Balcan and Erzan. The scaling exponents $\gamma_1$ and $\gamma_2=\gamma_1-1/2$ of the Balcan-Erzan model turn out to be robust under duplication and point mutations, but get modified in the presence of splitting and merging of strings. The clustering coefficient as a function of the degree, $C(d)$, is found, for the Balcan-Erzan model, to behave in a way qualitatively similar to the out-degree distribution, however with a very small exponent $\alpha_1= 1-\gamma_1$ and an envelope for the oscillatory part, which is essentially flat, thus $\alpha_2= 0$. Under duplication and mutations including splitting and merging of strings, $C(d)$ is found to decay exponentially. |
2001.05057 | Michael Kordovan | Michael Kordovan, Stefan Rotter | Spike Train Cumulants for Linear-Nonlinear Poisson Cascade Models | 45 pages, 8 figures | null | null | null | q-bio.NC math.ST physics.bio-ph stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking activity in cortical networks is nonlinear in nature. The
linear-nonlinear cascade model, some versions of which are also known as
point-process generalized linear model, can efficiently capture the nonlinear
dynamics exhibited by such networks. Of particular interest in such models are
theoretical predictions of spike train statistics. However, due to the
moment-closure problem, approximations are inevitable. We suggest here a series
expansion that explains how higher-order moments couple to lower-order ones.
Our approach makes predictions in terms of certain integrals, the so-called
loop integrals. In previous studies these integrals have been evaluated
numerically, but numerical instabilities are sometimes encountered rendering
the results unreliable. Analytic solutions are presented here to overcome this
problem, and to arrive at more robust evaluations. We were able to deduce these
analytic solutions by switching to Fourier space and making use of complex
analysis, specifically Cauchy's residue theorem. We formalized the loop
integrals and explicitly solved them for specific response functions. To
quantify the importance of these corrections for spike train cumulants, we
numerically simulated spiking networks and compared their sample statistics to
our theoretical predictions. Our results demonstrate that the magnitude of the
nonlinear corrections depends on the working point of the nonlinear network
dynamics, and that it is related to the eigenvalues of the mean-field stability
matrix. For our example, the corrections for the firing rates are in the range
between 4 % and 21 % on average. Precise and robust predictions of spike train
statistics accounting for nonlinear effects are, for example, highly relevant
for theories involving spike-timing dependent plasticity (STDP).
| [
{
"created": "Tue, 14 Jan 2020 21:56:13 GMT",
"version": "v1"
}
] | 2020-01-16 | [
[
"Kordovan",
"Michael",
""
],
[
"Rotter",
"Stefan",
""
]
] | Spiking activity in cortical networks is nonlinear in nature. The linear-nonlinear cascade model, some versions of which are also known as point-process generalized linear model, can efficiently capture the nonlinear dynamics exhibited by such networks. Of particular interest in such models are theoretical predictions of spike train statistics. However, due to the moment-closure problem, approximations are inevitable. We suggest here a series expansion that explains how higher-order moments couple to lower-order ones. Our approach makes predictions in terms of certain integrals, the so-called loop integrals. In previous studies these integrals have been evaluated numerically, but numerical instabilities are sometimes encountered rendering the results unreliable. Analytic solutions are presented here to overcome this problem, and to arrive at more robust evaluations. We were able to deduce these analytic solutions by switching to Fourier space and making use of complex analysis, specifically Cauchy's residue theorem. We formalized the loop integrals and explicitly solved them for specific response functions. To quantify the importance of these corrections for spike train cumulants, we numerically simulated spiking networks and compared their sample statistics to our theoretical predictions. Our results demonstrate that the magnitude of the nonlinear corrections depends on the working point of the nonlinear network dynamics, and that it is related to the eigenvalues of the mean-field stability matrix. For our example, the corrections for the firing rates are in the range between 4 % and 21 % on average. Precise and robust predictions of spike train statistics accounting for nonlinear effects are, for example, highly relevant for theories involving spike-timing dependent plasticity (STDP). |
1708.05475 | Huagang He | Huagang He, Shanying Zhu, Yaoyong Ji, Zhengning Jiang, Renhui Zhao,
Tongde Bie | Map-based cloning of the gene Pm21 that confers broad spectrum
resistance to wheat powdery mildew | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Common wheat (Triticum aestivum L.) is one of the most important cereal
crops. Wheat powdery mildew caused by Blumeria graminis f. sp. tritici (Bgt) is
a continuing threat to wheat production. The Pm21 gene, originating from
Dasypyrum villosum, confers high resistance to all known Bgt races and has been
widely applied in wheat breeding in China. In this research, we identify Pm21
as a typical coiled-coil, nucleotide-binding site, leucine-rich repeat gene by
an integrated strategy of resistance gene analog (RGA)-based cloning via
comparative genomics, physical and genetic mapping, BSMV-induced gene silencing
(BSMV-VIGS), large-scale mutagenesis and genetic transformation.
| [
{
"created": "Fri, 18 Aug 2017 00:46:44 GMT",
"version": "v1"
}
] | 2017-08-21 | [
[
"He",
"Huagang",
""
],
[
"Zhu",
"Shanying",
""
],
[
"Ji",
"Yaoyong",
""
],
[
"Jiang",
"Zhengning",
""
],
[
"Zhao",
"Renhui",
""
],
[
"Bie",
"Tongde",
""
]
] | Common wheat (Triticum aestivum L.) is one of the most important cereal crops. Wheat powdery mildew caused by Blumeria graminis f. sp. tritici (Bgt) is a continuing threat to wheat production. The Pm21 gene, originating from Dasypyrum villosum, confers high resistance to all known Bgt races and has been widely applied in wheat breeding in China. In this research, we identify Pm21 as a typical coiled-coil, nucleotide-binding site, leucine-rich repeat gene by an integrated strategy of resistance gene analog (RGA)-based cloning via comparative genomics, physical and genetic mapping, BSMV-induced gene silencing (BSMV-VIGS), large-scale mutagenesis and genetic transformation. |
1808.00762 | Don Krieger | Don Krieger, Paul Shepard, Walter Schneider, Sue Beers, Anthony
Kontos, Michael Collins, David O. Okonkwo | MEG-Derived Functional Tractography, Results for Normal and Concussed
Cohorts | 3 pages, 2 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measures of neuroelectric activity from each of 18 automatically identified
white matter tracts were extracted from resting MEG recordings from a
normative, n=588, and a chronic TBI, traumatic brain injury, n=63, cohort, 60
of whose TBIs were mild. Activity in the TBI cohort was significantly reduced
compared with the norms for ten of the tracts, p < 10-6 for each. Significantly
reduced activity (p < 10-3) was seen in more than one tract in seven mTBI
individuals and one member of the normative cohort.
| [
{
"created": "Thu, 2 Aug 2018 11:40:50 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Aug 2018 13:43:53 GMT",
"version": "v2"
}
] | 2018-08-31 | [
[
"Krieger",
"Don",
""
],
[
"Shepard",
"Paul",
""
],
[
"Schneider",
"Walter",
""
],
[
"Beers",
"Sue",
""
],
[
"Kontos",
"Anthony",
""
],
[
"Collins",
"Michael",
""
],
[
"Okonkwo",
"David O.",
""
]
] | Measures of neuroelectric activity from each of 18 automatically identified white matter tracts were extracted from resting MEG recordings from a normative, n=588, and a chronic TBI, traumatic brain injury, n=63, cohort, 60 of whose TBIs were mild. Activity in the TBI cohort was significantly reduced compared with the norms for ten of the tracts, p < 10-6 for each. Significantly reduced activity (p < 10-3) was seen in more than one tract in seven mTBI individuals and one member of the normative cohort. |
2210.00006 | Kiarash Jamali | Kiarash Jamali, Dari Kimanius and Sjors H.W. Scheres | A Graph Neural Network Approach to Automated Model Building in Cryo-EM
Maps | The Eleventh International Conference on Learning Representations | null | null | null | q-bio.QM cs.AI cs.LG q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Electron cryo-microscopy (cryo-EM) produces three-dimensional (3D) maps of
the electrostatic potential of biological macromolecules, including proteins.
Along with knowledge about the imaged molecules, cryo-EM maps allow de novo
atomic modelling, which is typically done through a laborious manual process.
Taking inspiration from recent advances in machine learning applications to
protein structure prediction, we propose a graph neural network (GNN) approach
for automated model building of proteins in cryo-EM maps. The GNN acts on a
graph with nodes assigned to individual amino acids and edges representing the
protein chain. Combining information from the voxel-based cryo-EM data, the
amino acid sequence data and prior knowledge about protein geometries, the GNN
refines the geometry of the protein chain and classifies the amino acids for
each of its nodes. Application to 28 test cases shows that our approach
outperforms the state-of-the-art and approximates manual building for cryo-EM
maps with resolutions better than 3.5 \r{A}.
| [
{
"created": "Fri, 30 Sep 2022 16:47:45 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Nov 2022 17:02:20 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Feb 2023 12:29:01 GMT",
"version": "v3"
}
] | 2023-02-09 | [
[
"Jamali",
"Kiarash",
""
],
[
"Kimanius",
"Dari",
""
],
[
"Scheres",
"Sjors H. W.",
""
]
] | Electron cryo-microscopy (cryo-EM) produces three-dimensional (3D) maps of the electrostatic potential of biological macromolecules, including proteins. Along with knowledge about the imaged molecules, cryo-EM maps allow de novo atomic modelling, which is typically done through a laborious manual process. Taking inspiration from recent advances in machine learning applications to protein structure prediction, we propose a graph neural network (GNN) approach for automated model building of proteins in cryo-EM maps. The GNN acts on a graph with nodes assigned to individual amino acids and edges representing the protein chain. Combining information from the voxel-based cryo-EM data, the amino acid sequence data and prior knowledge about protein geometries, the GNN refines the geometry of the protein chain and classifies the amino acids for each of its nodes. Application to 28 test cases shows that our approach outperforms the state-of-the-art and approximates manual building for cryo-EM maps with resolutions better than 3.5 \r{A}. |
0908.0484 | Tibor Antal | Tibor Antal and P. L. Krapivsky | Exact solution of a two-type branching process: Clone size distribution
in cell division kinetics | 16 pages | Journal of Statistical Mechanics P07028 (2010) | 10.1088/1742-5468/2010/07/P07028 | null | q-bio.PE cond-mat.stat-mech math.PR q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a two-type branching process which provides excellent description of
experimental data on cell dynamics in skin tissue (Clayton et al., 2007). The
model involves only a single type of progenitor cell, and does not require
support from a self-renewed population of stem cells. The progenitor cells
divide and may differentiate into post-mitotic cells. We derive an exact
solution of this model in terms of generating functions for the total number of
cells, and for the number of cells of different types. We also deduce large
time asymptotic behaviors drawing on our exact results, and on an independent
diffusion approximation.
| [
{
"created": "Tue, 4 Aug 2009 16:19:34 GMT",
"version": "v1"
}
] | 2010-11-18 | [
[
"Antal",
"Tibor",
""
],
[
"Krapivsky",
"P. L.",
""
]
] | We study a two-type branching process which provides excellent description of experimental data on cell dynamics in skin tissue (Clayton et al., 2007). The model involves only a single type of progenitor cell, and does not require support from a self-renewed population of stem cells. The progenitor cells divide and may differentiate into post-mitotic cells. We derive an exact solution of this model in terms of generating functions for the total number of cells, and for the number of cells of different types. We also deduce large time asymptotic behaviors drawing on our exact results, and on an independent diffusion approximation. |
2309.07305 | Marc Harary | Marc Harary | SHIELD: Secure Haplotype Imputation Employing Local Differential Privacy | null | null | null | null | q-bio.QM cs.CR | http://creativecommons.org/licenses/by/4.0/ | We introduce Secure Haplotype Imputation Employing Local Differential privacy
(SHIELD), a program for accurately estimating the genotype of target samples at
markers that are not directly assayed by array-based genotyping platforms while
preserving the privacy of donors to public reference panels. At the core of
SHIELD is the Li-Stephens model of genetic recombination, according to which
genomic information is comprised of mosaics of ancestral haplotype fragments
that coalesce via a Markov random field. We use the standard forward-backward
algorithm for inferring the ancestral haplotypes of target genomes, and hence
the most likely genotype at unobserved sites, using a reference panel of
template haplotypes whose privacy is guaranteed by the randomized response
technique from differential privacy.
| [
{
"created": "Wed, 13 Sep 2023 20:51:11 GMT",
"version": "v1"
}
] | 2023-09-15 | [
[
"Harary",
"Marc",
""
]
] | We introduce Secure Haplotype Imputation Employing Local Differential privacy (SHIELD), a program for accurately estimating the genotype of target samples at markers that are not directly assayed by array-based genotyping platforms while preserving the privacy of donors to public reference panels. At the core of SHIELD is the Li-Stephens model of genetic recombination, according to which genomic information is comprised of mosaics of ancestral haplotype fragments that coalesce via a Markov random field. We use the standard forward-backward algorithm for inferring the ancestral haplotypes of target genomes, and hence the most likely genotype at unobserved sites, using a reference panel of template haplotypes whose privacy is guaranteed by the randomized response technique from differential privacy. |