id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1110.4944 | Peter Ralph | Carl Boettiger, Graham Coop, and Peter Ralph | Is your phylogeny informative? Measuring the power of comparative
methods | 19 pages, 6 figures, 2 tables | Evolution (2012) | 10.1111/j.1558-5646.2012.01574.x | null | q-bio.QM math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic comparative methods may fail to produce meaningful results when
either the underlying model is inappropriate or the data contain insufficient
information to inform the inference. The ability to measure the statistical
power of these methods has become crucial to ensure that data quantity keeps
pace with growing model complexity. Through simulations, we show that commonly
applied model choice methods based on information criteria can have remarkably
high error rates; this can be a problem because methods to estimate the
uncertainty or power are not widely known or applied. Furthermore, the power of
comparative methods can depend significantly on the structure of the data. We
describe a Monte Carlo based method which addresses both of these challenges,
and show how this approach both quantifies and substantially reduces errors
relative to information criteria. The method also produces meaningful
confidence intervals for model parameters. We illustrate how the power to
distinguish different models, such as varying levels of selection, varies both
with number of taxa and structure of the phylogeny. We provide an open-source
implementation in the pmc ("Phylogenetic Monte Carlo") package for the R
programming language. We hope such power analysis becomes a routine part of
model comparison in comparative methods.
| [
{
"created": "Sat, 22 Oct 2011 04:19:27 GMT",
"version": "v1"
}
] | 2012-07-26 | [
[
"Boettiger",
"Carl",
""
],
[
"Coop",
"Graham",
""
],
[
"Ralph",
"Peter",
""
]
] | Phylogenetic comparative methods may fail to produce meaningful results when either the underlying model is inappropriate or the data contain insufficient information to inform the inference. The ability to measure the statistical power of these methods has become crucial to ensure that data quantity keeps pace with growing model complexity. Through simulations, we show that commonly applied model choice methods based on information criteria can have remarkably high error rates; this can be a problem because methods to estimate the uncertainty or power are not widely known or applied. Furthermore, the power of comparative methods can depend significantly on the structure of the data. We describe a Monte Carlo based method which addresses both of these challenges, and show how this approach both quantifies and substantially reduces errors relative to information criteria. The method also produces meaningful confidence intervals for model parameters. We illustrate how the power to distinguish different models, such as varying levels of selection, varies both with number of taxa and structure of the phylogeny. We provide an open-source implementation in the pmc ("Phylogenetic Monte Carlo") package for the R programming language. We hope such power analysis becomes a routine part of model comparison in comparative methods. |
2010.16167 | Geraldine Finlayson Dr | Geraldine Finlayson (1, 2 and 3), Stewart Finlayson (1 and 4), Clive
Finlayson (1, 2, 3 and 5), Keith Bensusan (6 and 3), Rhian Guillem (6 and 3),
Tyson L. Holmes (1 and 3), Francisco Giles-Guzman (1), Jos\'e S. Carri\'on
(7), Crist\'obal Belda (8), Lawrence Sawchuk (5) ((1) The Gibraltar National
Museum, Gibraltar, (2) Department of Life Sciences, Liverpool John Moores
University, United Kingdom, (3) Institute of Life and Earth Sciences, The
University of Gibraltar, Gibraltar, (4) Department of Life Sciences, Anglia
Ruskin University, Cambridge, United Kingdom, (5) Department of Social
Sciences, University of Toronto Scarborough, Canada, (6) Gibraltar Botanic
Gardens, Gibraltar, (7) Departamento de Biologia, Universidad de Murcia,
Spain, (8) Instituto de Salud Carlos III, Ministerio de Ciencia e
Innovaci\'on, Spain.) | Nocturnality, seasonality and the SARS-CoV-2 Ecological Niche | 30 pages plus an Appendix (total number of pages 115); Corresponding
Author: G Finlayson | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the behaviour of hosts of SARS-CoV-2 is crucial to our
understanding of the virus. A comparison of environmental features related to
the incidence of SARS-CoV-2 with those of its potential hosts is critical. We
examine the distribution of coronaviruses among bats. We analyse the
distribution of SARS-CoV-2 in a nine-week period following lockdown in Italy,
Spain, and Australia. We correlate its incidence with environmental variables
particularly ultraviolet radiation, temperature, and humidity. We establish a
clear negative relationship between COVID-19 and ultraviolet radiation,
modulated by temperature and humidity. We relate our results with data showing
that the bat species most vulnerable to coronavirus infection are those which
live in environmental conditions that are similar to those that appear to be
most favourable to the spread of COVID-19. The SARS-CoV-2 ecological niche has
been the product of long-term coevolution of coronaviruses with their host
species. Understanding the key parameters of that niche in host species allows
us to predict circumstances where its spread will be most favourable. Such
conditions can be summarised under the headings of nocturnality and
seasonality. High ultraviolet radiation, in particular, is proposed as a key
limiting variable. We therefore expect the risk of spread of COVID-19 to be
highest in winter conditions, and in low light environments. Human activities
resembling those of highly social cave-dwelling bats (e.g. large nocturnal
gatherings or high density indoor activities) will only serve to compound the
problem of COVID-19.
| [
{
"created": "Fri, 30 Oct 2020 10:19:39 GMT",
"version": "v1"
}
] | 2020-11-02 | [
[
"Finlayson",
"Geraldine",
"",
"1, 2 and 3"
],
[
"Finlayson",
"Stewart",
"",
"1 and 4"
],
[
"Finlayson",
"Clive",
"",
"1, 2, 3 and 5"
],
[
"Bensusan",
"Keith",
"",
"6 and 3"
],
[
"Guillem",
"Rhian",
"",
"6 and 3"
],
[
"Holmes",
"Tyson L.",
"",
"1 and 3"
],
[
"Giles-Guzman",
"Francisco",
""
],
[
"Carrión",
"José S.",
""
],
[
"Belda",
"Cristóbal",
""
],
[
"Sawchuk",
"Lawrence",
""
]
] | Understanding the behaviour of hosts of SARS-CoV-2 is crucial to our understanding of the virus. A comparison of environmental features related to the incidence of SARS-CoV-2 with those of its potential hosts is critical. We examine the distribution of coronaviruses among bats. We analyse the distribution of SARS-CoV-2 in a nine-week period following lockdown in Italy, Spain, and Australia. We correlate its incidence with environmental variables particularly ultraviolet radiation, temperature, and humidity. We establish a clear negative relationship between COVID-19 and ultraviolet radiation, modulated by temperature and humidity. We relate our results with data showing that the bat species most vulnerable to coronavirus infection are those which live in environmental conditions that are similar to those that appear to be most favourable to the spread of COVID-19. The SARS-CoV-2 ecological niche has been the product of long-term coevolution of coronaviruses with their host species. Understanding the key parameters of that niche in host species allows us to predict circumstances where its spread will be most favourable. Such conditions can be summarised under the headings of nocturnality and seasonality. High ultraviolet radiation, in particular, is proposed as a key limiting variable. We therefore expect the risk of spread of COVID-19 to be highest in winter conditions, and in low light environments. Human activities resembling those of highly social cave-dwelling bats (e.g. large nocturnal gatherings or high density indoor activities) will only serve to compound the problem of COVID-19. |
2402.13714 | Hengchuang Yin Dr. | Hengchuang Yin, Zhonghui Gu, Fanhao Wang, Yiparemu Abuduhaibaier,
Yanqiao Zhu, Xinming Tu, Xian-Sheng Hua, Xiao Luo, Yizhou Sun | An Evaluation of Large Language Models in Bioinformatics Research | Under review | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) such as ChatGPT have gained considerable
interest across diverse research communities. Their notable ability for text
completion and generation has inaugurated a novel paradigm for
language-interfaced problem solving. However, the potential and efficacy of
these models in bioinformatics remain incompletely explored. In this work, we
study the performance of LLMs on a wide spectrum of crucial bioinformatics
tasks.
These tasks include the identification of potential coding regions, extraction
of named entities for genes and proteins, detection of antimicrobial and
anti-cancer peptides, molecular optimization, and resolution of educational
bioinformatics problems. Our findings indicate that, given appropriate prompts,
LLMs like GPT variants can successfully handle most of these tasks. In
addition, we provide a thorough analysis of their limitations in the context of
complicated bioinformatics tasks. In conclusion, we believe that this work can
provide new perspectives and motivate future research in the field of LLM
applications, AI for Science and bioinformatics.
| [
{
"created": "Wed, 21 Feb 2024 11:27:31 GMT",
"version": "v1"
}
] | 2024-02-22 | [
[
"Yin",
"Hengchuang",
""
],
[
"Gu",
"Zhonghui",
""
],
[
"Wang",
"Fanhao",
""
],
[
"Abuduhaibaier",
"Yiparemu",
""
],
[
"Zhu",
"Yanqiao",
""
],
[
"Tu",
"Xinming",
""
],
[
"Hua",
"Xian-Sheng",
""
],
[
"Luo",
"Xiao",
""
],
[
"Sun",
"Yizhou",
""
]
] | Large language models (LLMs) such as ChatGPT have gained considerable interest across diverse research communities. Their notable ability for text completion and generation has inaugurated a novel paradigm for language-interfaced problem solving. However, the potential and efficacy of these models in bioinformatics remain incompletely explored. In this work, we study the performance of LLMs on a wide spectrum of crucial bioinformatics tasks. These tasks include the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and resolution of educational bioinformatics problems. Our findings indicate that, given appropriate prompts, LLMs like GPT variants can successfully handle most of these tasks. In addition, we provide a thorough analysis of their limitations in the context of complicated bioinformatics tasks. In conclusion, we believe that this work can provide new perspectives and motivate future research in the field of LLM applications, AI for Science and bioinformatics. |
2104.01490 | Daniel Yamins | Rosa Cao and Daniel Yamins | Explanatory models in neuroscience: Part 1 -- taking mechanistic
abstraction seriously | null | null | null | null | q-bio.NC cs.NE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite the recent success of neural network models in mimicking animal
performance on visual perceptual tasks, critics worry that these models fail to
illuminate brain function. We take it that a central approach to explanation in
systems neuroscience is that of mechanistic modeling, where understanding the
system is taken to require fleshing out the parts, organization, and activities
of a system, and how those give rise to behaviors of interest. However, it
remains somewhat controversial what it means for a model to describe a
mechanism, and whether neural network models qualify as explanatory.
We argue that certain kinds of neural network models are actually good
examples of mechanistic models, when the right notion of mechanistic mapping is
deployed. Building on existing work on model-to-mechanism mapping (3M), we
describe criteria delineating such a notion, which we call 3M++. These criteria
require us, first, to identify a level of description that is both abstract but
detailed enough to be "runnable", and then, to construct model-to-brain
mappings using the same principles as those employed for brain-to-brain mapping
across individuals. Perhaps surprisingly, the abstractions required are those
already in use in experimental neuroscience, and are of the kind deployed in
the construction of more familiar computational models, just as the principles
of inter-brain mappings are very much in the spirit of those already employed
in the collection and analysis of data across animals.
In a companion paper, we address the relationship between optimization and
intelligibility, in the context of functional evolutionary explanations. Taken
together, mechanistic interpretations of computational models and the
dependencies between form and function illuminated by optimization processes
can help us to understand why brain systems are built the way they are.
| [
{
"created": "Sat, 3 Apr 2021 22:17:40 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Apr 2021 23:39:21 GMT",
"version": "v2"
}
] | 2021-04-13 | [
[
"Cao",
"Rosa",
""
],
[
"Yamins",
"Daniel",
""
]
] | Despite the recent success of neural network models in mimicking animal performance on visual perceptual tasks, critics worry that these models fail to illuminate brain function. We take it that a central approach to explanation in systems neuroscience is that of mechanistic modeling, where understanding the system is taken to require fleshing out the parts, organization, and activities of a system, and how those give rise to behaviors of interest. However, it remains somewhat controversial what it means for a model to describe a mechanism, and whether neural network models qualify as explanatory. We argue that certain kinds of neural network models are actually good examples of mechanistic models, when the right notion of mechanistic mapping is deployed. Building on existing work on model-to-mechanism mapping (3M), we describe criteria delineating such a notion, which we call 3M++. These criteria require us, first, to identify a level of description that is both abstract but detailed enough to be "runnable", and then, to construct model-to-brain mappings using the same principles as those employed for brain-to-brain mapping across individuals. Perhaps surprisingly, the abstractions required are those already in use in experimental neuroscience, and are of the kind deployed in the construction of more familiar computational models, just as the principles of inter-brain mappings are very much in the spirit of those already employed in the collection and analysis of data across animals. In a companion paper, we address the relationship between optimization and intelligibility, in the context of functional evolutionary explanations. Taken together, mechanistic interpretations of computational models and the dependencies between form and function illuminated by optimization processes can help us to understand why brain systems are built the way they are. |
2212.06612 | Jhonny Andres Agudelo Ruiz J. Agudelo | Johny Arteaga, Jhonny Agudelo, Alejandro Brazeiro, Hugo Fort | Land use/land cover dynamics on vulnerable regions in Uruguay approached
by a method combining Maximum Entropy and Population Dynamics | 21 pages, 5 figures, 4 tables | null | null | null | q-bio.PE physics.app-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an exploratory population dynamics approach, described by
Lotka-Volterra (LV) generalized equations, to explain/predict the dynamics and
competition between land use/land cover (LULC) classes over vulnerable regions
in Uruguay. We use the Mapbiomas-Pampa dataset composed of 20 annual LULC maps
from 2000-2019. From these LULC maps we extract the main LULC classes, their
spatial distribution and the time series of areas covered for each class. The
interaction coefficients between species are inferred through the pairwise
maximum entropy (PME) method from the spatial covariance matrices for different
training periods. The main finding is that this LVPME method globally
outperforms the more traditional Markov chains approach at predicting the
trajectories of areas of LULC classes.
| [
{
"created": "Tue, 13 Dec 2022 14:37:34 GMT",
"version": "v1"
}
] | 2022-12-14 | [
[
"Arteaga",
"Johny",
""
],
[
"Agudelo",
"Jhonny",
""
],
[
"Brazeiro",
"Alejandro",
""
],
[
"Fort",
"Hugo",
""
]
] | We present an exploratory population dynamics approach, described by Lotka-Volterra (LV) generalized equations, to explain/predict the dynamics and competition between land use/land cover (LULC) classes over vulnerable regions in Uruguay. We use the Mapbiomas-Pampa dataset composed of 20 annual LULC maps from 2000-2019. From these LULC maps we extract the main LULC classes, their spatial distribution and the time series of areas covered for each class. The interaction coefficients between species are inferred through the pairwise maximum entropy (PME) method from the spatial covariance matrices for different training periods. The main finding is that this LVPME method globally outperforms the more traditional Markov chains approach at predicting the trajectories of areas of LULC classes. |
2406.14801 | Christof Fehrman | Christof Fehrman and C. Daniel Meliza | Model Predictive Control of the Neural Manifold | null | null | null | null | q-bio.NC cs.SY eess.SY q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Neural manifolds are an attractive theoretical framework for characterizing
the complex behaviors of neural populations. However, many of the tools for
identifying these low-dimensional subspaces are correlational and provide
limited insight into the underlying dynamics. The ability to precisely control
this latent activity would allow researchers to investigate the structure and
function of neural manifolds. Employing techniques from the field of optimal
control, we simulate controlling the latent dynamics of a neural population
using closed-loop, dynamically generated sensory inputs. Using a spiking neural
network (SNN) as a model of a neural circuit, we find low-dimensional
representations of both the network activity (the neural manifold) and a set of
salient visual stimuli. With a data-driven latent dynamics model, we apply
model predictive control (MPC) to provide anticipatory, optimal control over
the trajectory of the circuit in a latent space. We are able to control the
latent dynamics of the SNN to follow several reference trajectories despite
observing only a subset of neurons and with a substantial amount of unknown
noise injected into the network. These results provide a framework to
experimentally test for causal relationships between manifold dynamics and
other variables of interest such as organismal behavior and BCI performance.
| [
{
"created": "Fri, 21 Jun 2024 00:36:33 GMT",
"version": "v1"
}
] | 2024-06-24 | [
[
"Fehrman",
"Christof",
""
],
[
"Meliza",
"C. Daniel",
""
]
] | Neural manifolds are an attractive theoretical framework for characterizing the complex behaviors of neural populations. However, many of the tools for identifying these low-dimensional subspaces are correlational and provide limited insight into the underlying dynamics. The ability to precisely control this latent activity would allow researchers to investigate the structure and function of neural manifolds. Employing techniques from the field of optimal control, we simulate controlling the latent dynamics of a neural population using closed-loop, dynamically generated sensory inputs. Using a spiking neural network (SNN) as a model of a neural circuit, we find low-dimensional representations of both the network activity (the neural manifold) and a set of salient visual stimuli. With a data-driven latent dynamics model, we apply model predictive control (MPC) to provide anticipatory, optimal control over the trajectory of the circuit in a latent space. We are able to control the latent dynamics of the SNN to follow several reference trajectories despite observing only a subset of neurons and with a substantial amount of unknown noise injected into the network. These results provide a framework to experimentally test for causal relationships between manifold dynamics and other variables of interest such as organismal behavior and BCI performance. |
2307.14806 | Maitham Yousif | Maitham G. Yousif, Fadhil G. Al-Amran, Alaa M. Sadeq, Nasser Ghaly
Yousif | Prevalence and Associated Factors of Human Papillomavirus Infection
among Iraqi Women | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | Human papillomavirus (HPV) is a significant public health concern, as it is a
leading cause of cervical cancer in women. However, data on the prevalence of
HPV infection among Iraqi women is scarce. This study aimed to estimate the
prevalence of HPV infection and its associated factors among Iraqi women aged
15-50 attending health centers. In this cross-sectional study, 362 female
participants aged 15-50 were recruited from health centers in Iraq. Serological
tests were used to screen for HPV infection. Sociodemographic information,
obstetric history, and contraceptive use were collected. Pap smears were
performed to assess cervical changes related to HPV infection. Of the 362
participants, 65 (17.96%) tested positive for HPV. The majority of HPV-positive
women were aged 30-35 years, housewives, and belonged to lower social classes.
Among HPV-positive women, 30% had abnormal Pap smears, with 55% diagnosed with
cervical intraepithelial neoplasia grade 1 (CIN1), 25% with CIN2, and 15% with
CIN3. Biopsy confirmed the diagnosis in 5% of cases. No significant association
was found between HPV infection and contraceptive use. Most HPV-positive women
were multiparous. This study reveals a considerable prevalence of HPV infection
among Iraqi women attending health centers, particularly in the age group of
30-35 years and among housewives. These findings highlight the need for
targeted public health interventions to increase HPV awareness, promote regular
screening, and improve access to healthcare services for women, especially
those from lower social classes. Further research is warranted to better
understand the factors contributing to HPV transmission in Iraq and to develop
effective prevention strategies.
| [
{
"created": "Thu, 27 Jul 2023 12:25:42 GMT",
"version": "v1"
}
] | 2023-07-28 | [
[
"Yousif",
"Maitham G.",
""
],
[
"Al-Amran",
"Fadhil G.",
""
],
[
"Sadeq",
"Alaa M.",
""
],
[
"Yousif",
"Nasser Ghaly",
""
]
] | Human papillomavirus (HPV) is a significant public health concern, as it is a leading cause of cervical cancer in women. However, data on the prevalence of HPV infection among Iraqi women is scarce. This study aimed to estimate the prevalence of HPV infection and its associated factors among Iraqi women aged 15-50 attending health centers. In this cross-sectional study, 362 female participants aged 15-50 were recruited from health centers in Iraq. Serological tests were used to screen for HPV infection. Sociodemographic information, obstetric history, and contraceptive use were collected. Pap smears were performed to assess cervical changes related to HPV infection. Of the 362 participants, 65 (17.96%) tested positive for HPV. The majority of HPV-positive women were aged 30-35 years, housewives, and belonged to lower social classes. Among HPV-positive women, 30% had abnormal Pap smears, with 55% diagnosed with cervical intraepithelial neoplasia grade 1 (CIN1), 25% with CIN2, and 15% with CIN3. Biopsy confirmed the diagnosis in 5% of cases. No significant association was found between HPV infection and contraceptive use. Most HPV-positive women were multiparous. This study reveals a considerable prevalence of HPV infection among Iraqi women attending health centers, particularly in the age group of 30-35 years and among housewives. These findings highlight the need for targeted public health interventions to increase HPV awareness, promote regular screening, and improve access to healthcare services for women, especially those from lower social classes. Further research is warranted to better understand the factors contributing to HPV transmission in Iraq and to develop effective prevention strategies. |
1202.6230 | Ganna Rozhnova | G. Rozhnova, A. Nunes and A. J. McKane | Phase lag in epidemics on a network of cities | 10 pages, 6 figures | Phys. Rev. E 85, 051912 (2012) | 10.1103/PhysRevE.85.051912 | null | q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the synchronisation and phase-lag of fluctuations in the number of
infected individuals in a network of cities between which individuals commute.
The frequency and amplitude of these oscillations are known to be very well
captured by the van Kampen system-size expansion, and we use this approximation
to compute the complex coherence function that describes their correlation. We
find that, if the infection rate differs from city to city and the coupling
between them is not too strong, these oscillations are synchronised with a well
defined phase lag between cities. The analytic description of the effect is
shown to be in good agreement with the results of stochastic simulations for
realistic population sizes.
| [
{
"created": "Tue, 28 Feb 2012 14:14:05 GMT",
"version": "v1"
},
{
"created": "Thu, 17 May 2012 20:04:55 GMT",
"version": "v2"
}
] | 2012-05-23 | [
[
"Rozhnova",
"G.",
""
],
[
"Nunes",
"A.",
""
],
[
"McKane",
"A. J.",
""
]
] | We study the synchronisation and phase-lag of fluctuations in the number of infected individuals in a network of cities between which individuals commute. The frequency and amplitude of these oscillations are known to be very well captured by the van Kampen system-size expansion, and we use this approximation to compute the complex coherence function that describes their correlation. We find that, if the infection rate differs from city to city and the coupling between them is not too strong, these oscillations are synchronised with a well defined phase lag between cities. The analytic description of the effect is shown to be in good agreement with the results of stochastic simulations for realistic population sizes. |
q-bio/0310004 | Boris Zaltzman | Ivan Gotz, Isaak Rubinstein, Eugene Tsvetkov and Boris Zaltzman | Complexity and hierarchical game of life | null | null | 10.1142/S0129183104005838 | null | q-bio.PE | null | Hierarchical structure is an essential part of complexity, an important notion
relevant for a wide range of applications ranging from biological population
dynamics through robotics to social sciences. In this paper we propose a simple
cellular-automata tool for the study of hierarchical population dynamics.
| [
{
"created": "Mon, 6 Oct 2003 20:07:44 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Gotz",
"Ivan",
""
],
[
"Rubinstein",
"Isaak",
""
],
[
"Tsvetkov",
"Eugene",
""
],
[
"Zaltzman",
"Boris",
""
]
] | Hierarchical structure is an essential part of complexity, an important notion relevant for a wide range of applications ranging from biological population dynamics through robotics to social sciences. In this paper we propose a simple cellular-automata tool for the study of hierarchical population dynamics. |
1907.05010 | Michael Meehan Dr | Michael T. Meehan and Robert C. Cope and Emma S. McBryde | On the probability of strain invasion in endemic settings: accounting
for individual heterogeneity and control in multi-strain dynamics | 32 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of antimicrobial drug resistance is an imminent threat to global
health that has warranted, and duly received, considerable attention within the
medical, microbiological and modelling communities. Outbreaks of drug-resistant
pathogens are ignited by the emergence and transmission of mutant variants
descended from wild-type strains circulating in the community. In this work we
investigate the stochastic dynamics of the emergence of a novel disease strain,
introduced into a population in which it must compete with an existing endemic
strain. In analogy with past work on single-strain epidemic outbreaks, we apply
a branching process approximation to calculate the probability that the new
strain becomes established. As expected, a critical determinant of the survival
prospects of any invading strain is the magnitude of its reproduction number
relative to that of the background endemic strain. Whilst in most circumstances
this ratio must exceed unity in order for invasion to be viable, we show that
differential control scenarios can lead to less-fit novel strains invading
populations hosting a fitter endemic one. This analysis and the accompanying
findings will inform our understanding of the mechanisms that have led to past
instances of successful strain invasion, and provide valuable lessons for
thwarting future drug-resistant strain incursions.
| [
{
"created": "Thu, 11 Jul 2019 06:15:29 GMT",
"version": "v1"
}
] | 2019-07-12 | [
[
"Meehan",
"Michael T.",
""
],
[
"Cope",
"Robert C.",
""
],
[
"McBryde",
"Emma S.",
""
]
] | The rise of antimicrobial drug resistance is an imminent threat to global health that has warranted, and duly received, considerable attention within the medical, microbiological and modelling communities. Outbreaks of drug-resistant pathogens are ignited by the emergence and transmission of mutant variants descended from wild-type strains circulating in the community. In this work we investigate the stochastic dynamics of the emergence of a novel disease strain, introduced into a population in which it must compete with an existing endemic strain. In analogy with past work on single-strain epidemic outbreaks, we apply a branching process approximation to calculate the probability that the new strain becomes established. As expected, a critical determinant of the survival prospects of any invading strain is the magnitude of its reproduction number relative to that of the background endemic strain. Whilst in most circumstances this ratio must exceed unity in order for invasion to be viable, we show that differential control scenarios can lead to less-fit novel strains invading populations hosting a fitter endemic one. This analysis and the accompanying findings will inform our understanding of the mechanisms that have led to past instances of successful strain invasion, and provide valuable lessons for thwarting future drug-resistant strain incursions. |
1705.01502 | Ran Rubin | Ran Rubin, L.F. Abbott and Haim Sompolinsky | Balanced Excitation and Inhibition are Required for High-Capacity,
Noise-Robust Neuronal Selectivity | Article and supplementary information | Proceedings of the National Academy of Sciences of the United
States of America, 114(41), 2017 | 10.1073/pnas.1705841114 | null | q-bio.NC cond-mat.dis-nn cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurons and networks in the cerebral cortex must operate reliably despite
multiple sources of noise. To evaluate the impact of both input and output
noise, we determine the robustness of single-neuron stimulus selective
responses, as well as the robustness of attractor states of networks of neurons
performing memory tasks. We find that robustness to output noise requires
synaptic connections to be in a balanced regime in which excitation and
inhibition are strong and largely cancel each other. We evaluate the conditions
required for this regime to exist and determine the properties of networks
operating within it. A plausible synaptic plasticity rule for learning that
balances weight configurations is presented. Our theory predicts an optimal
ratio of the number of excitatory and inhibitory synapses for maximizing the
encoding capacity of balanced networks for a given statistics of afferent
activations. Previous work has shown that balanced networks amplify
spatio-temporal variability and account for observed asynchronous irregular
states. Here we present a novel type of balanced network that amplifies small
changes in the impinging signals, and emerges automatically from learning to
perform neuronal and network functions robustly.
| [
{
"created": "Wed, 3 May 2017 16:38:01 GMT",
"version": "v1"
}
] | 2018-01-24 | [
[
"Rubin",
"Ran",
""
],
[
"Abbott",
"L. F.",
""
],
[
"Sompolinsky",
"Haim",
""
]
] | Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for a given statistics of afferent activations. Previous work has shown that balanced networks amplify spatio-temporal variability and account for observed asynchronous irregular states. Here we present a novel type of balanced network that amplifies small changes in the impinging signals, and emerges automatically from learning to perform neuronal and network functions robustly. |
2105.14351 | Mohsen Shahhosseini | Mohsen Shahhosseini, Guiping Hu, Saeed Khaki, Sotirios V. Archontoulis | Corn Yield Prediction with Ensemble CNN-DNN | null | null | 10.3389/fpls.2021.709008 | null | q-bio.QM cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | We investigate the predictive performance of two novel CNN-DNN machine
learning ensemble models in predicting county-level corn yields across the US
Corn Belt (12 states). The developed data set is a combination of management,
environment, and historical corn yields from 1980-2019. Two scenarios for
ensemble creation are considered: homogeneous and heterogeneous ensembles. In
homogeneous ensembles, the base CNN-DNN models are all the same, but they are
generated with a bagging procedure to ensure they exhibit a certain level of
diversity. Heterogeneous ensembles are created from different base CNN-DNN
models which share the same architecture but have different levels of depth.
Three types of ensemble creation methods were used to create several ensembles
for either of the scenarios: Basic Ensemble Method (BEM), Generalized Ensemble
Method (GEM), and stacked generalized ensembles. Results indicated that both
designed ensemble types (heterogeneous and homogeneous) outperform the ensembles
created from five individual ML models (linear regression, LASSO, random
forest, XGBoost, and LightGBM). Furthermore, by introducing improvements over
the heterogeneous ensembles, the homogeneous ensembles provide the most accurate
yield predictions across US Corn Belt states. This model could make 2019 yield
predictions with a root mean square error of 866 kg/ha, equivalent to 8.5%
relative root mean square, and could successfully explain about 77% of the
spatio-temporal variation in the corn grain yields. The significant predictive
power of this model can be leveraged for designing a reliable tool for corn
yield prediction which will, in turn, assist agronomic decision-makers.
| [
{
"created": "Sat, 29 May 2021 18:25:07 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Shahhosseini",
"Mohsen",
""
],
[
"Hu",
"Guiping",
""
],
[
"Khaki",
"Saeed",
""
],
[
"Archontoulis",
"Sotirios V.",
""
]
] | We investigate the predictive performance of two novel CNN-DNN machine learning ensemble models in predicting county-level corn yields across the US Corn Belt (12 states). The developed data set is a combination of management, environment, and historical corn yields from 1980-2019. Two scenarios for ensemble creation are considered: homogeneous and heterogeneous ensembles. In homogeneous ensembles, the base CNN-DNN models are all the same, but they are generated with a bagging procedure to ensure they exhibit a certain level of diversity. Heterogeneous ensembles are created from different base CNN-DNN models which share the same architecture but have different levels of depth. Three types of ensemble creation methods were used to create several ensembles for either of the scenarios: Basic Ensemble Method (BEM), Generalized Ensemble Method (GEM), and stacked generalized ensembles. Results indicated that both designed ensemble types (heterogeneous and homogeneous) outperform the ensembles created from five individual ML models (linear regression, LASSO, random forest, XGBoost, and LightGBM). Furthermore, by introducing improvements over the heterogeneous ensembles, the homogeneous ensembles provide the most accurate yield predictions across US Corn Belt states. This model could make 2019 yield predictions with a root mean square error of 866 kg/ha, equivalent to 8.5% relative root mean square, and could successfully explain about 77% of the spatio-temporal variation in the corn grain yields. The significant predictive power of this model can be leveraged for designing a reliable tool for corn yield prediction which will, in turn, assist agronomic decision-makers. |
1705.00816 | Panqu Wang | Panqu Wang, Garrison W. Cottrell | Central and peripheral vision for scene recognition: A
neurocomputational modeling exploration | http://jov.arvojournals.org/Article.aspx?articleid=2623232 | Journal of Vision April 2017, Vol.17, 9 | 10.1167/17.4.9 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | What are the roles of central and peripheral vision in human scene
recognition? Larson and Loschky (2009) showed that peripheral vision
contributes more than central vision in obtaining maximum scene recognition
accuracy. However, central vision is more efficient for scene recognition than
peripheral, based on the amount of visual area needed for accurate recognition.
In this study, we model and explain the results of Larson and Loschky (2009)
using a neurocomputational modeling approach. We show that the advantage of
peripheral vision in scene recognition, as well as the efficiency advantage for
central vision, can be replicated using state-of-the-art deep neural network
models. In addition, we propose and provide support for the hypothesis that the
peripheral advantage comes from the inherent usefulness of peripheral features.
This result is consistent with data presented by Thibaut, Tran, Szaffarczyk,
and Boucart (2014), who showed that patients with central vision loss can still
categorize natural scenes efficiently. Furthermore, by using a deep
mixture-of-experts model ("The Deep Model," or TDM) that receives central and
peripheral visual information on separate channels simultaneously, we show that
the peripheral advantage emerges naturally in the learning process: When
trained to categorize scenes, the model weights the peripheral pathway more
than the central pathway. As we have seen in our previous modeling work,
learning creates a transform that spreads different scene categories into
different regions in representational space. Finally, we visualize the features
for the two pathways, and find that different preferences for scene categories
emerge for the two pathways during the training process.
| [
{
"created": "Tue, 2 May 2017 06:44:02 GMT",
"version": "v1"
}
] | 2017-05-03 | [
[
"Wang",
"Panqu",
""
],
[
"Cottrell",
"Garrison W.",
""
]
] | What are the roles of central and peripheral vision in human scene recognition? Larson and Loschky (2009) showed that peripheral vision contributes more than central vision in obtaining maximum scene recognition accuracy. However, central vision is more efficient for scene recognition than peripheral, based on the amount of visual area needed for accurate recognition. In this study, we model and explain the results of Larson and Loschky (2009) using a neurocomputational modeling approach. We show that the advantage of peripheral vision in scene recognition, as well as the efficiency advantage for central vision, can be replicated using state-of-the-art deep neural network models. In addition, we propose and provide support for the hypothesis that the peripheral advantage comes from the inherent usefulness of peripheral features. This result is consistent with data presented by Thibaut, Tran, Szaffarczyk, and Boucart (2014), who showed that patients with central vision loss can still categorize natural scenes efficiently. Furthermore, by using a deep mixture-of-experts model ("The Deep Model," or TDM) that receives central and peripheral visual information on separate channels simultaneously, we show that the peripheral advantage emerges naturally in the learning process: When trained to categorize scenes, the model weights the peripheral pathway more than the central pathway. As we have seen in our previous modeling work, learning creates a transform that spreads different scene categories into different regions in representational space. Finally, we visualize the features for the two pathways, and find that different preferences for scene categories emerge for the two pathways during the training process. |
q-bio/0401016 | Leonard M. Sander | Charles R. Doering, Khachik V. Sargsyan, Leonard M. Sander | Extinction times for birth-death processes: exact results, continuum
asymptotics, and the failure of the Fokker-Planck approximation | 20 pages, 6 figures | null | null | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | We consider extinction times for a class of birth-death processes commonly
found in applications, where there is a control parameter which determines
whether the population quickly becomes extinct, or rather persists for a long
time. We give an exact expression for the discrete case and its asymptotic
expansion for large values of the population. We have results below the
threshold, at the threshold, and above the threshold (where there is a
quasi-stationary state and the extinction time is very long.) We show that the
Fokker-Planck approximation is valid only quite near the threshold. We compare
our analytical results to numerical simulations for the SIS epidemic model,
which is in the class that we treat. This is an interesting example of the
delicate relationship between discrete and continuum treatments of the same
problem.
| [
{
"created": "Sat, 10 Jan 2004 20:11:24 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Doering",
"Charles R.",
""
],
[
"Sargsyan",
"Khachik V.",
""
],
[
"Sander",
"Leonard M.",
""
]
] | We consider extinction times for a class of birth-death processes commonly found in applications, where there is a control parameter which determines whether the population quickly becomes extinct, or rather persists for a long time. We give an exact expression for the discrete case and its asymptotic expansion for large values of the population. We have results below the threshold, at the threshold, and above the threshold (where there is a quasi-stationary state and the extinction time is very long.) We show that the Fokker-Planck approximation is valid only quite near the threshold. We compare our analytical results to numerical simulations for the SIS epidemic model, which is in the class that we treat. This is an interesting example of the delicate relationship between discrete and continuum treatments of the same problem. |
1307.4327 | Ulrich Dobramysl | Ulrich Dobramysl, Uwe C. Tauber | Environmental vs demographic variability in stochastic predator-prey
models | 28 pages, 10 figures, Proceedings paper of the STATPHYS 25 conference | J. Stat. Mech. (2013) P10001 | 10.1088/1742-5468/2013/10/P10001 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In contrast to the neutral population cycles of the deterministic mean-field
Lotka--Volterra rate equations, including spatial structure and stochastic
noise in models for predator-prey interactions yields complex spatio-temporal
structures associated with long-lived erratic population oscillations.
Environmental variability in the form of quenched spatial randomness in the
predation rates results in more localized activity patches. Population
fluctuations in rare favorable regions in turn cause a remarkable increase in
the asymptotic densities of both predators and prey. Very intriguing features
are found when variable interaction rates are affixed to individual particles
rather than lattice sites. Stochastic dynamics with demographic variability in
conjunction with inheritable predation efficiencies generate non-trivial time
evolution for the predation rate distributions, yet with overall essentially
neutral optimization.
| [
{
"created": "Tue, 16 Jul 2013 16:16:22 GMT",
"version": "v1"
}
] | 2013-10-16 | [
[
"Dobramysl",
"Ulrich",
""
],
[
"Tauber",
"Uwe C.",
""
]
] | In contrast to the neutral population cycles of the deterministic mean-field Lotka--Volterra rate equations, including spatial structure and stochastic noise in models for predator-prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization. |
2408.01683 | Sarah Stednitz | Sarah Josephine Stednitz, Andrew Lesak, Adeline L Fecker, Peregrine
Painter, Phil Washbourne, Luca Mazzucato, Ethan K Scott | Probabilistic modeling reveals coordinated social interaction states and
their multisensory bases | 29 pages, 6 figures, 3 supplementary figures. *Joint first author,
+Joint last author | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Social behavior across animal species ranges from simple pairwise
interactions to thousands of individuals coordinating goal-directed movements.
Regardless of the scale, these interactions are governed by the interplay
between multimodal sensory information and the internal state of each animal.
Here, we investigate how animals use multiple sensory modalities to guide
social behavior in the highly social zebrafish (Danio rerio) and uncover the
complex features of pairwise interactions early in development. To identify
distinct behaviors and understand how they vary over time, we developed a new
hidden Markov model with constrained linear-model emissions to automatically
classify states of coordinated interaction, using the movements of one animal
to predict those of another. We discovered that social behaviors alternate
between two interaction states within a single experimental session,
distinguished by unique movements and timescales. Long-range interactions, akin
to shoaling, rely on vision, while mechanosensation underlies rapid
synchronized movements and parallel swimming, precursors of schooling.
Altogether, we observe spontaneous interactions in pairs of fish, develop novel
hidden Markov modeling to reveal two fundamental interaction modes, and
identify the sensory systems involved in each. Our modeling approach to
pairwise social interactions has broad applicability to a wide variety of
naturalistic behaviors and species and solves the challenge of detecting
transient couplings between quasi-periodic time series.
| [
{
"created": "Sat, 3 Aug 2024 06:45:42 GMT",
"version": "v1"
}
] | 2024-08-06 | [
[
"Stednitz",
"Sarah Josephine",
""
],
[
"Lesak",
"Andrew",
""
],
[
"Fecker",
"Adeline L",
""
],
[
"Painter",
"Peregrine",
""
],
[
"Washbourne",
"Phil",
""
],
[
"Mazzucato",
"Luca",
""
],
[
"Scott",
"Ethan K",
""
]
] | Social behavior across animal species ranges from simple pairwise interactions to thousands of individuals coordinating goal-directed movements. Regardless of the scale, these interactions are governed by the interplay between multimodal sensory information and the internal state of each animal. Here, we investigate how animals use multiple sensory modalities to guide social behavior in the highly social zebrafish (Danio rerio) and uncover the complex features of pairwise interactions early in development. To identify distinct behaviors and understand how they vary over time, we developed a new hidden Markov model with constrained linear-model emissions to automatically classify states of coordinated interaction, using the movements of one animal to predict those of another. We discovered that social behaviors alternate between two interaction states within a single experimental session, distinguished by unique movements and timescales. Long-range interactions, akin to shoaling, rely on vision, while mechanosensation underlies rapid synchronized movements and parallel swimming, precursors of schooling. Altogether, we observe spontaneous interactions in pairs of fish, develop novel hidden Markov modeling to reveal two fundamental interaction modes, and identify the sensory systems involved in each. Our modeling approach to pairwise social interactions has broad applicability to a wide variety of naturalistic behaviors and species and solves the challenge of detecting transient couplings between quasi-periodic time series. |
2208.12231 | Lia Papadopoulos | Lia Papadopoulos, Demian Battaglia, and Dani S. Bassett | Controlling collective dynamical states of mesoscale brain networks with
local perturbations | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Oscillatory synchrony is hypothesized to support the flow of information
between brain regions, with different phase-locked configurations enabling
activation of different effective interactions. Along these lines, past work
has proposed multistable phase-locking as a means for hardwired brain networks
to flexibly support multiple functional patterns, without having to reconfigure
their anatomical connections. Given the potential link between interareal
communication and phase-locked states, it is thus important to understand how
those states might be controlled to achieve rapid alteration of functional
connectivity in interareal circuits. Here, we study functional state control in
small networks of coupled neural masses that display collective multistability
under deterministic conditions, and that display more biologically-realistic
irregular oscillations and transient phase-locking when conditions are
stochastic. In particular, we investigate the global responses of these
mesoscale circuits to external signals that target only a single subunit.
Focusing mainly on the more realistic scenario wherein network dynamics are
stochastic, we identify conditions under which local inputs (i) can trigger
fast transitions to topologically distinct functional connectivity motifs that
are temporarily stable ("state switching"), (ii) can smoothly adjust the
spatial pattern of phase-relations for a particular set of lead-lag
relationships ("state morphing"), and (iii) fail to regulate global
phase-locking states. In total, our results add to a growing literature
highlighting that the modulation of multistable, interareal coherence patterns
could provide a basis for flexible brain network operation.
| [
{
"created": "Thu, 25 Aug 2022 17:29:32 GMT",
"version": "v1"
}
] | 2022-08-26 | [
[
"Papadopoulos",
"Lia",
""
],
[
"Battaglia",
"Demian",
""
],
[
"Bassett",
"Dani S.",
""
]
] | Oscillatory synchrony is hypothesized to support the flow of information between brain regions, with different phase-locked configurations enabling activation of different effective interactions. Along these lines, past work has proposed multistable phase-locking as a means for hardwired brain networks to flexibly support multiple functional patterns, without having to reconfigure their anatomical connections. Given the potential link between interareal communication and phase-locked states, it is thus important to understand how those states might be controlled to achieve rapid alteration of functional connectivity in interareal circuits. Here, we study functional state control in small networks of coupled neural masses that display collective multistability under deterministic conditions, and that display more biologically-realistic irregular oscillations and transient phase-locking when conditions are stochastic. In particular, we investigate the global responses of these mesoscale circuits to external signals that target only a single subunit. Focusing mainly on the more realistic scenario wherein network dynamics are stochastic, we identify conditions under which local inputs (i) can trigger fast transitions to topologically distinct functional connectivity motifs that are temporarily stable ("state switching"), (ii) can smoothly adjust the spatial pattern of phase-relations for a particular set of lead-lag relationships ("state morphing"), and (iii) fail to regulate global phase-locking states. In total, our results add to a growing literature highlighting that the modulation of multistable, interareal coherence patterns could provide a basis for flexible brain network operation. |
1605.00033 | Sepehr Ehsani | Sepehr Ehsani | A framework for philosophical biology | 16 pages, 1 figure, abstract abridged due to the 1,920 character
limit (please see PDF file for complete version) | null | 10.1007/978-3-030-41309-5_13 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in biology have mostly relied on theories that were subsequently
revised, expanded or eventually refuted using experimental and other means.
Theoretical biology used to primarily provide a basis to rationally examine the
frameworks within which biological experiments were carried out and to shed
light on overlooked gaps in understanding. Today, however, theoretical biology
has generally become synonymous with computational and mathematical biology.
This could in part be explained by a relatively recent tendency in which a
"data first", rather than a "theory first", approach is preferred. Moreover,
generating hypotheses has at times become procedural rather than theoretical.
This situation leaves our understanding enmeshed in data, which should be
disentangled from much noise. Given the many unresolved questions in biology
and medicine, it seems apt to revive the role of pure theory in the biological
sciences. This paper makes the case for a "philosophical biology"
(philbiology), distinct from but quite complementary to philosophy of biology
(philobiology), which would entail biological investigation through
philosophical approaches. Philbiology would thus be a reincarnation of
theoretical biology, adopting the true sense of the word "theory" and making
use of a rich tradition of serious philosophical approaches in the natural
sciences. A philbiological investigation, after clearly defining a given
biological problem, would aim to propose a set of empirical questions, along
with a class of possible solutions, about that problem. Importantly, whether or
not the questions can be tested using current experimental paradigms would be
secondary to whether the questions are inherently empirical or not. The final
goal of a philbiological investigation would be to develop a theoretical
framework that can lead observational and/or interventional experimental
studies of the defined problem.
| [
{
"created": "Fri, 29 Apr 2016 21:45:53 GMT",
"version": "v1"
}
] | 2020-05-22 | [
[
"Ehsani",
"Sepehr",
""
]
] | Advances in biology have mostly relied on theories that were subsequently revised, expanded or eventually refuted using experimental and other means. Theoretical biology used to primarily provide a basis to rationally examine the frameworks within which biological experiments were carried out and to shed light on overlooked gaps in understanding. Today, however, theoretical biology has generally become synonymous with computational and mathematical biology. This could in part be explained by a relatively recent tendency in which a "data first", rather than a "theory first", approach is preferred. Moreover, generating hypotheses has at times become procedural rather than theoretical. This situation leaves our understanding enmeshed in data, which should be disentangled from much noise. Given the many unresolved questions in biology and medicine, it seems apt to revive the role of pure theory in the biological sciences. This paper makes the case for a "philosophical biology" (philbiology), distinct from but quite complementary to philosophy of biology (philobiology), which would entail biological investigation through philosophical approaches. Philbiology would thus be a reincarnation of theoretical biology, adopting the true sense of the word "theory" and making use of a rich tradition of serious philosophical approaches in the natural sciences. A philbiological investigation, after clearly defining a given biological problem, would aim to propose a set of empirical questions, along with a class of possible solutions, about that problem. Importantly, whether or not the questions can be tested using current experimental paradigms would be secondary to whether the questions are inherently empirical or not. The final goal of a philbiological investigation would be to develop a theoretical framework that can lead observational and/or interventional experimental studies of the defined problem. |
1906.02785 | Josef Tkadlec | Josef Tkadlec, Andreas Pavlogiannis, Krishnendu Chatterjee, Martin A.
Nowak | Limits on amplifiers of natural selection under death-Birth updating | null | null | 10.1371/journal.pcbi.1007494 | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fixation probability of a single mutant invading a population of
residents is among the most widely-studied quantities in evolutionary dynamics.
Amplifiers of natural selection are population structures that increase the
fixation probability of advantageous mutants, compared to well-mixed
populations. Extensive studies have shown that many amplifiers exist for the
Birth-death Moran process, some of them substantially increasing the fixation
probability or even guaranteeing fixation in the limit of large population
size. On the other hand, no amplifiers are known for the death-Birth Moran
process, and computer-assisted exhaustive searches have failed to discover
amplification. In this work we resolve this disparity, by showing that any
amplification under death-Birth updating is necessarily \emph{bounded} and
\emph{transient}. Our boundedness result states that even if a population
structure does amplify selection, the resulting fixation probability is close
to that of the well-mixed population. Our transience result states that for any
population structure there exists a threshold $r^*$ such that the population
structure ceases to amplify selection if the mutant fitness advantage $r$ is
larger than $r^\star$. Finally, we also extend the above results to
$\delta$-death-Birth updating, which is a combination of Birth-death and
death-Birth updating. On the positive side, we identify population structures
that maintain amplification for a wide range of values $r$ and $\delta$. These
results demonstrate that amplification of natural selection depends on the
specific mechanisms of the evolutionary process.
| [
{
"created": "Thu, 6 Jun 2019 19:31:06 GMT",
"version": "v1"
}
] | 2020-07-01 | [
[
"Tkadlec",
"Josef",
""
],
[
"Pavlogiannis",
"Andreas",
""
],
[
"Chatterjee",
"Krishnendu",
""
],
[
"Nowak",
"Martin A.",
""
]
] | The fixation probability of a single mutant invading a population of residents is among the most widely-studied quantities in evolutionary dynamics. Amplifiers of natural selection are population structures that increase the fixation probability of advantageous mutants, compared to well-mixed populations. Extensive studies have shown that many amplifiers exist for the Birth-death Moran process, some of them substantially increasing the fixation probability or even guaranteeing fixation in the limit of large population size. On the other hand, no amplifiers are known for the death-Birth Moran process, and computer-assisted exhaustive searches have failed to discover amplification. In this work we resolve this disparity, by showing that any amplification under death-Birth updating is necessarily \emph{bounded} and \emph{transient}. Our boundedness result states that even if a population structure does amplify selection, the resulting fixation probability is close to that of the well-mixed population. Our transience result states that for any population structure there exists a threshold $r^*$ such that the population structure ceases to amplify selection if the mutant fitness advantage $r$ is larger than $r^\star$. Finally, we also extend the above results to $\delta$-death-Birth updating, which is a combination of Birth-death and death-Birth updating. On the positive side, we identify population structures that maintain amplification for a wide range of values $r$ and $\delta$. These results demonstrate that amplification of natural selection depends on the specific mechanisms of the evolutionary process. |
2112.00119 | Ulises Pereira-Obilinovic | Ulises Pereira-Obilinovic, Johnatan Aljadeff, Nicolas Brunel | Forgetting leads to chaos in attractor networks | null | null | null | null | q-bio.NC cond-mat.dis-nn | http://creativecommons.org/licenses/by/4.0/ | Attractor networks are an influential theory for memory storage in brain
systems. This theory has recently been challenged by the observation of strong
temporal variability in neuronal recordings during memory tasks. In this work,
we study a sparsely connected attractor network where memories are learned
according to a Hebbian synaptic plasticity rule. After recapitulating known
results for the continuous, sparsely connected Hopfield model, we investigate a
model in which new memories are learned continuously and old memories are
forgotten, using an online synaptic plasticity rule. We show that for a
forgetting time scale that optimizes storage capacity, the qualitative features
of the network's memory retrieval dynamics are age-dependent: most recent
memories are retrieved as fixed-point attractors while older memories are
retrieved as chaotic attractors characterized by strong heterogeneity and
temporal fluctuations. Therefore, fixed-point and chaotic attractors co-exist
in the network phase space. The network presents a continuum of statistically
distinguishable memory states, where chaotic fluctuations appear abruptly above
a critical age and then increase gradually until the memory disappears. We
develop a dynamical mean field theory (DMFT) to analyze the age-dependent
dynamics and compare the theory with simulations of large networks. Our
numerical simulations show that a high degree of sparsity is necessary for the
DMFT to accurately predict the network capacity. Finally, our theory provides
specific predictions for delay response tasks with aging memoranda. Our theory
of attractor networks that continuously learn new information at the price of
forgetting old memories can account for the observed diversity of retrieval
states in the cortex, and in particular the strong temporal fluctuations of
cortical activity.
| [
{
"created": "Tue, 30 Nov 2021 21:43:03 GMT",
"version": "v1"
}
] | 2021-12-02 | [
[
"Pereira-Obilinovic",
"Ulises",
""
],
[
"Aljadeff",
"Johnatan",
""
],
[
"Brunel",
"Nicolas",
""
]
] | Attractor networks are an influential theory for memory storage in brain systems. This theory has recently been challenged by the observation of strong temporal variability in neuronal recordings during memory tasks. In this work, we study a sparsely connected attractor network where memories are learned according to a Hebbian synaptic plasticity rule. After recapitulating known results for the continuous, sparsely connected Hopfield model, we investigate a model in which new memories are learned continuously and old memories are forgotten, using an online synaptic plasticity rule. We show that for a forgetting time scale that optimizes storage capacity, the qualitative features of the network's memory retrieval dynamics are age-dependent: most recent memories are retrieved as fixed-point attractors while older memories are retrieved as chaotic attractors characterized by strong heterogeneity and temporal fluctuations. Therefore, fixed-point and chaotic attractors co-exist in the network phase space. The network presents a continuum of statistically distinguishable memory states, where chaotic fluctuations appear abruptly above a critical age and then increase gradually until the memory disappears. We develop a dynamical mean field theory (DMFT) to analyze the age-dependent dynamics and compare the theory with simulations of large networks. Our numerical simulations show that a high degree of sparsity is necessary for the DMFT to accurately predict the network capacity. Finally, our theory provides specific predictions for delay response tasks with aging memoranda. Our theory of attractor networks that continuously learn new information at the price of forgetting old memories can account for the observed diversity of retrieval states in the cortex, and in particular the strong temporal fluctuations of cortical activity. |
1503.07040 | Thomas Michelitsch | Jicun Wang-Michelitsch, Thomas Michelitsch (DALEMBERT) | Traditional aging theories: which ones are useful? | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many theories have been proposed to answer two questions on aging: "Why do we
age?" and "How do we age?" Among them, evolutionary theories are proposed to
interpret the evolutionary advantage of aging, and "saving resources for group
benefit" is thought to be the purpose of aging. However, for saving resources,
a more economical strategy would be to cause rapid death in individuals who are
past the reproductive age rather than to make them age. Biological
theories are proposed to identify the causes and the biological processes of
aging. However, some theories including cell senescence/telomere theory,
gene-controlling theory, and developmental theory, have unfortunately ignored
the influence of damage on aging. Free-radical theory suggests that free
radicals by causing intrinsic damage are the main cause of aging. However, even
if intracellular free radicals cause injuries, they could be only associated
with some but not all of the aging changes. Damage (fault)-accumulation theory
predicts that faults as intrinsic damage can accumulate and lead to aging.
However, in fact an unrepaired fault could not possibly remain in a living
organism, since it can destroy the integrity of tissue structure and cause
rapid failure of the organism. These traditional theories are all incomplete in
interpreting aging phenomena. Nevertheless, developmental theory and damage
(fault)-accumulation theory are more useful, because they have recognized the
importance of damage and development process in aging. Some physical theories
are useful, because they point out the common characteristics of aging changes,
including loss of complexity, consequence of increase of entropy, and failure
of information-transmission. An advanced theory, which can include all of these
useful ideas in traditional theories, is needed.
| [
{
"created": "Tue, 24 Mar 2015 13:54:50 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jun 2016 15:31:36 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Jan 2017 15:48:01 GMT",
"version": "v3"
}
] | 2017-01-04 | [
[
"Wang-Michelitsch",
"Jicun",
"",
"DALEMBERT"
],
[
"Michelitsch",
"Thomas",
"",
"DALEMBERT"
]
] | Many theories have been proposed to answer two questions on aging: "Why do we age?" and "How do we age?" Among them, evolutionary theories are proposed to interpret the evolutionary advantage of aging, and "saving resources for group benefit" is thought to be the purpose of aging. However, for saving resources, a more economical strategy would be to cause rapid death in individuals who are past the reproductive age rather than to make them age. Biological theories are proposed to identify the causes and the biological processes of aging. However, some theories including cell senescence/telomere theory, gene-controlling theory, and developmental theory, have unfortunately ignored the influence of damage on aging. Free-radical theory suggests that free radicals by causing intrinsic damage are the main cause of aging. However, even if intracellular free radicals cause injuries, they could be only associated with some but not all of the aging changes. Damage (fault)-accumulation theory predicts that faults as intrinsic damage can accumulate and lead to aging. However, in fact an unrepaired fault could not possibly remain in a living organism, since it can destroy the integrity of tissue structure and cause rapid failure of the organism. These traditional theories are all incomplete in interpreting aging phenomena. Nevertheless, developmental theory and damage (fault)-accumulation theory are more useful, because they have recognized the importance of damage and development process in aging. Some physical theories are useful, because they point out the common characteristics of aging changes, including loss of complexity, consequence of increase of entropy, and failure of information-transmission. An advanced theory, which can include all of these useful ideas in traditional theories, is needed. |
1504.02719 | Jian Peng | Hyunghoon Cho, Bonnie Berger and Jian Peng | Diffusion Component Analysis: Unraveling Functional Topology in
Biological Networks | RECOMB 2015 | null | null | null | q-bio.MN cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex biological systems have been successfully modeled by biochemical and
genetic interaction networks, typically gathered from high-throughput (HTP)
data. These networks can be used to infer functional relationships between
genes or proteins. Using the intuition that the topological role of a gene in a
network relates to its biological function, local or diffusion based
"guilt-by-association" and graph-theoretic methods have had success in
inferring gene functions. Here we seek to improve function prediction by
integrating diffusion-based methods with a novel dimensionality reduction
technique to overcome the incomplete and noisy nature of network data. In this
paper, we introduce diffusion component analysis (DCA), a framework that plugs
in a diffusion model and learns a low-dimensional vector representation of each
node to encode the topological properties of a network. As a proof of concept,
we demonstrate DCA's substantial improvement over state-of-the-art
diffusion-based approaches in predicting protein function from molecular
interaction networks. Moreover, our DCA framework can integrate multiple
networks from heterogeneous sources, consisting of genomic information,
biochemical experiments and other resources, to even further improve function
prediction. Yet another layer of performance gain is achieved by integrating
the DCA framework with support vector machines that take our node vector
representations as features. Overall, our DCA framework provides a novel
representation of nodes in a network that can be used as a plug-in architecture
to other machine learning algorithms to decipher topological properties of and
obtain novel insights into interactomes.
| [
{
"created": "Fri, 10 Apr 2015 15:42:11 GMT",
"version": "v1"
}
] | 2015-04-13 | [
[
"Cho",
"Hyunghoon",
""
],
[
"Berger",
"Bonnie",
""
],
[
"Peng",
"Jian",
""
]
] | Complex biological systems have been successfully modeled by biochemical and genetic interaction networks, typically gathered from high-throughput (HTP) data. These networks can be used to infer functional relationships between genes or proteins. Using the intuition that the topological role of a gene in a network relates to its biological function, local or diffusion based "guilt-by-association" and graph-theoretic methods have had success in inferring gene functions. Here we seek to improve function prediction by integrating diffusion-based methods with a novel dimensionality reduction technique to overcome the incomplete and noisy nature of network data. In this paper, we introduce diffusion component analysis (DCA), a framework that plugs in a diffusion model and learns a low-dimensional vector representation of each node to encode the topological properties of a network. As a proof of concept, we demonstrate DCA's substantial improvement over state-of-the-art diffusion-based approaches in predicting protein function from molecular interaction networks. Moreover, our DCA framework can integrate multiple networks from heterogeneous sources, consisting of genomic information, biochemical experiments and other resources, to even further improve function prediction. Yet another layer of performance gain is achieved by integrating the DCA framework with support vector machines that take our node vector representations as features. Overall, our DCA framework provides a novel representation of nodes in a network that can be used as a plug-in architecture to other machine learning algorithms to decipher topological properties of and obtain novel insights into interactomes. |
1803.08595 | Tobias Galla | Peter G. Hufton, Elizabeth Buckingham-Jeffery, Tobias Galla | Calculating normal tissue complication probabilities and probabilities
of complication-free tumour control from stochastic models of population
dynamics | 27 pages, 5 figures, 4 tables | null | null | null | q-bio.PE cond-mat.stat-mech q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use a stochastic birth-death model for a population of cells to estimate
the normal tissue complication probability (NTCP) under a particular
radiotherapy protocol. We specifically allow for interaction between cells, via
a nonlinear logistic growth model. To capture some of the effects of intrinsic
noise in the population we develop several approximations of NTCP, using
Kramers-Moyal expansion techniques. These approaches provide an approximation
to the first and second moments of a general first-passage time problem in the
limit of large, but finite populations. We use this method to study NTCP in a
simple model of normal cells and in a model of normal and damaged cells. We
also study a combined model of normal tissue cells and tumour cells. Based on
existing methods to calculate tumour control probabilities, and our procedure
to approximate NTCP, we estimate the probability of complication-free tumour
control.
| [
{
"created": "Thu, 22 Mar 2018 21:58:41 GMT",
"version": "v1"
}
] | 2018-03-26 | [
[
"Hufton",
"Peter G.",
""
],
[
"Buckingham-Jeffery",
"Elizabeth",
""
],
[
"Galla",
"Tobias",
""
]
] | We use a stochastic birth-death model for a population of cells to estimate the normal tissue complication probability (NTCP) under a particular radiotherapy protocol. We specifically allow for interaction between cells, via a nonlinear logistic growth model. To capture some of the effects of intrinsic noise in the population we develop several approximations of NTCP, using Kramers-Moyal expansion techniques. These approaches provide an approximation to the first and second moments of a general first-passage time problem in the limit of large, but finite populations. We use this method to study NTCP in a simple model of normal cells and in a model of normal and damaged cells. We also study a combined model of normal tissue cells and tumour cells. Based on existing methods to calculate tumour control probabilities, and our procedure to approximate NTCP, we estimate the probability of complication-free tumour control. |
0803.1552 | Kavita Jain | Kavita Jain | Loss of least-loaded class in asexual populations due to drift and
epistasis | Version to appear in Genetics | Genetics 179, 2125 (2008) | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the dynamics of a non-recombining haploid population of finite
size which accumulates deleterious mutations irreversibly. This ratchet-like
process occurs at a finite speed in the absence of epistasis, but it has been
suggested that synergistic epistasis can halt the ratchet. Using a diffusion
theory, we find explicit analytical expressions for the typical time between
successive clicks of the ratchet for both non-epistatic and epistatic fitness
functions. Our calculations show that the inter-click time is of a scaling form
which in the absence of epistasis gives a speed that is determined by the size
of the least-loaded class and the selection coefficient. With synergistic
interactions, the ratchet speed is found to approach zero rapidly for arbitrary
epistasis. Our analytical results are in good agreement with the numerical
simulations.
| [
{
"created": "Tue, 11 Mar 2008 11:11:21 GMT",
"version": "v1"
},
{
"created": "Fri, 30 May 2008 04:26:22 GMT",
"version": "v2"
}
] | 2008-08-18 | [
[
"Jain",
"Kavita",
""
]
] | We consider the dynamics of a non-recombining haploid population of finite size which accumulates deleterious mutations irreversibly. This ratchet-like process occurs at a finite speed in the absence of epistasis, but it has been suggested that synergistic epistasis can halt the ratchet. Using a diffusion theory, we find explicit analytical expressions for the typical time between successive clicks of the ratchet for both non-epistatic and epistatic fitness functions. Our calculations show that the inter-click time is of a scaling form which in the absence of epistasis gives a speed that is determined by the size of the least-loaded class and the selection coefficient. With synergistic interactions, the ratchet speed is found to approach zero rapidly for arbitrary epistasis. Our analytical results are in good agreement with the numerical simulations. |
2007.13494 | Ionel Popescu | Marian Petrica and Radu D. Stochitoiu and Marius Leordeanu and Ionel
Popescu | A regime switching on Covid19 analysis and prediction in Romania | null | null | null | null | q-bio.PE physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a three-stage analysis of the evolution of Covid19
in Romania.
There are two main issues when it comes to pandemic prediction. The first is
that the reported numbers of infected and recovered are unreliable, whereas
the number of deaths is more accurate. The second issue is that there
were many factors which affected the evolution of the pandemic.
In this paper we propose an analysis in three stages. The first stage is
based on the classical SIR model which we do using a neural network. This
provides a first set of daily parameters.
In the second stage we propose a refinement of the SIR model in which we
separate the deceased into a distinct category. By using the first estimate and
a grid search, we give a daily estimation of the parameters.
The third stage is used to define a notion of turning points (local extremes)
for the parameters. We call a regime the time between these points.
We outline a general way based on time-varying parameters of SIRD to make
predictions.
| [
{
"created": "Mon, 27 Jul 2020 12:37:32 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Aug 2020 20:24:16 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Aug 2022 21:52:48 GMT",
"version": "v3"
}
] | 2022-09-01 | [
[
"Petrica",
"Marian",
""
],
[
"Stochitoiu",
"Radu D.",
""
],
[
"Leordeanu",
"Marius",
""
],
[
"Popescu",
"Ionel",
""
]
] | In this paper we propose a three-stage analysis of the evolution of Covid19 in Romania. There are two main issues when it comes to pandemic prediction. The first is that the reported numbers of infected and recovered are unreliable, whereas the number of deaths is more accurate. The second issue is that there were many factors which affected the evolution of the pandemic. In this paper we propose an analysis in three stages. The first stage is based on the classical SIR model which we do using a neural network. This provides a first set of daily parameters. In the second stage we propose a refinement of the SIR model in which we separate the deceased into a distinct category. By using the first estimate and a grid search, we give a daily estimation of the parameters. The third stage is used to define a notion of turning points (local extremes) for the parameters. We call a regime the time between these points. We outline a general way based on time-varying parameters of SIRD to make predictions. |
0711.1141 | Srividya Iyer-Biswas | Srividya Iyer-Biswas, F. Hayot, C. Jayaprakash | Transcriptional pulsing and consequent stochasticity in gene expression | 12 pages, 6 figures. Submitted to PloS Computational Biology on
10/04/2007 | null | null | null | q-bio.QM q-bio.SC | null | Transcriptional pulsing has been observed in both prokaryotes and eukaryotes
and plays a crucial role in cell to cell variability of protein and mRNA
numbers. The issue is how the time constants associated with episodes of
transcriptional bursting impact cellular mRNA and protein distributions and
reciprocally, to what extent experimentally observed distributions can be
attributed to transcriptional pulsing. We address these questions by
investigating the exact time-dependent solution of the Master equation for a
transcriptional pulsing model of mRNA distributions. We find a plethora of
results: we show that, among others, bimodal and long-tailed (power law)
distributions occur in the steady state as the rate constants are varied over
biologically significant time scales. Since steady state distributions may not
be reached experimentally we present results for the time evolution of the
distributions. Because cellular behavior is essentially determined by proteins,
we investigate the effect of the different mRNA distributions on the
corresponding protein distributions. We delineate the regimes of rate constants
for which the protein distribution mimics the mRNA distribution and those for
which the protein distribution deviates significantly from the mRNA
distribution.
| [
{
"created": "Wed, 7 Nov 2007 19:08:45 GMT",
"version": "v1"
}
] | 2009-09-29 | [
[
"Iyer-Biswas",
"Srividya",
""
],
[
"Hayot",
"F.",
""
],
[
"Jayaprakash",
"C.",
""
]
] | Transcriptional pulsing has been observed in both prokaryotes and eukaryotes and plays a crucial role in cell to cell variability of protein and mRNA numbers. The issue is how the time constants associated with episodes of transcriptional bursting impact cellular mRNA and protein distributions and reciprocally, to what extent experimentally observed distributions can be attributed to transcriptional pulsing. We address these questions by investigating the exact time-dependent solution of the Master equation for a transcriptional pulsing model of mRNA distributions. We find a plethora of results: we show that, among others, bimodal and long-tailed (power law) distributions occur in the steady state as the rate constants are varied over biologically significant time scales. Since steady state distributions may not be reached experimentally we present results for the time evolution of the distributions. Because cellular behavior is essentially determined by proteins, we investigate the effect of the different mRNA distributions on the corresponding protein distributions. We delineate the regimes of rate constants for which the protein distribution mimics the mRNA distribution and those for which the protein distribution deviates significantly from the mRNA distribution. |
1002.1593 | Manoj Gopalakrishnan | Melissa Reneaux (Delhi University) and Manoj Gopalakrishnan (IIT
Madras) | Theoretical results for chemotactic response and drift of E. coli in a
weak attractant gradient | 24 pages with 5 figures | J. Theor. Biol. 266(1), 99-106 (2010) | null | null | q-bio.QM cond-mat.stat-mech q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The bacterium Escherichia coli (E. coli) moves in its natural environment in
a series of straight runs, interrupted by tumbles which cause change of
direction. It performs chemotaxis towards chemo-attractants by extending the
duration of runs in the direction of the source. When there is a spatial
gradient in the attractant concentration, this bias produces a drift velocity
directed towards its source, whereas in a uniform concentration, E. coli
adapts, almost perfectly in the case of methyl aspartate. Recently,
microfluidic experiments have measured the drift velocity of E. coli in
precisely controlled
attractant gradients, but no general theoretical expression for the same
exists. With this motivation, we study an analytically soluble model here,
based on the Barkai-Leibler model, originally introduced to explain the perfect
adaptation. Rigorous mathematical expressions are obtained for the chemotactic
response function and the drift velocity in the limit of weak gradients and
under the assumption of completely random tumbles. The theoretical predictions
compare favorably with experimental results, especially at high concentrations.
We further show that the signal transduction network weakens the dependence of
the drift on concentration, thus enhancing the range of sensitivity.
| [
{
"created": "Mon, 8 Feb 2010 12:45:53 GMT",
"version": "v1"
}
] | 2010-07-12 | [
[
"Reneaux",
"Melissa",
"",
"Delhi University"
],
[
"Gopalakrishnan",
"Manoj",
"",
"IIT\n Madras"
]
] | The bacterium Escherichia coli (E. coli) moves in its natural environment in a series of straight runs, interrupted by tumbles which cause change of direction. It performs chemotaxis towards chemo-attractants by extending the duration of runs in the direction of the source. When there is a spatial gradient in the attractant concentration, this bias produces a drift velocity directed towards its source, whereas in a uniform concentration, E. coli adapts, almost perfectly in the case of methyl aspartate. Recently, microfluidic experiments have measured the drift velocity of E. coli in precisely controlled attractant gradients, but no general theoretical expression for the same exists. With this motivation, we study an analytically soluble model here, based on the Barkai-Leibler model, originally introduced to explain the perfect adaptation. Rigorous mathematical expressions are obtained for the chemotactic response function and the drift velocity in the limit of weak gradients and under the assumption of completely random tumbles. The theoretical predictions compare favorably with experimental results, especially at high concentrations. We further show that the signal transduction network weakens the dependence of the drift on concentration, thus enhancing the range of sensitivity. |
1510.08002 | Max Alekseyev | Nikita Alexeev and Max A. Alekseyev | Estimation of the True Evolutionary Distance under the Fragile Breakage
Model | null | BMC Genomics 18:Suppl 4 (2017), 356 | 10.1186/s12864-017-3733-3 | null | q-bio.GN math.PR q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to estimate the evolutionary distance between extant genomes
plays a crucial role in many phylogenomic studies. Often such estimation is
based on the parsimony assumption, implying that the distance between two
genomes can be estimated as the rearrangement distance, equal to the minimal number
of genome rearrangements required to transform one genome into the other.
However, in reality the parsimony assumption may not always hold, emphasizing
the need for estimation that does not rely on the rearrangement distance. The
distance that accounts for the actual (rather than minimal) number of
rearrangements between two genomes is often referred to as the true
evolutionary distance. While there exists a method for the true evolutionary
distance estimation, it however assumes that genomes can be broken by
rearrangements equally likely at any position in the course of evolution. This
assumption, known as the random breakage model, has recently been refuted in
favor of the more rigorous fragile breakage model postulating that only certain
"fragile" genomic regions are prone to rearrangements.
We propose a new method for estimating the true evolutionary distance between
two genomes under the fragile breakage model. We evaluate the proposed method
on simulated genomes, which show its high accuracy. We further apply the
proposed method for estimation of evolutionary distances within a set of five
yeast genomes and a set of two fish genomes.
| [
{
"created": "Tue, 27 Oct 2015 17:38:29 GMT",
"version": "v1"
},
{
"created": "Thu, 25 May 2017 21:09:12 GMT",
"version": "v2"
}
] | 2017-05-29 | [
[
"Alexeev",
"Nikita",
""
],
[
"Alekseyev",
"Max A.",
""
]
] | The ability to estimate the evolutionary distance between extant genomes plays a crucial role in many phylogenomic studies. Often such estimation is based on the parsimony assumption, implying that the distance between two genomes can be estimated as the rearrangement distance, equal to the minimal number of genome rearrangements required to transform one genome into the other. However, in reality the parsimony assumption may not always hold, emphasizing the need for estimation that does not rely on the rearrangement distance. The distance that accounts for the actual (rather than minimal) number of rearrangements between two genomes is often referred to as the true evolutionary distance. While there exists a method for the true evolutionary distance estimation, it however assumes that genomes can be broken by rearrangements equally likely at any position in the course of evolution. This assumption, known as the random breakage model, has recently been refuted in favor of the more rigorous fragile breakage model postulating that only certain "fragile" genomic regions are prone to rearrangements. We propose a new method for estimating the true evolutionary distance between two genomes under the fragile breakage model. We evaluate the proposed method on simulated genomes, which show its high accuracy. We further apply the proposed method for estimation of evolutionary distances within a set of five yeast genomes and a set of two fish genomes. |
1812.03796 | Richard Betzel | Richard F. Betzel, Katherine C. Wood, Christopher Angeloni, Maria
Neimark Geffen, Danielle S. Bassett | Stability of spontaneous, correlated activity in mouse auditory cortex | 15 pages, 3 figures | null | 10.1371/journal.pcbi.1007360 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural systems can be modeled as networks of functionally connected neural
elements. The resulting network can be analyzed using mathematical tools from
network science and graph theory to quantify the system's topological
organization and to better understand its function. While the network-based
approach is common in the analysis of large-scale neural systems probed by
non-invasive neuroimaging, few studies have used network science to study the
organization of networks reconstructed at the cellular level, and thus many
very basic and fundamental questions remain unanswered. Here, we used
two-photon calcium imaging to record spontaneous activity from the same set of
cells in mouse auditory cortex over the course of several weeks. We reconstruct
functional networks in which cells are linked to one another by edges weighted
according to the correlation of their fluorescence traces. We show that the
networks exhibit modular structure across multiple topological scales and that
these multi-scale modules unfold as part of a hierarchy. We also show that, on
average, network architecture becomes increasingly dissimilar over time, with
similarity decaying monotonically with the distance (in time) between sessions.
Finally, we show that a small fraction of cells maintain strongly-correlated
activity over multiple days, forming a stable temporal core surrounded by a
fluctuating and variable periphery. Our work provides a careful methodological
blueprint for future studies of spontaneous activity measured by two-photon
calcium imaging using cutting-edge computational methods and machine learning
algorithms informed by explicit graphical models from network science. The
methods are easily extended to additional datasets, opening the possibility of
studying cellular level network organization of neural systems and how that
organization is modulated by stimuli or altered in models of disease.
| [
{
"created": "Mon, 10 Dec 2018 13:53:10 GMT",
"version": "v1"
}
] | 2020-07-01 | [
[
"Betzel",
"Richard F.",
""
],
[
"Wood",
"Katherine C.",
""
],
[
"Angeloni",
"Christopher",
""
],
[
"Geffen",
"Maria Neimark",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Neural systems can be modeled as networks of functionally connected neural elements. The resulting network can be analyzed using mathematical tools from network science and graph theory to quantify the system's topological organization and to better understand its function. While the network-based approach is common in the analysis of large-scale neural systems probed by non-invasive neuroimaging, few studies have used network science to study the organization of networks reconstructed at the cellular level, and thus many very basic and fundamental questions remain unanswered. Here, we used two-photon calcium imaging to record spontaneous activity from the same set of cells in mouse auditory cortex over the course of several weeks. We reconstruct functional networks in which cells are linked to one another by edges weighted according to the correlation of their fluorescence traces. We show that the networks exhibit modular structure across multiple topological scales and that these multi-scale modules unfold as part of a hierarchy. We also show that, on average, network architecture becomes increasingly dissimilar over time, with similarity decaying monotonically with the distance (in time) between sessions. Finally, we show that a small fraction of cells maintain strongly-correlated activity over multiple days, forming a stable temporal core surrounded by a fluctuating and variable periphery. Our work provides a careful methodological blueprint for future studies of spontaneous activity measured by two-photon calcium imaging using cutting-edge computational methods and machine learning algorithms informed by explicit graphical models from network science. The methods are easily extended to additional datasets, opening the possibility of studying cellular level network organization of neural systems and how that organization is modulated by stimuli or altered in models of disease. |
1901.08521 | Andrea Gabrielli | Rossana Mastrandrea, Fabrizio Piras, Andrea Gabrielli, Nerisa Banaj,
Guido Caldarelli, Gianfranco Spalletta, Tommaso Gili | The unbalanced reorganization of weaker functional connections induces
the altered brain network topology in schizophrenia | 37 pages, 8 figures | null | null | null | q-bio.NC cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network neuroscience has shed light on the functional and structural
modifications occurring to the brain associated with the phenomenology of
schizophrenia. In particular, resting-state functional networks have helped our
understanding of the illness by highlighting the global and local alterations
within the cerebral organization. We investigated the robustness of the brain
functional architecture in forty-four medicated schizophrenic patients and
forty healthy comparators through an advanced network analysis of resting-state
functional magnetic resonance imaging data. The networks in patients showed
more resistance to disconnection than in healthy controls, with an evident
discrepancy between the two groups in the node degree distribution computed
along a percolation process. Despite a substantial similarity of the basal
functional organization between the two groups, the expected hierarchy of the
healthy brain's modular organization crumbles in schizophrenia, showing a
peculiar arrangement of the functional connections, characterized by several
topologically equivalent backbones.
| [
{
"created": "Thu, 24 Jan 2019 17:33:14 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Oct 2019 10:18:19 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Apr 2021 11:17:17 GMT",
"version": "v3"
}
] | 2021-04-30 | [
[
"Mastrandrea",
"Rossana",
""
],
[
"Piras",
"Fabrizio",
""
],
[
"Gabrielli",
"Andrea",
""
],
[
"Banaj",
"Nerisa",
""
],
[
"Caldarelli",
"Guido",
""
],
[
"Spalletta",
"Gianfranco",
""
],
[
"Gili",
"Tommaso",
""
]
] | Network neuroscience has shed light on the functional and structural modifications occurring to the brain associated with the phenomenology of schizophrenia. In particular, resting-state functional networks have helped our understanding of the illness by highlighting the global and local alterations within the cerebral organization. We investigated the robustness of the brain functional architecture in forty-four medicated schizophrenic patients and forty healthy comparators through an advanced network analysis of resting-state functional magnetic resonance imaging data. The networks in patients showed more resistance to disconnection than in healthy controls, with an evident discrepancy between the two groups in the node degree distribution computed along a percolation process. Despite a substantial similarity of the basal functional organization between the two groups, the expected hierarchy of the healthy brain's modular organization crumbles in schizophrenia, showing a peculiar arrangement of the functional connections, characterized by several topologically equivalent backbones. |
1408.4480 | G Sampath | G. Sampath | A Tandem Cell for Nanopore-based DNA Sequencing with Exonuclease | 14 pages, 8 figures | null | null | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A tandem cell is proposed for DNA sequencing in which an exonuclease enzyme
cleaves bases (mononucleotides) from a strand of DNA for identification inside
a nanopore. It has two nanopores and three compartments with the structure
[cis1, upstream nanopore (UNP), trans1=cis2, downstream nanopore (DNP),
trans2]. The exonuclease is attached to the exit side of UNP in trans1/cis2. A
cleaved base cannot regress into cis1 because of the remaining DNA strand in
UNP. A profiled electric field over DNP with positive and negative components
slows down base translocation through DNP. The proposed structure is modeled
with a Fokker-Planck equation, and a piecewise solution is presented. Results from
the model indicate that, with probability approaching 1, bases enter DNP in their
natural order, are detected without any loss, and do not regress into DNP after
progressing into trans2. Sequencing efficiency with a tandem cell would then be
determined solely by the level of discrimination among the base types inside
DNP.
| [
{
"created": "Sun, 17 Aug 2014 15:43:18 GMT",
"version": "v1"
}
] | 2014-08-21 | [
[
"Sampath",
"G.",
""
]
] | A tandem cell is proposed for DNA sequencing in which an exonuclease enzyme cleaves bases (mononucleotides) from a strand of DNA for identification inside a nanopore. It has two nanopores and three compartments with the structure [cis1, upstream nanopore (UNP), trans1=cis2, downstream nanopore (DNP), trans2]. The exonuclease is attached to the exit side of UNP in trans1/cis2. A cleaved base cannot regress into cis1 because of the remaining DNA strand in UNP. A profiled electric field over DNP with positive and negative components slows down base translocation through DNP. The proposed structure is modeled with a Fokker-Planck equation, and a piecewise solution is presented. Results from the model indicate that, with probability approaching 1, bases enter DNP in their natural order, are detected without any loss, and do not regress into DNP after progressing into trans2. Sequencing efficiency with a tandem cell would then be determined solely by the level of discrimination among the base types inside DNP. |
2307.09720 | Varad Deshmukh | Golnar Gharooni-Fard, Morgan Byers, Varad Deshmukh, Elizabeth Bradley,
Carissa Mayo, Chad Topaz, and Orit Peleg | A Computational Topology-based Spatiotemporal Analysis Technique for
Honeybee Aggregation | null | null | null | null | q-bio.QM math.AT nlin.AO | http://creativecommons.org/licenses/by/4.0/ | A primary challenge in understanding collective behavior is characterizing
the spatiotemporal dynamics of the group. We employ topological data analysis
to explore the structure of honeybee aggregations that form during
trophallaxis, which is the direct exchange of food among nestmates. From the
positions of individual bees, we build topological summaries called CROCKER
matrices to track the morphology of the group as a function of scale and time.
Each column of a CROCKER matrix records the number of topological features,
such as the number of components or holes, that exist in the data for a range
of analysis scales at a given point in time. To detect important changes in the
morphology of the group from this information, we first apply dimensionality
reduction techniques to these matrices and then use classic clustering and
change-point detection algorithms on the resulting scalar data. A test of this
methodology on synthetic data from an agent-based model of honeybees and their
trophallaxis behavior shows two distinct phases: a dispersed phase that occurs
before food is introduced, followed by a food-exchange phase during which
aggregations form. We then move to laboratory data, successfully detecting the
same two phases across multiple experiments. Interestingly, our method reveals
an additional phase change towards the end of the experiments, suggesting the
possibility of another dispersed phase that follows the food-exchange phase.
| [
{
"created": "Wed, 19 Jul 2023 02:03:32 GMT",
"version": "v1"
}
] | 2023-07-20 | [
[
"Gharooni-Fard",
"Golnar",
""
],
[
"Byers",
"Morgan",
""
],
[
"Deshmukh",
"Varad",
""
],
[
"Bradley",
"Elizabeth",
""
],
[
"Mayo",
"Carissa",
""
],
[
"Topaz",
"Chad",
""
],
[
"Peleg",
"Orit",
""
]
] | A primary challenge in understanding collective behavior is characterizing the spatiotemporal dynamics of the group. We employ topological data analysis to explore the structure of honeybee aggregations that form during trophallaxis, which is the direct exchange of food among nestmates. From the positions of individual bees, we build topological summaries called CROCKER matrices to track the morphology of the group as a function of scale and time. Each column of a CROCKER matrix records the number of topological features, such as the number of components or holes, that exist in the data for a range of analysis scales at a given point in time. To detect important changes in the morphology of the group from this information, we first apply dimensionality reduction techniques to these matrices and then use classic clustering and change-point detection algorithms on the resulting scalar data. A test of this methodology on synthetic data from an agent-based model of honeybees and their trophallaxis behavior shows two distinct phases: a dispersed phase that occurs before food is introduced, followed by a food-exchange phase during which aggregations form. We then move to laboratory data, successfully detecting the same two phases across multiple experiments. Interestingly, our method reveals an additional phase change towards the end of the experiments, suggesting the possibility of another dispersed phase that follows the food-exchange phase. |
1201.3574 | Reynaldo Daniel Pinto Ph.D. | Caroline Garcia Forlim, Reynaldo Daniel Pinto | Noninvasive Realistic Stimulation/Recording of Freely Swimming Weakly
Electric Fish: Movement Detection and Discharge Entropy to Infer Fish
Behavior | 25 pages, 7 figures | null | null | null | q-bio.QM physics.bio-ph q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly electric fish are unique models in Neuroscience allowing
experimentalists to access, with non-invasive techniques, a central nervous
system-generated spatio-temporal electric pattern of pulses with roles in at
least two complex and not yet completely understood abilities:
electrocommunication and electrolocation. We developed an apparatus to allow
realistic stimulation and simultaneous recording of electric pulses in freely
moving Gymnotus carapo for very long periods (several days). Voltage time
series from a 3-dimensional array of sensitive dipoles that detect the
electric field at several positions underwater were digitized, and home-made
real-time software allowed reliable recording of pulse timestamps,
independently of the fish's position, and also inference of fish movement. A
stimulus fish was mimicked by a dipole electrode that reproduced the voltage
time series of real conspecific pulses, but according to previously recorded
timestamp sequences that could be chosen by the experimenter. Two independent
variables were used to analyze fish behavior: the entropy of the recorded
timestamp sequences and the movement of the fish inferred from pulse amplitude
variability at each detection dipole. All fish presented very long transient
exploratory behavior (about 8 hours) when exposed to a new environment in the
absence of stimuli. After the transient there were several intervals (5 min to
2 hours) in which entropy vanished and no movement was observed, which could
be associated with behavioral sleep. Our experiments also revealed that fish
are able to discriminate between real and random stimulus distributions by
changing the timing probability of the next discharge. Moreover, most fish
presented behavioral sleep periods when the artificial fish timestamp sequence
was random, but no fish showed any behavioral sleep period when the artificial
fish fired according to a real fish timestamp series.
| [
{
"created": "Tue, 17 Jan 2012 17:55:18 GMT",
"version": "v1"
}
] | 2012-01-24 | [
[
"Forlim",
"Caroline Garcia",
""
],
[
"Pinto",
"Reynaldo Daniel",
""
]
] | Weakly electric fish are unique models in Neuroscience allowing experimentalists to access, with non-invasive techniques, a central nervous system-generated spatio-temporal electric pattern of pulses with roles in at least two complex and not yet completely understood abilities: electrocommunication and electrolocation. We developed an apparatus to allow realistic stimulation and simultaneous recording of electric pulses in freely moving Gymnotus carapo for very long periods (several days). Voltage time series from a 3-dimensional array of sensitive dipoles that detect the electric field at several positions underwater were digitized, and home-made real-time software allowed reliable recording of pulse timestamps, independently of the fish's position, and also inference of fish movement. A stimulus fish was mimicked by a dipole electrode that reproduced the voltage time series of real conspecific pulses, but according to previously recorded timestamp sequences that could be chosen by the experimenter. Two independent variables were used to analyze fish behavior: the entropy of the recorded timestamp sequences and the movement of the fish inferred from pulse amplitude variability at each detection dipole. All fish presented very long transient exploratory behavior (about 8 hours) when exposed to a new environment in the absence of stimuli. After the transient there were several intervals (5 min to 2 hours) in which entropy vanished and no movement was observed, which could be associated with behavioral sleep. Our experiments also revealed that fish are able to discriminate between real and random stimulus distributions by changing the timing probability of the next discharge. Moreover, most fish presented behavioral sleep periods when the artificial fish timestamp sequence was random, but no fish showed any behavioral sleep period when the artificial fish fired according to a real fish timestamp series. |
2007.11113 | Andreas Mayer | Mario U. Gaimann, Maximilian Nguyen, Jonathan Desponds, Andreas Mayer | Early life imprints the hierarchy of T cell clone sizes | 8 pages, 4 figures + 27 pages supplement with 20 figures | eLife 2020 9:e61639 | 10.7554/eLife.61639 | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The adaptive immune system responds to pathogens by selecting clones of cells
with specific receptors. While clonal selection in response to particular
antigens has been studied in detail, it is unknown how a lifetime of exposures
to many antigens collectively shape the immune repertoire. Here, through
mathematical modeling and statistical analyses of T cell receptor sequencing
data we demonstrate that clonal expansions during a perinatal time window leave
a long-lasting imprint on the human T cell repertoire. We demonstrate how the
empirical scaling law relating the rank of the largest clones to their size can
emerge from clonal growth during repertoire formation. We statistically
identify early founded clones and find that they are indeed highly enriched
among the largest clones. This enrichment persists even after decades of human
aging, in a way that is quantitatively predicted by a model of fluctuating
clonal selection. Our work presents a quantitative theory of human T cell
dynamics compatible with the statistical laws of repertoire organization and
provides a mechanism for how early clonal dynamics imprint the hierarchy of T
cell clone sizes with implications for pathogen defense and autoimmunity.
| [
{
"created": "Tue, 21 Jul 2020 22:09:22 GMT",
"version": "v1"
}
] | 2021-01-06 | [
[
"Gaimann",
"Mario U.",
""
],
[
"Nguyen",
"Maximilian",
""
],
[
"Desponds",
"Jonathan",
""
],
[
"Mayer",
"Andreas",
""
]
] | The adaptive immune system responds to pathogens by selecting clones of cells with specific receptors. While clonal selection in response to particular antigens has been studied in detail, it is unknown how a lifetime of exposures to many antigens collectively shape the immune repertoire. Here, through mathematical modeling and statistical analyses of T cell receptor sequencing data we demonstrate that clonal expansions during a perinatal time window leave a long-lasting imprint on the human T cell repertoire. We demonstrate how the empirical scaling law relating the rank of the largest clones to their size can emerge from clonal growth during repertoire formation. We statistically identify early founded clones and find that they are indeed highly enriched among the largest clones. This enrichment persists even after decades of human aging, in a way that is quantitatively predicted by a model of fluctuating clonal selection. Our work presents a quantitative theory of human T cell dynamics compatible with the statistical laws of repertoire organization and provides a mechanism for how early clonal dynamics imprint the hierarchy of T cell clone sizes with implications for pathogen defense and autoimmunity. |
1802.07914 | Chuan Zhang | Chuan Zhang (1 and 2 and 3), Lulu Ge (1 and 2 and 3), Xiaohu You (2)
((1) Lab of Efficient Architectures for Digital-communication and
Signal-processing (LEADS), (2) National Mobile Communications Research
Laboratory, (3) Quantum Information Center, Southeast University, China) | Synthesizing a Clock Signal with Reactions---Part I: Duty Cycle
Implementation Based on Gears | null | null | null | null | q-bio.MN physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Timing is of fundamental importance in biology and our life. Borrowing ideas
from mechanics, we map our clock signals onto a gear system, in pursuit of
better depiction of a clock signal implemented with chemical reaction networks
(CRNs). On a chassis of gear theory, more quantitative descriptions are offered
for our method. Inspired by gears, our work to synthesize a tunable clock
signal could be divided into two parts. Part I, this paper, mainly focuses on
the implementation of clock signals with three types of duty cycles, namely
$1/2$, $1/N$ ($N > 2$), and $M/N$. Part II addresses frequency alteration
issues of clock signals. Guaranteed by
existing literature, the experimental chassis can be taken care of by DNA
strand displacement reactions, which lay a solid foundation for the physical
implementation of nearly arbitrary CRNs.
| [
{
"created": "Thu, 22 Feb 2018 06:39:26 GMT",
"version": "v1"
}
] | 2018-02-23 | [
[
"Zhang",
"Chuan",
"",
"1 and 2 and 3"
],
[
"Ge",
"Lulu",
"",
"1 and 2 and 3"
],
[
"You",
"Xiaohu",
""
]
] | Timing is of fundamental importance in biology and our life. Borrowing ideas from mechanics, we map our clock signals onto a gear system, in pursuit of better depiction of a clock signal implemented with chemical reaction networks (CRNs). On a chassis of gear theory, more quantitative descriptions are offered for our method. Inspired by gears, our work to synthesize a tunable clock signal could be divided into two parts. Part I, this paper, mainly focuses on the implementation of clock signals with three types of duty cycles, namely $1/2$, $1/N$ ($N > 2$), and $M/N$. Part II addresses frequency alteration issues of clock signals. Guaranteed by existing literature, the experimental chassis can be taken care of by DNA strand displacement reactions, which lay a solid foundation for the physical implementation of nearly arbitrary CRNs. |
1108.3495 | Pierre Degond | Emmanuel Boissard (IMT), Pierre Degond (IMT), S\'ebastien Motsch | Trail formation based on directed pheromone deposition | null | Journal of Mathematical Biology, 66 (2013), pp. 1267-1301 | 10.1007/s00285-012-0529-6 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an Individual-Based Model of ant-trail formation. The ants are
modeled as self-propelled particles which deposit directed pheromones and
interact with them through an alignment interaction. The directed pheromones
are intended to model pieces of trails, while the alignment interaction captures
the tendency for an ant to follow a trail when it meets it. Thanks to adequate
quantitative descriptors of the trail patterns, the existence of a phase
transition as the ant-pheromone interaction frequency is increased can be
evidenced. Finally, we propose both kinetic and fluid descriptions of this
model and analyze the capabilities of the fluid model to develop trail
patterns. We observe that the development of patterns by fluid models requires
extra trail amplification mechanisms that are not needed at the
Individual-Based Model level.
| [
{
"created": "Tue, 16 Aug 2011 06:12:21 GMT",
"version": "v1"
}
] | 2014-04-08 | [
[
"Boissard",
"Emmanuel",
"",
"IMT"
],
[
"Degond",
"Pierre",
"",
"IMT"
],
[
"Motsch",
"Sébastien",
""
]
] | We propose an Individual-Based Model of ant-trail formation. The ants are modeled as self-propelled particles which deposit directed pheromones and interact with them through an alignment interaction. The directed pheromones are intended to model pieces of trails, while the alignment interaction captures the tendency for an ant to follow a trail when it meets it. Thanks to adequate quantitative descriptors of the trail patterns, the existence of a phase transition as the ant-pheromone interaction frequency is increased can be evidenced. Finally, we propose both kinetic and fluid descriptions of this model and analyze the capabilities of the fluid model to develop trail patterns. We observe that the development of patterns by fluid models requires extra trail amplification mechanisms that are not needed at the Individual-Based Model level. |
1912.05781 | Dr Rowena Ball | Rowena Ball and John Brindley | Anomalous thermal fluctuation distribution sustains proto-metabolic
cycles and biomolecule synthesis | 4 pages, 3 figures | Physical Chemistry Chemical Physics 2019 | 10.1039/C9CP05756K | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An environment far from equilibrium is thought to be a necessary condition
for the origin and persistence of life. In this context we report open-flow
simulations of a non-enzymic proto-metabolic system, in which hydrogen peroxide
acts both as oxidant and driver of thermochemical cycling. We find that a
Gaussian perturbed input produces a non-Boltzmann output fluctuation
distribution around the mean oscillation maximum. Our main result is that net
biosynthesis can occur under fluctuating cyclical but not steady drive.
Consequently we may revise the necessary condition to "dynamically far from
equilibrium".
| [
{
"created": "Thu, 12 Dec 2019 05:48:15 GMT",
"version": "v1"
}
] | 2020-02-19 | [
[
"Ball",
"Rowena",
""
],
[
"Brindley",
"John",
""
]
] | An environment far from equilibrium is thought to be a necessary condition for the origin and persistence of life. In this context we report open-flow simulations of a non-enzymic proto-metabolic system, in which hydrogen peroxide acts both as oxidant and driver of thermochemical cycling. We find that a Gaussian perturbed input produces a non-Boltzmann output fluctuation distribution around the mean oscillation maximum. Our main result is that net biosynthesis can occur under fluctuating cyclical but not steady drive. Consequently we may revise the necessary condition to "dynamically far from equilibrium". |
1707.03382 | Maxim Makukov | Maxim A. Makukov and Vladimir I. shCherbak | SETI in vivo: testing the we-are-them hypothesis | Published in the International Journal of Astrobiology | International Journal of Astrobiology, 2017 | 10.1017/S1473550417000210 | null | q-bio.OT physics.pop-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After it was proposed that life on Earth might descend from seeding by an
earlier civilization, some authors noted that this alternative offers a
testable aspect: the seeds could be supplied with a signature that might be
found in extant organisms. In particular, it was suggested that the optimal
location for such an artifact is the genetic code, as the least evolving part
of cells. However, as the mainstream view goes, this scenario is too
speculative and cannot be meaningfully tested because encoding/decoding a
signature within the genetic code is ill-defined, so any retrieval attempt is
doomed to guesswork. Here we refresh the seeded-Earth hypothesis and discuss
the motivation for inserting a signature. We then show that "biological SETI"
involves even weaker assumptions than traditional SETI and admits a
well-defined methodological framework. After assessing the possibility in terms
of molecular and evolutionary biology, we formalize the approach and, adopting
the guideline of SETI that encoding/decoding should follow from first
principles and be convention-free, develop a retrieval strategy. Applied to the
canonical code, it reveals a nontrivial precision structure of interlocked
systematic attributes. To assess this result in view of the initial assumption,
we perform statistical, comparison, interdependence, and semiotic analyses.
Statistical analysis reveals no causal connection to evolutionary models of the
code, interdependence analysis precludes overinterpretation, and comparison
analysis shows that known code variations lack any precision-logic structures,
in agreement with these variations being post-seeding deviations from the
canonical code. Finally, semiotic analysis shows not only that the found
attributes are consistent with the initial assumption, but that they make
perfect sense from a SETI perspective, as they maintain some of the most
universal codes of culture.
| [
{
"created": "Tue, 11 Jul 2017 17:39:26 GMT",
"version": "v1"
}
] | 2017-07-12 | [
[
"Makukov",
"Maxim A.",
""
],
[
"shCherbak",
"Vladimir I.",
""
]
] | After it was proposed that life on Earth might descend from seeding by an earlier civilization, some authors noted that this alternative offers a testable aspect: the seeds could be supplied with a signature that might be found in extant organisms. In particular, it was suggested that the optimal location for such an artifact is the genetic code, as the least evolving part of cells. However, as the mainstream view goes, this scenario is too speculative and cannot be meaningfully tested because encoding/decoding a signature within the genetic code is ill-defined, so any retrieval attempt is doomed to guesswork. Here we refresh the seeded-Earth hypothesis and discuss the motivation for inserting a signature. We then show that "biological SETI" involves even weaker assumptions than traditional SETI and admits a well-defined methodological framework. After assessing the possibility in terms of molecular and evolutionary biology, we formalize the approach and, adopting the guideline of SETI that encoding/decoding should follow from first principles and be convention-free, develop a retrieval strategy. Applied to the canonical code, it reveals a nontrivial precision structure of interlocked systematic attributes. To assess this result in view of the initial assumption, we perform statistical, comparison, interdependence, and semiotic analyses. Statistical analysis reveals no causal connection to evolutionary models of the code, interdependence analysis precludes overinterpretation, and comparison analysis shows that known code variations lack any precision-logic structures, in agreement with these variations being post-seeding deviations from the canonical code. Finally, semiotic analysis shows not only that the found attributes are consistent with the initial assumption, but that they make perfect sense from a SETI perspective, as they maintain some of the most universal codes of culture. |
1211.3993 | Joel Adamson | Joel James Adamson | Evolution of male life histories and age-dependent sexual signals under
female choice | 18 pages, 7 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Sexual selection theory models evolution of sexual signals and preferences
using simple life histories. However, life-history models predict that males
benefit from increasing sexual investment approaching old age, producing
age-dependent sexual traits. Age-dependent traits require time and energy to
grow, and will not fully mature before individuals enter mating competition.
Early evolutionary stages pose several problems for these traits. Age-dependent
traits suffer from strong viability selection and gain little benefit from mate
choice when rare. Few males will grow large traits, and they will rarely
encounter choosy females. The evolutionary origins of age-dependent traits
therefore remain unclear. I used numerical simulations to analyze evolution of
preferences, condition (viability) and traits in an age-structured population.
Traits in the model depended on age and condition ("good genes") in a
population with no genetic drift. I asked (1) if age-dependent indicator traits
and their preferences can originate depending on the strength of selection and
the size of the trait; (2) which mode of development (age-dependent versus
age-independent) eventually predominates when both modes occur in the
population; and (3) if age-independent traits can invade a population with
age-dependent traits. Age-dependent traits evolve under weaker selection and at
smaller sizes than age-independent traits. Evolution of age-independent traits
depends only on trait size, whereas evolution of age-dependent traits depends
on both strength of selection and growth rate. Invasion of age-independence
into populations with established traits followed a similar pattern with
age-dependence predominating at small trait sizes. I suggest that reduced adult
mortality facilitates sexual selection by favoring the evolution of
age-dependent sexual signals under weak selection.
| [
{
"created": "Fri, 16 Nov 2012 19:01:38 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jul 2013 20:10:26 GMT",
"version": "v2"
},
{
"created": "Fri, 30 Aug 2013 19:23:46 GMT",
"version": "v3"
},
{
"created": "Fri, 13 Dec 2013 20:01:25 GMT",
"version": "v4"
}
] | 2013-12-16 | [
[
"Adamson",
"Joel James",
""
]
] | Sexual selection theory models evolution of sexual signals and preferences using simple life histories. However, life-history models predict that males benefit from increasing sexual investment approaching old age, producing age-dependent sexual traits. Age-dependent traits require time and energy to grow, and will not fully mature before individuals enter mating competition. Early evolutionary stages pose several problems for these traits. Age-dependent traits suffer from strong viability selection and gain little benefit from mate choice when rare. Few males will grow large traits, and they will rarely encounter choosy females. The evolutionary origins of age-dependent traits therefore remain unclear. I used numerical simulations to analyze evolution of preferences, condition (viability) and traits in an age-structured population. Traits in the model depended on age and condition ("good genes") in a population with no genetic drift. I asked (1) if age-dependent indicator traits and their preferences can originate depending on the strength of selection and the size of the trait; (2) which mode of development (age-dependent versus age-independent) eventually predominates when both modes occur in the population; and (3) if age-independent traits can invade a population with age-dependent traits. Age-dependent traits evolve under weaker selection and at smaller sizes than age-independent traits. Evolution of age-independent traits depends only on trait size, whereas evolution of age-dependent traits depends on both strength of selection and growth rate. Invasion of age-independence into populations with established traits followed a similar pattern with age-dependence predominating at small trait sizes. I suggest that reduced adult mortality facilitates sexual selection by favoring the evolution of age-dependent sexual signals under weak selection. |
2007.12242 | Ruili Huang | Hu Zhu, Catherine Z. Chen, Srilatha Sakamuru, Anton Simeonov, Mathew
D. Hall, Menghang Xia, Wei Zheng, Ruili Huang | Mining of high throughput screening database reveals AP-1 and autophagy
pathways as potential targets for COVID-19 therapeutics | null | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent global pandemic of Coronavirus Disease 2019 (COVID-19) caused by
the new coronavirus SARS-CoV-2 presents an urgent need for new therapeutic
candidates. Many efforts have been devoted to screening existing drug libraries
with the hope to repurpose approved drugs as potential treatments for COVID-19.
However, the antiviral mechanisms of action for the drugs found active in these
phenotypic screens are largely unknown. To deconvolute the viral targets for
more effective anti-COVID-19 drug development, we mined our in-house database
of approved drug screens against 994 assays and compared their activity
profiles with the drug activity profile in a cytopathic effect (CPE) assay of
SARS-CoV-2. We found that the autophagy and AP-1 signaling pathway activity
profiles are significantly correlated with the anti-SARS-CoV-2 activity
profile. In addition, a class of neurology/psychiatry drugs was found
significantly enriched with anti-SARS-CoV-2 activity. Taken together, these
results have provided new insights into SARS-CoV-2 infection and potential
targets for COVID-19 therapeutics.
| [
{
"created": "Thu, 23 Jul 2020 20:30:21 GMT",
"version": "v1"
}
] | 2020-07-27 | [
[
"Zhu",
"Hu",
""
],
[
"Chen",
"Catherine Z.",
""
],
[
"Sakamuru",
"Srilatha",
""
],
[
"Simeonov",
"Anton",
""
],
[
"Hall",
"Mathew D.",
""
],
[
"Xia",
"Menghang",
""
],
[
"Zheng",
"Wei",
""
],
[
"Huang",
"Ruili",
""
]
] | The recent global pandemic of Coronavirus Disease 2019 (COVID-19) caused by the new coronavirus SARS-CoV-2 presents an urgent need for new therapeutic candidates. Many efforts have been devoted to screening existing drug libraries with the hope to repurpose approved drugs as potential treatments for COVID-19. However, the antiviral mechanisms of action for the drugs found active in these phenotypic screens are largely unknown. To deconvolute the viral targets for more effective anti-COVID-19 drug development, we mined our in-house database of approved drug screens against 994 assays and compared their activity profiles with the drug activity profile in a cytopathic effect (CPE) assay of SARS-CoV-2. We found that the autophagy and AP-1 signaling pathway activity profiles are significantly correlated with the anti-SARS-CoV-2 activity profile. In addition, a class of neurology/psychiatry drugs was found significantly enriched with anti-SARS-CoV-2 activity. Taken together, these results have provided new insights into SARS-CoV-2 infection and potential targets for COVID-19 therapeutics. |
1504.07203 | Thierry Mora | Thierry Mora | Physical limit to concentration sensing amid spurious ligands | null | Phys. Rev. Lett. 115, 038102 (2015) | 10.1103/PhysRevLett.115.038102 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To adapt their behaviour in changing environments, cells sense concentrations
by binding external ligands to their receptors. However, incorrect ligands may
bind nonspecifically to receptors, and when their concentration is large, this
binding activity may interfere with the sensing of the ligand of interest.
Here, I derive analytically the physical limit to the accuracy of concentration
sensing amid a large number of interfering ligands. A scaling transition is
found when the mean bound time of correct ligands is twice that of incorrect
ligands. I discuss how the physical bound can be approached by a cascade of
receptor states generalizing kinetic proof-reading schemes.
| [
{
"created": "Mon, 27 Apr 2015 18:49:11 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Jul 2015 05:30:47 GMT",
"version": "v2"
}
] | 2015-07-23 | [
[
"Mora",
"Thierry",
""
]
] | To adapt their behaviour in changing environments, cells sense concentrations by binding external ligands to their receptors. However, incorrect ligands may bind nonspecifically to receptors, and when their concentration is large, this binding activity may interfere with the sensing of the ligand of interest. Here, I derive analytically the physical limit to the accuracy of concentration sensing amid a large number of interfering ligands. A scaling transition is found when the mean bound time of correct ligands is twice that of incorrect ligands. I discuss how the physical bound can be approached by a cascade of receptor states generalizing kinetic proof-reading schemes. |
1209.3945 | Denis Menshykau | G\'eraldine Celli\`ere, Denis Menshykau and Dagmar Iber | Simulations demonstrate a simple network to be sufficient to control
branch point selection, smooth muscle and vasculature formation during lung
branching morphogenesis | Initially published at Biology Open | G. Celli\`ere, D Menshykau and Dagmar Iber (2012) Biology Open 1,
775-788 | 10.1242/bio.20121339 | null | q-bio.TO q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Proper lung functioning requires not only a correct structure of the
conducting airway tree, but also the simultaneous development of smooth muscles
and vasculature. Lung branching morphogenesis is strongly stereotyped and
involves the recursive use of only three modes of branching. We have previously
shown that the experimentally described interactions between Fibroblast growth
factor (FGF)10, Sonic hedgehog (SHH) and Patched (Ptc) can give rise to a
Turing mechanism that not only reproduces the experimentally observed wildtype
branching pattern but also, in part counterintuitive, patterns in mutant mice.
Here we show that, even though many proteins affect smooth muscle formation and
the expression of Vegfa, an inducer of blood vessel formation, it is sufficient
to add FGF9 to the FGF10/SHH/Ptc module to successfully predict simultaneously
the emergence of smooth muscles in the clefts between growing lung buds, and
Vegfa expression in the distal sub-epithelial mesenchyme. Our model reproduces
the phenotype of both wildtype and relevant mutant mice, as well as the results
of most culture conditions described in the literature.
| [
{
"created": "Tue, 18 Sep 2012 13:09:53 GMT",
"version": "v1"
}
] | 2012-09-19 | [
[
"Cellière",
"Géraldine",
""
],
[
"Menshykau",
"Denis",
""
],
[
"Iber",
"Dagmar",
""
]
] | Proper lung functioning requires not only a correct structure of the conducting airway tree, but also the simultaneous development of smooth muscles and vasculature. Lung branching morphogenesis is strongly stereotyped and involves the recursive use of only three modes of branching. We have previously shown that the experimentally described interactions between Fibroblast growth factor (FGF)10, Sonic hedgehog (SHH) and Patched (Ptc) can give rise to a Turing mechanism that not only reproduces the experimentally observed wildtype branching pattern but also, in part counterintuitive, patterns in mutant mice. Here we show that, even though many proteins affect smooth muscle formation and the expression of Vegfa, an inducer of blood vessel formation, it is sufficient to add FGF9 to the FGF10/SHH/Ptc module to successfully predict simultaneously the emergence of smooth muscles in the clefts between growing lung buds, and Vegfa expression in the distal sub-epithelial mesenchyme. Our model reproduces the phenotype of both wildtype and relevant mutant mice, as well as the results of most culture conditions described in the literature. |
1305.2769 | Eric Tromeur | Eric Tromeur, Lars Rudolf, Thilo Gross | Impact of dispersal on the stability of metapopulations | null | Journal of Theoretical Biology 392 (2016) 1-11 | 10.1016/j.jtbi.2015.11.029 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dispersal is a key ecological process that enables local populations to form
spatially extended systems called metapopulations. In the present study, we
investigate how dispersal affects the linear stability of a general
single-species metapopulation model. We discuss both the influence of local
within-patch dynamics and the effects of various dispersal behaviors on
stability. We find that positive density-dependent dispersal and positive
density-dependent settlement are destabilizing dispersal behaviors while
negative density-dependent dispersal and negative density-dependent settlement
are stabilizing. It is also shown that dispersal has a stabilizing impact on
heterogeneous metapopulations that correlates positively with the number of
patches and the connectance of metapopulation networks.
| [
{
"created": "Mon, 13 May 2013 13:16:44 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jan 2016 14:19:53 GMT",
"version": "v2"
}
] | 2016-01-11 | [
[
"Tromeur",
"Eric",
""
],
[
"Rudolf",
"Lars",
""
],
[
"Gross",
"Thilo",
""
]
] | Dispersal is a key ecological process that enables local populations to form spatially extended systems called metapopulations. In the present study, we investigate how dispersal affects the linear stability of a general single-species metapopulation model. We discuss both the influence of local within-patch dynamics and the effects of various dispersal behaviors on stability. We find that positive density-dependent dispersal and positive density-dependent settlement are destabilizing dispersal behaviors while negative density-dependent dispersal and negative density-dependent settlement are stabilizing. It is also shown that dispersal has a stabilizing impact on heterogeneous metapopulations that correlates positively with the number of patches and the connectance of metapopulation networks. |
2401.10237 | Larry Bull | Larry Bull | Evolving Diploid Boolean and Multi-Valued Gene Networks | arXiv admin note: substantial text overlap with arXiv:2302.01694 | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Boolean networks have been widely used to explore aspects of gene regulation,
traditionally with a single network. A modified form of the model to explore
the effects of increasing the number of gene states has also recently been
introduced. In this paper, these discrete dynamical networks are evolved as
diploids within rugged fitness landscapes to explore their behaviour. Results
suggest the general properties of haploid networks in similar circumstances
remain for diploids. The previously proposed inherent fitness landscape
smoothing properties of eukaryotic sex are shown to be exhibited in these
dynamical systems, as is their propensity to change in size based upon the
characteristics of the network and fitness landscape.
| [
{
"created": "Sun, 19 Nov 2023 18:00:39 GMT",
"version": "v1"
}
] | 2024-01-22 | [
[
"Bull",
"Larry",
""
]
] | Boolean networks have been widely used to explore aspects of gene regulation, traditionally with a single network. A modified form of the model to explore the effects of increasing the number of gene states has also recently been introduced. In this paper, these discrete dynamical networks are evolved as diploids within rugged fitness landscapes to explore their behaviour. Results suggest the general properties of haploid networks in similar circumstances remain for diploids. The previously proposed inherent fitness landscape smoothing properties of eukaryotic sex are shown to be exhibited in these dynamical systems, as is their propensity to change in size based upon the characteristics of the network and fitness landscape. |
0711.0700 | Peter van der Gulik | Peter van der Gulik | Three phases in the evolution of the standard genetic code: how
translation could get started | null | null | null | null | q-bio.PE | null | A primordial genetic code is proposed, having only four codons assigned, GGC
meaning glycine, GAC meaning aspartate/glutamate, GCC meaning alanine-like and
GUC meaning valine-like. Pathways of ambiguity reduction enlarged the codon
repertoire with CUC meaning leucine, AUC meaning isoleucine, ACC meaning
threonine-like and GAG meaning glutamate. Introduction of UNN anticodons, in a
next episode of code evolution in which nonsense elimination was the leading
theme, introduced a family box structure superposed on the original mirror
structure. Finally, growth rate was the leading theme during the remaining
repertoire expansion, explaining the ordered phylogenetic pattern of
aminoacyl-tRNA synthetases. The special role of natural aptamers in the process
is highlighted, and the error robustness characteristics of the code are shown
to have evolved by way of a stepwise, restricted enlargement of the tRNA
repertoire, instead of by an exhaustive selection process testing myriads of
codes.
| [
{
"created": "Mon, 5 Nov 2007 17:16:17 GMT",
"version": "v1"
}
] | 2007-11-06 | [
[
"van der Gulik",
"Peter",
""
]
] | A primordial genetic code is proposed, having only four codons assigned, GGC meaning glycine, GAC meaning aspartate/glutamate, GCC meaning alanine-like and GUC meaning valine-like. Pathways of ambiguity reduction enlarged the codon repertoire with CUC meaning leucine, AUC meaning isoleucine, ACC meaning threonine-like and GAG meaning glutamate. Introduction of UNN anticodons, in a next episode of code evolution in which nonsense elimination was the leading theme, introduced a family box structure superposed on the original mirror structure. Finally, growth rate was the leading theme during the remaining repertoire expansion, explaining the ordered phylogenetic pattern of aminoacyl-tRNA synthetases. The special role of natural aptamers in the process is highlighted, and the error robustness characteristics of the code are shown to have evolved by way of a stepwise, restricted enlargement of the tRNA repertoire, instead of by an exhaustive selection process testing myriads of codes. |
1301.7115 | Madhu Advani Madhu Advani | Madhu Advani, Subhaneil Lahiri, and Surya Ganguli | Statistical mechanics of complex neural systems and high dimensional
data | 72 pages, 8 figures, iopart.cls, to appear in JSTAT | null | 10.1088/1742-5468/2013/03/P03014 | null | q-bio.NC cond-mat.dis-nn stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent experimental advances in neuroscience have opened new vistas into the
immense complexity of neuronal networks. This proliferation of data challenges
us on two parallel fronts. First, how can we form adequate theoretical
frameworks for understanding how dynamical network processes cooperate across
widely disparate spatiotemporal scales to solve important computational
problems? And second, how can we extract meaningful models of neuronal systems
from high dimensional datasets? To aid in these challenges, we give a
pedagogical review of a collection of ideas and theoretical methods arising at
the intersection of statistical physics, computer science and neurobiology. We
introduce the interrelated replica and cavity methods, which originated in
statistical physics as powerful ways to quantitatively analyze large highly
heterogeneous systems of many interacting degrees of freedom. We also introduce
the closely related notion of message passing in graphical models, which
originated in computer science as a distributed algorithm capable of solving
large inference and optimization problems involving many coupled variables. We
then show how both the statistical physics and computer science perspectives
can be applied in a wide diversity of contexts to problems arising in
theoretical neuroscience and data analysis. Along the way we discuss spin
glasses, learning theory, illusions of structure in noise, random matrices,
dimensionality reduction, and compressed sensing, all within the unified
formalism of the replica method. Moreover, we review recent conceptual
connections between message passing in graphical models, and neural computation
and learning. Overall, these ideas illustrate how statistical physics and
computer science might provide a lens through which we can uncover emergent
computational functions buried deep within the dynamical complexities of
neuronal networks.
| [
{
"created": "Wed, 30 Jan 2013 00:50:05 GMT",
"version": "v1"
}
] | 2015-06-12 | [
[
"Advani",
"Madhu",
""
],
[
"Lahiri",
"Subhaneil",
""
],
[
"Ganguli",
"Surya",
""
]
] | Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? And second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction, and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. |
1305.4858 | Pietro Hiram Guzzi | Pietro Hiram Guzzi, Simone Truglia, Pierangelo Veltri, Mario Cannataro | Thresholding of Semantic Similarity Networks using a Spectral Graph
Based Technique | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic similarity measures (SSMs) refer to a set of algorithms used to
quantify the similarity of two or more terms belonging to the same ontology.
Ontology terms may be associated with concepts; for instance, in computational
biology genes and proteins are associated with terms of biological ontologies.
Thus, SSMs may be used to quantify the similarity of genes and proteins
starting from the comparison of the associated annotations. SSMs have been
recently used to compare genes and proteins even on a system level scale. More
recently some works have focused on the building and analysis of Semantic
Similarity Networks (SSNs), i.e. weighted networks in which nodes represent
genes or proteins while weighted edges represent the semantic similarity score
among them. SSNs are quasi-complete networks, thus their analysis presents
different challenges that should be addressed. For instance, the need for the
introduction of reliable thresholds for the elimination of meaningless edges
arises. Nevertheless, the use of global thresholding methods may produce the
elimination of meaningful nodes, while the use of local thresholds may
introduce biases. For these aims, we introduce a novel technique, based on
spectral graph considerations and on a mixed global-local focus. The
effectiveness of our technique is demonstrated by using Markov clustering for
the extraction of biological modules. We applied clustering to simplified
networks, demonstrating considerable improvements with respect to the original
ones.
| [
{
"created": "Tue, 21 May 2013 15:30:07 GMT",
"version": "v1"
}
] | 2013-05-22 | [
[
"Guzzi",
"Pietro Hiram",
""
],
[
"Truglia",
"Simone",
""
],
[
"Veltri",
"Pierangelo",
""
],
[
"Cannataro",
"Mario",
""
]
] | Semantic similarity measures (SSMs) refer to a set of algorithms used to quantify the similarity of two or more terms belonging to the same ontology. Ontology terms may be associated with concepts; for instance, in computational biology genes and proteins are associated with terms of biological ontologies. Thus, SSMs may be used to quantify the similarity of genes and proteins starting from the comparison of the associated annotations. SSMs have been recently used to compare genes and proteins even on a system level scale. More recently some works have focused on the building and analysis of Semantic Similarity Networks (SSNs), i.e. weighted networks in which nodes represent genes or proteins while weighted edges represent the semantic similarity score among them. SSNs are quasi-complete networks, thus their analysis presents different challenges that should be addressed. For instance, the need for the introduction of reliable thresholds for the elimination of meaningless edges arises. Nevertheless, the use of global thresholding methods may produce the elimination of meaningful nodes, while the use of local thresholds may introduce biases. For these aims, we introduce a novel technique, based on spectral graph considerations and on a mixed global-local focus. The effectiveness of our technique is demonstrated by using Markov clustering for the extraction of biological modules. We applied clustering to simplified networks, demonstrating considerable improvements with respect to the original ones. |
1807.05097 | Matthias Hennig | Michael Deistler, Martino Sorbaro, Michael E. Rule and Matthias H.
Hennig | Local learning rules to attenuate forgetting in neural networks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hebbian synaptic plasticity inevitably leads to interference and forgetting
when different, overlapping memory patterns are sequentially stored in the same
network. Recent work on artificial neural networks shows that an
information-geometric approach can be used to protect important weights to slow
down forgetting. This strategy however is biologically implausible as it
requires knowledge of the history of previously learned patterns. In this work,
we show that a purely local weight consolidation mechanism, based on estimating
energy landscape curvatures from locally available statistics, prevents pattern
interference. Exploring a local calculation of energy curvature in the
sparse-coding limit, we demonstrate that curvature-aware learning rules reduce
forgetting in the Hopfield network. We further show that this method connects
information-geometric global learning rules based on the Fisher information to
local spike-dependent rules accessible to biological neural networks. We
conjecture that, if combined with other learning procedures, it could provide a
building-block for content-aware learning strategies that use only quantities
computable in biological neural networks to attenuate pattern interference and
catastrophic forgetting. Additionally, this work clarifies how global
information-geometric structure in a learning problem can be exposed in local
model statistics, building a deeper theoretical connection between the
statistics of single units in a network, and the global structure of the
collective learning space.
| [
{
"created": "Fri, 13 Jul 2018 14:12:49 GMT",
"version": "v1"
}
] | 2018-07-16 | [
[
"Deistler",
"Michael",
""
],
[
"Sorbaro",
"Martino",
""
],
[
"Rule",
"Michael E.",
""
],
[
"Hennig",
"Matthias H.",
""
]
] | Hebbian synaptic plasticity inevitably leads to interference and forgetting when different, overlapping memory patterns are sequentially stored in the same network. Recent work on artificial neural networks shows that an information-geometric approach can be used to protect important weights to slow down forgetting. This strategy however is biologically implausible as it requires knowledge of the history of previously learned patterns. In this work, we show that a purely local weight consolidation mechanism, based on estimating energy landscape curvatures from locally available statistics, prevents pattern interference. Exploring a local calculation of energy curvature in the sparse-coding limit, we demonstrate that curvature-aware learning rules reduce forgetting in the Hopfield network. We further show that this method connects information-geometric global learning rules based on the Fisher information to local spike-dependent rules accessible to biological neural networks. We conjecture that, if combined with other learning procedures, it could provide a building-block for content-aware learning strategies that use only quantities computable in biological neural networks to attenuate pattern interference and catastrophic forgetting. Additionally, this work clarifies how global information-geometric structure in a learning problem can be exposed in local model statistics, building a deeper theoretical connection between the statistics of single units in a network, and the global structure of the collective learning space. |
1801.04057 | Sitabhra Sinha | Tanmay Mitra, Shakti N. Menon and Sitabhra Sinha | Emergent memory in cell signaling: Persistent adaptive dynamics in
cascades can arise from the diversity of relaxation time-scales | 9 pages, 7 figures + 12 pages supplementary information | Sci. Rep. 8, 13230 (2018) | 10.1038/s41598-018-31626-9 | null | q-bio.SC nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mitogen-activated protein kinase (MAPK) signaling cascade, an
evolutionarily conserved motif present in all eukaryotic cells, is involved in
coordinating critical cell-fate decisions, regulating protein synthesis, and
mediating learning and memory. While the steady-state behavior of the pathway
stimulated by a time-invariant signal is relatively well-understood, we show
using a computational model that it exhibits a rich repertoire of transient
adaptive responses to changes in stimuli. When the signal is switched on, the
response is characterized by long-lived modulations in frequency as well as
amplitude. On withdrawing the stimulus, the activity decays over timescales
much longer than that of phosphorylation-dephosphorylation processes,
exhibiting reverberations characterized by repeated spiking in the activated
MAPK concentration. The long-term persistence of such post-stimulus activity
suggests that the cascade retains memory of the signal for a significant
duration following its removal, even in the absence of any explicit feedback or
cross-talk with other pathways. We find that the molecular mechanism underlying
this behavior is related to the existence of distinct relaxation rates for the
different cascade components. This results in the imbalance of fluxes between
different layers of the cascade, with the repeated reuse of activated kinases
as enzymes when they are released from sequestration in complexes leading to
one or more spike events following the removal of the stimulus. The persistent
adaptive response reported here, indicative of a cellular "short-term" memory,
suggests that this ubiquitous signaling pathway plays an even more central role
in information processing by eukaryotic cells.
| [
{
"created": "Fri, 12 Jan 2018 05:03:54 GMT",
"version": "v1"
}
] | 2024-06-11 | [
[
"Mitra",
"Tanmay",
""
],
[
"Menon",
"Shakti N.",
""
],
[
"Sinha",
"Sitabhra",
""
]
] | The mitogen-activated protein kinase (MAPK) signaling cascade, an evolutionarily conserved motif present in all eukaryotic cells, is involved in coordinating critical cell-fate decisions, regulating protein synthesis, and mediating learning and memory. While the steady-state behavior of the pathway stimulated by a time-invariant signal is relatively well-understood, we show using a computational model that it exhibits a rich repertoire of transient adaptive responses to changes in stimuli. When the signal is switched on, the response is characterized by long-lived modulations in frequency as well as amplitude. On withdrawing the stimulus, the activity decays over timescales much longer than that of phosphorylation-dephosphorylation processes, exhibiting reverberations characterized by repeated spiking in the activated MAPK concentration. The long-term persistence of such post-stimulus activity suggests that the cascade retains memory of the signal for a significant duration following its removal, even in the absence of any explicit feedback or cross-talk with other pathways. We find that the molecular mechanism underlying this behavior is related to the existence of distinct relaxation rates for the different cascade components. This results in the imbalance of fluxes between different layers of the cascade, with the repeated reuse of activated kinases as enzymes when they are released from sequestration in complexes leading to one or more spike events following the removal of the stimulus. The persistent adaptive response reported here, indicative of a cellular "short-term" memory, suggests that this ubiquitous signaling pathway plays an even more central role in information processing by eukaryotic cells. |
2308.15678 | Th\'eo Michelot | Th\'eo Michelot, Natasha J. Klappstein, Jonathan R. Potts, John
Fieberg | Understanding step selection analysis through numerical integration | null | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by/4.0/ | Step selection functions (SSFs) are flexible models to jointly describe
animals' movement and habitat preferences. Their popularity has grown rapidly
and extensions have been developed to increase their utility, including various
distributions to describe movement constraints, interactions to allow movements
to depend on local environmental features, and random effects and latent states
to account for within- and among-individual variability. Although the SSF is a
relatively simple statistical model, its presentation has not been consistent
in the literature, leading to confusion about model flexibility and
interpretation. We believe that part of the confusion has arisen from the
conflation of the SSF model with the methods used for parameter estimation.
Notably, conditional logistic regression can be used to fit SSFs in exponential
form, and this approach is often presented interchangeably with the actual
model (the SSF itself). However, reliance on conditional logistic regression
reduces model flexibility, and suggests a misleading interpretation of step
selection analysis as being equivalent to a case-control study. In this review,
we explicitly distinguish between model formulation and inference technique,
presenting a coherent framework to fit SSFs based on numerical integration and
maximum likelihood estimation. We provide an overview of common numerical
integration techniques, and explain how they relate to step selection analyses.
This framework unifies different model fitting techniques for SSFs, and opens
the way for improved inference. In particular, it makes it straightforward to
model movement with distributions outside the exponential family, and to apply
different SSF formulations to a data set and compare them with AIC. By
separating the model formulation from the inference technique, we hope to
clarify many important concepts in step selection analysis.
| [
{
"created": "Wed, 30 Aug 2023 00:26:54 GMT",
"version": "v1"
}
] | 2023-08-31 | [
[
"Michelot",
"Théo",
""
],
[
"Klappstein",
"Natasha J.",
""
],
[
"Potts",
"Jonathan R.",
""
],
[
"Fieberg",
"John",
""
]
] | Step selection functions (SSFs) are flexible models to jointly describe animals' movement and habitat preferences. Their popularity has grown rapidly and extensions have been developed to increase their utility, including various distributions to describe movement constraints, interactions to allow movements to depend on local environmental features, and random effects and latent states to account for within- and among-individual variability. Although the SSF is a relatively simple statistical model, its presentation has not been consistent in the literature, leading to confusion about model flexibility and interpretation. We believe that part of the confusion has arisen from the conflation of the SSF model with the methods used for parameter estimation. Notably, conditional logistic regression can be used to fit SSFs in exponential form, and this approach is often presented interchangeably with the actual model (the SSF itself). However, reliance on conditional logistic regression reduces model flexibility, and suggests a misleading interpretation of step selection analysis as being equivalent to a case-control study. In this review, we explicitly distinguish between model formulation and inference technique, presenting a coherent framework to fit SSFs based on numerical integration and maximum likelihood estimation. We provide an overview of common numerical integration techniques, and explain how they relate to step selection analyses. This framework unifies different model fitting techniques for SSFs, and opens the way for improved inference. In particular, it makes it straightforward to model movement with distributions outside the exponential family, and to apply different SSF formulations to a data set and compare them with AIC. By separating the model formulation from the inference technique, we hope to clarify many important concepts in step selection analysis. |
2107.00835 | Monika Heiner | Shannon Connolly, David Gilbert, Monika Heiner | From Epidemic to Pandemic Modelling | 79 pages (with Appendix), 23 figures, 7 tables | null | null | null | q-bio.PE cs.CE | http://creativecommons.org/licenses/by/4.0/ | We present a methodology for systematically extending epidemic models to
multilevel and multiscale spatio-temporal pandemic ones. Our approach builds on
the use of coloured stochastic and continuous Petri nets facilitating the sound
component-based extension of basic SIR models to include population
stratification and also spatio-geographic information and travel connections,
represented as graphs, resulting in robust stratified pandemic metapopulation
models. This method is inherently easy to use, producing scalable and reusable
models with a high degree of clarity and accessibility which can be read either
in a deterministic or stochastic paradigm. Our method is supported by a
publicly available platform PetriNuts; it enables the visual construction and
editing of models; deterministic, stochastic and hybrid simulation as well as
structural and behavioural analysis. All the models are available as
supplementary material, ensuring reproducibility.
| [
{
"created": "Thu, 1 Jul 2021 13:43:52 GMT",
"version": "v1"
}
] | 2021-07-05 | [
[
"Connolly",
"Shannon",
""
],
[
"Gilbert",
"David",
""
],
[
"Heiner",
"Monika",
""
]
] | We present a methodology for systematically extending epidemic models to multilevel and multiscale spatio-temporal pandemic ones. Our approach builds on the use of coloured stochastic and continuous Petri nets facilitating the sound component-based extension of basic SIR models to include population stratification and also spatio-geographic information and travel connections, represented as graphs, resulting in robust stratified pandemic metapopulation models. This method is inherently easy to use, producing scalable and reusable models with a high degree of clarity and accessibility which can be read either in a deterministic or stochastic paradigm. Our method is supported by a publicly available platform PetriNuts; it enables the visual construction and editing of models; deterministic, stochastic and hybrid simulation as well as structural and behavioural analysis. All the models are available as supplementary material, ensuring reproducibility. |
1507.00189 | Anna Cattani | Anna Cattani, Sergio Solinas, Claudio Canuto | A hybrid model for large neural network description | 18 pages, 6 figures | null | null | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of the present paper is to efficiently describe the membrane
potential dynamics of neural populations formed by species having a high
density difference in specific brain areas. We propose a hybrid model whose
main ingredients are a conductance-based model (ODE system) and its continuous
counterpart (PDE system) obtained through a limit process in which the number
of neurons confined in a bounded region of the brain is sent to infinity.
Specifically, in the discrete model each cell of the low-density populations is
individually described by a set of time-dependent variables, whereas in the
continuum model the high-density populations are described as a whole by a
small set of continuous variables depending on space and time. Communications
among populations, which translate into interactions among the discrete and the
continuous models, are the essence of the hybrid model we present here. Such an
approach has been validated reconstructing the ensemble activity of the
granular layer network of the Cerebellum, leading to a computational cost
reduction. The hybrid model reproduced interesting dynamics such as local
microcircuit synchronization, travelling waves, center-surround and
time-windowing.
| [
{
"created": "Wed, 1 Jul 2015 11:14:00 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Oct 2015 10:08:26 GMT",
"version": "v2"
}
] | 2015-10-06 | [
[
"Cattani",
"Anna",
""
],
[
"Solinas",
"Sergio",
""
],
[
"Canuto",
"Claudio",
""
]
] | The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain is sent to infinity. Specifically, in the discrete model each cell of the low-density populations is individually described by a set of time-dependent variables, whereas in the continuum model the high-density populations are described as a whole by a small set of continuous variables depending on space and time. Communications among populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. Such an approach has been validated reconstructing the ensemble activity of the granular layer network of the Cerebellum, leading to a computational cost reduction. The hybrid model reproduced interesting dynamics such as local microcircuit synchronization, travelling waves, center-surround and time-windowing. |
2005.04106 | Ido Kanter | Shira Sardi, Roni Vardi, Yuval Meir, Yael Tugendhaft, Shiri Hodassman,
Amir Goldental and Ido Kanter | Brain experiments imply adaptation mechanisms which outperform common AI
learning algorithms | 30 pages, 7 figures | Scientific Reports 10, Article number: 6923 (2020)
https://www.nature.com/articles/s41598-020-63755-5 | 10.1038/s41598-020-63755-5 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attempting to imitate the brain functionalities, researchers have bridged
between neuroscience and artificial intelligence for decades; however,
experimental neuroscience has not directly advanced the field of machine
learning. Here, using neuronal cultures, we demonstrate that increased training
frequency accelerates the neuronal adaptation processes. This mechanism was
implemented on artificial neural networks, where a local learning step-size
increases for coherent consecutive learning steps and tested on a simple
dataset of handwritten digits, MNIST. Based on our online learning results with
a few handwriting examples, success rates for brain-inspired algorithms
substantially outperform the commonly used machine learning algorithms. We
speculate this emerging bridge from slow brain function to machine learning
will promote ultrafast decision making under limited examples, which is the
reality in many aspects of human activity, robotic control, and network
optimization.
| [
{
"created": "Thu, 23 Apr 2020 14:02:53 GMT",
"version": "v1"
}
] | 2020-05-11 | [
[
"Sardi",
"Shira",
""
],
[
"Vardi",
"Roni",
""
],
[
"Meir",
"Yuval",
""
],
[
"Tugendhaft",
"Yael",
""
],
[
"Hodassman",
"Shiri",
""
],
[
"Goldental",
"Amir",
""
],
[
"Kanter",
"Ido",
""
]
] | Attempting to imitate the brain functionalities, researchers have bridged between neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning. Here, using neuronal cultures, we demonstrate that increased training frequency accelerates the neuronal adaptation processes. This mechanism was implemented on artificial neural networks, where a local learning step-size increases for coherent consecutive learning steps and tested on a simple dataset of handwritten digits, MNIST. Based on our online learning results with a few handwriting examples, success rates for brain-inspired algorithms substantially outperform the commonly used machine learning algorithms. We speculate this emerging bridge from slow brain function to machine learning will promote ultrafast decision making under limited examples, which is the reality in many aspects of human activity, robotic control, and network optimization. |
1711.10545 | Hector Banos | Hector Ba\~nos | Identifying species network features from gene tree quartets under the
coalescent model | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that many topological features of level-1 species networks are
identifiable from the distribution of the gene tree quartets under the network
multi-species coalescent model. In particular, every cycle of size at least 4
and every hybrid node in a cycle of size at least 5 is identifiable. This is a
step toward justifying the inference of such networks which was recently
implemented by Sol\'is-Lemus and An\'e. We show additionally how to compute
quartet concordance factors for a network in terms of simpler networks, and
explore some circumstances in which cycles of size 3 and hybrid nodes in
4-cycles can be detected.
| [
{
"created": "Tue, 28 Nov 2017 20:39:22 GMT",
"version": "v1"
}
] | 2017-11-30 | [
[
"Baños",
"Hector",
""
]
] | We show that many topological features of level-1 species networks are identifiable from the distribution of the gene tree quartets under the network multi-species coalescent model. In particular, every cycle of size at least 4 and every hybrid node in a cycle of size at least 5 is identifiable. This is a step toward justifying the inference of such networks which was recently implemented by Sol\'is-Lemus and An\'e. We show additionally how to compute quartet concordance factors for a network in terms of simpler networks, and explore some circumstances in which cycles of size 3 and hybrid nodes in 4-cycles can be detected. |
1507.01390 | Rajani Raman | Rajani Raman, Sandip Sarkar | Predictive coding: A Possible Explanation of Filling-in at the blind
spot | 23 pages, 9 figures | null | 10.1371/journal.pone.0151194 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Filling-in at the blind-spot is a perceptual phenomenon in which the visual
system fills the informational void, which arises due to the absence of retinal
input corresponding to the optic disc, with surrounding visual attributes.
Though there is enough evidence to conclude that some kind of neural
computation is involved in filling-in at the blind spot especially in the early
visual cortex, the knowledge of the actual computational mechanism is far from
complete. We have investigated the bar experiments and the associated
filling-in phenomenon in the light of the hierarchical predictive coding
framework, where the blind-spot was represented by the absence of early
feed-forward connection. We recorded the responses of predictive estimator
neurons at the blind-spot region in the V1 area of our three level (LGN-V1-V2)
model network. These responses are in agreement with the results of earlier
physiological studies and using the generative model we also showed that these
response profiles indeed represent the filling-in completion. These demonstrate
that the predictive coding framework could account for the filling-in phenomena
observed in several psychophysical and physiological experiments involving bar
stimuli. These results suggest that the filling-in could naturally arise from
the computational principle of hierarchical predictive coding (HPC) of natural
images.
| [
{
"created": "Mon, 6 Jul 2015 11:20:31 GMT",
"version": "v1"
}
] | 2016-07-12 | [
[
"Raman",
"Rajani",
""
],
[
"Sarkar",
"Sandip",
""
]
] | Filling-in at the blind-spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. Though there is enough evidence to conclude that some kind of neural computation is involved in filling-in at the blind spot especially in the early visual cortex, the knowledge of the actual computational mechanism is far from complete. We have investigated the bar experiments and the associated filling-in phenomenon in the light of the hierarchical predictive coding framework, where the blind-spot was represented by the absence of early feed-forward connection. We recorded the responses of predictive estimator neurons at the blind-spot region in the V1 area of our three level (LGN-V1-V2) model network. These responses are in agreement with the results of earlier physiological studies and using the generative model we also showed that these response profiles indeed represent the filling-in completion. These demonstrate that the predictive coding framework could account for the filling-in phenomena observed in several psychophysical and physiological experiments involving bar stimuli. These results suggest that the filling-in could naturally arise from the computational principle of hierarchical predictive coding (HPC) of natural images. |
2107.02429 | Hong-Gyu Yoon | Hong-Gyu Yoon and Pilwon Kim | STDP-based Associative Memory Formation and Retrieval | 7 pages of main article, 12 pages of appendices | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spike-timing-dependent plasticity (STDP) is a biological process in which the
precise order and timing of neuronal spikes affect the degree of synaptic
modification. While there have been numerous studies focusing on the role of
STDP in neural coding, the functional implications of STDP at the macroscopic
level in the brain have not been fully explored yet. In this work, we propose a
neurodynamical model based on STDP that renders storage and retrieval of a
group of associative memories. We showed that the function of STDP at the
macroscopic level is to form a "memory plane" in the neural state space which
dynamically encodes high dimensional data. We derived the analytic relation
between the input, the memory plane, and the induced macroscopic neural
oscillations around the memory plane. Such plane produces a limit cycle in
reaction to a similar memory cue, which can be used for retrieval of the
original input.
| [
{
"created": "Tue, 6 Jul 2021 07:08:21 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jul 2021 04:18:20 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Aug 2021 01:47:54 GMT",
"version": "v3"
}
] | 2021-08-10 | [
[
"Yoon",
"Hong-Gyu",
""
],
[
"Kim",
"Pilwon",
""
]
] | Spike-timing-dependent plasticity (STDP) is a biological process in which the precise order and timing of neuronal spikes affect the degree of synaptic modification. While there have been numerous studies focusing on the role of STDP in neural coding, the functional implications of STDP at the macroscopic level in the brain have not been fully explored yet. In this work, we propose a neurodynamical model based on STDP that renders storage and retrieval of a group of associative memories. We showed that the function of STDP at the macroscopic level is to form a "memory plane" in the neural state space which dynamically encodes high dimensional data. We derived the analytic relation between the input, the memory plane, and the induced macroscopic neural oscillations around the memory plane. Such plane produces a limit cycle in reaction to a similar memory cue, which can be used for retrieval of the original input. |
1406.1446 | Sebastien Benzekry | S\'ebastien Benzekry, Clare Lamont, Afshin Beheshti, Amanda Tracz,
John M.L. Ebos, Lynn Hlatky, Philip Hahnfeldt | Classical Mathematical Models for Description and Prediction of
Experimental Tumor Growth | 5 figures, 6 tables, 5 supplementary figures and 3 supplementary
tables | PLoS Comput Biol 10(8): e1003800 (2014) | 10.1371/journal.pcbi.1003800 | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite internal complexity, tumor growth kinetics follow relatively simple
macroscopic laws that have been quantified by mathematical models. To resolve
this further, quantitative and discriminant analyses were performed for the
purpose of comparing alternative models for their abilities to describe and
predict tumor growth. For this we used two in vivo experimental systems, an
ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically
xenografted human breast carcinoma. The goals were threefold: to 1) determine a
statistical model for description of the volume measurement error, 2) establish
the descriptive power of each model, using several goodness-of-fit metrics and
a study of parametric identifiability, and 3) assess the models' ability to
forecast future tumor growth.
Nine models were compared that included the exponential, power law, Gompertz
and (generalized) logistic formalisms. The Gompertz and power law provided the
most parsimonious and parametrically identifiable description of the lung data,
whereas the breast data were best captured by the Gompertz and
exponential-linear models. The latter also exhibited the highest predictive
power for the breast tumor growth curves, with excellent prediction scores
(greater than 80$\%$) extending out as far as 12 days. In contrast, for the
lung data, none of the models were able to achieve substantial prediction rates
(greater than 70$\%$) further than the next day data point. In this context,
adjunction of a priori information on the parameter distribution led to
considerable improvement of predictions.
These results not only have important implications for biological theories of
tumor growth and the use of mathematical modeling in preclinical anti-cancer
drug investigations, but also may assist in defining how mathematical models
could serve as potential prognostic tools in the clinical setting.
| [
{
"created": "Thu, 5 Jun 2014 17:13:07 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jul 2014 09:21:51 GMT",
"version": "v2"
}
] | 2014-09-02 | [
[
"Benzekry",
"Sébastien",
""
],
[
"Lamont",
"Clare",
""
],
[
"Beheshti",
"Afshin",
""
],
[
"Tracz",
"Amanda",
""
],
[
"Ebos",
"John M. L.",
""
],
[
"Hlatky",
"Lynn",
""
],
[
"Hahnfeldt",
"Philip",
""
]
] | Despite internal complexity, tumor growth kinetics follow relatively simple macroscopic laws that have been quantified by mathematical models. To resolve this further, quantitative and discriminant analyses were performed for the purpose of comparing alternative models for their abilities to describe and predict tumor growth. For this we used two in vivo experimental systems, an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: to 1) determine a statistical model for description of the volume measurement error, 2) establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) assess the models' ability to forecast future tumor growth. Nine models were compared that included the exponential, power law, Gompertz and (generalized) logistic formalisms. The Gompertz and power law provided the most parsimonious and parametrically identifiable description of the lung data, whereas the breast data were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power for the breast tumor growth curves, with excellent prediction scores (greater than 80$\%$) extending out as far as 12 days. In contrast, for the lung data, none of the models were able to achieve substantial prediction rates (greater than 70$\%$) further than the next day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement of predictions. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinical setting. |
2108.02249 | Joshua Kynaston | Joshua C. Kynaston, Chris Guiver, Christian A. Yates | An equivalence framework for an age-structured multi-stage
representation of the cell cycle | Author accepted version | Phys. Rev. E. 105, 064411 (2022) | 10.1103/PhysRevE.105.064411 | null | q-bio.PE cond-mat.stat-mech q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | We develop theoretical equivalences between stochastic and deterministic
models for populations of individual cells stratified by age. Specifically, we
develop a hierarchical system of equations describing the full dynamics of an
age-structured multi-stage Markov process for approximating cell cycle time
distributions. We further demonstrate that the resulting mean behaviour is
equivalent, over large timescales, to the classical McKendrick-von Foerster
integro-partial differential equation. We conclude by extending this framework
to a spatial context, facilitating the modelling of travelling wave phenomena
and cell-mediated pattern formation. More generally, this methodology may be
extended to myriad reaction-diffusion processes for which the age of
individuals is relevant to the dynamics.
| [
{
"created": "Wed, 4 Aug 2021 19:02:15 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Oct 2021 15:43:43 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Feb 2022 11:57:44 GMT",
"version": "v3"
},
{
"created": "Tue, 28 Jun 2022 11:38:30 GMT",
"version": "v4"
}
] | 2022-06-29 | [
[
"Kynaston",
"Joshua C.",
""
],
[
"Guiver",
"Chris",
""
],
[
"Yates",
"Christian A.",
""
]
] | We develop theoretical equivalences between stochastic and deterministic models for populations of individual cells stratified by age. Specifically, we develop a hierarchical system of equations describing the full dynamics of an age-structured multi-stage Markov process for approximating cell cycle time distributions. We further demonstrate that the resulting mean behaviour is equivalent, over large timescales, to the classical McKendrick-von Foerster integro-partial differential equation. We conclude by extending this framework to a spatial context, facilitating the modelling of travelling wave phenomena and cell-mediated pattern formation. More generally, this methodology may be extended to myriad reaction-diffusion processes for which the age of individuals is relevant to the dynamics. |
0809.4291 | Kilian Koepsell | Charles F. Cadieu and Kilian Koepsell | A multivariate phase distribution and its estimation | 9 pages, 5 figures, minor change in conventions and minor errors
corrected | null | null | null | q-bio.NC nlin.AO nlin.CD q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Circular variables such as phase or orientation have received considerable
attention throughout the scientific and engineering communities and have
recently been quite prominent in the field of neuroscience. While many analytic
techniques have used phase as an effective representation, there has been
little work on techniques that capture the joint statistics of multiple phase
variables. In this paper we introduce a distribution that captures empirically
observed pair-wise phase relationships. Importantly, we have developed a
computationally efficient and accurate technique for estimating the parameters
of this distribution from data. We show that the algorithm performs well in
high-dimensions (d=100), and in cases with limited data (as few as 100 samples
per dimension). We also demonstrate how this technique can be applied to
electrocorticography (ECoG) recordings to investigate the coupling of brain
areas during different behavioral states. This distribution and estimation
technique can be broadly applied to any setting that produces multiple circular
variables.
| [
{
"created": "Thu, 25 Sep 2008 03:00:05 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Jun 2009 04:47:46 GMT",
"version": "v2"
}
] | 2009-06-21 | [
[
"Cadieu",
"Charles F.",
""
],
[
"Koepsell",
"Kilian",
""
]
] | Circular variables such as phase or orientation have received considerable attention throughout the scientific and engineering communities and have recently been quite prominent in the field of neuroscience. While many analytic techniques have used phase as an effective representation, there has been little work on techniques that capture the joint statistics of multiple phase variables. In this paper we introduce a distribution that captures empirically observed pair-wise phase relationships. Importantly, we have developed a computationally efficient and accurate technique for estimating the parameters of this distribution from data. We show that the algorithm performs well in high-dimensions (d=100), and in cases with limited data (as few as 100 samples per dimension). We also demonstrate how this technique can be applied to electrocorticography (ECoG) recordings to investigate the coupling of brain areas during different behavioral states. This distribution and estimation technique can be broadly applied to any setting that produces multiple circular variables. |
1412.7975 | Antti Niemi | Xubiao Peng, Alireza Chenani, Shuangwei Hu, Yifan Zhou, Antti J. Niemi | Virtual reality based approach to protein heavy-atom structure
reconstruction | null | null | null | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A commonly recurring problem in structural protein studies is the
determination of all heavy atom positions from the knowledge of the central
alpha-carbon coordinates. We employ advances in virtual reality to address the
problem. The outcome is a 3D visualisation based technique where all the heavy
backbone and side chain atoms are treated on equal footing, in terms of the
C-alpha coordinates. Each heavy atom can be visualised on the surfaces of the
different two-spheres, that are centered at the other heavy backbone and side
chain atoms. In particular, the rotamers are visible as clusters which display
strong dependence on the underlying backbone secondary structure. Our method
easily detects those atoms in a crystallographic protein structure which have
been likely misplaced. Our approach forms a basis for the development of a
new generation, visualisation based side chain construction, validation and
refinement tools. The heavy atom positions are identified in a manner which
accounts for the secondary structure environment, leading to improved accuracy
over existing methods.
| [
{
"created": "Fri, 26 Dec 2014 19:31:52 GMT",
"version": "v1"
}
] | 2014-12-30 | [
[
"Peng",
"Xubiao",
""
],
[
"Chenani",
"Alireza",
""
],
[
"Hu",
"Shuangwei",
""
],
[
"Zhou",
"Yifan",
""
],
[
"Niemi",
"Antti J.",
""
]
] | A commonly recurring problem in structural protein studies is the determination of all heavy atom positions from the knowledge of the central alpha-carbon coordinates. We employ advances in virtual reality to address the problem. The outcome is a 3D visualisation based technique where all the heavy backbone and side chain atoms are treated on equal footing, in terms of the C-alpha coordinates. Each heavy atom can be visualised on the surfaces of the different two-spheres, that are centered at the other heavy backbone and side chain atoms. In particular, the rotamers are visible as clusters which display strong dependence on the underlying backbone secondary structure. Our method easily detects those atoms in a crystallographic protein structure which have been likely misplaced. Our approach forms a basis for the development of a new generation, visualisation based side chain construction, validation and refinement tools. The heavy atom positions are identified in a manner which accounts for the secondary structure environment, leading to improved accuracy over existing methods. |
2106.01756 | Chiara Viglione | Francesco Cerritelli, Martin G. Frasch, Marta C. Antonelli, Chiara
Viglione, Stefano Vecchi, Marco Chiera and Andrea Manzotti | The role of the vagus nerve during fetal development and its
relationship with the environment | Word count: 16,009 Tables: 1 Figures: 0 | null | 10.3389/fnins.2021.721605 | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | The autonomic nervous system (ANS) regulatory capacity begins before birth as
the sympathetic and parasympathetic activity contributes significantly to the
fetus' development. Several studies have shown how vagus nerve is involved in
many vital processes during fetal, perinatal and postnatal life: from the
regulation of inflammation through the anti-inflammatory cholinergic pathway,
which may affect the functioning of each organ, to the production of hormones
involved in bioenergetic metabolism. In addition, the vagus nerve has been
recognized as the primary afferent pathway capable of transmitting information
to the brain from every organ of the body. Therefore, this hypothesis paper
aims to review the development of ANS during fetal and perinatal life, focusing
particularly on the vagus nerve, to identify possible "critical windows" that
could impact its maturation. These "critical windows" could help clinicians
know when to monitor fetuses to effectively assess the developmental status of
both ANS and specifically the vagus nerve. In addition, this paper will focus
on which factors (i.e. fetal characteristics and behaviors, maternal lifestyle
and pathologies, placental health and dysfunction, labor, incubator conditions,
and drug exposure) may have an impact on the development of the vagus during
the above-mentioned "critical window" and how. This analysis could help
clinicians and stakeholders define precise guidelines for improving the
management of fetuses and newborns, particularly to reduce the potential
adverse environmental impacts on ANS development that may lead to persistent
long-term consequences. Since the development of ANS and the vagus influence
have been shown to be reflected in cardiac variability, this paper will rely in
particular on studies using fetal heart rate variability (fHRV) to monitor the
continued growth and health of both animal and human fetuses.
| [
{
"created": "Thu, 3 Jun 2021 11:19:01 GMT",
"version": "v1"
}
] | 2021-12-06 | [
[
"Cerritelli",
"Francesco",
""
],
[
"Frasch",
"Martin G.",
""
],
[
"Antonelli",
"Marta C.",
""
],
[
"Viglione",
"Chiara",
""
],
[
"Vecchi",
"Stefano",
""
],
[
"Chiera",
"Marco",
""
],
[
"Manzotti",
"Andrea",
""
]
] | The autonomic nervous system (ANS) regulatory capacity begins before birth as the sympathetic and parasympathetic activity contributes significantly to the fetus' development. Several studies have shown how vagus nerve is involved in many vital processes during fetal, perinatal and postnatal life: from the regulation of inflammation through the anti-inflammatory cholinergic pathway, which may affect the functioning of each organ, to the production of hormones involved in bioenergetic metabolism. In addition, the vagus nerve has been recognized as the primary afferent pathway capable of transmitting information to the brain from every organ of the body. Therefore, this hypothesis paper aims to review the development of ANS during fetal and perinatal life, focusing particularly on the vagus nerve, to identify possible "critical windows" that could impact its maturation. These "critical windows" could help clinicians know when to monitor fetuses to effectively assess the developmental status of both ANS and specifically the vagus nerve. In addition, this paper will focus on which factors (i.e. fetal characteristics and behaviors, maternal lifestyle and pathologies, placental health and dysfunction, labor, incubator conditions, and drug exposure) may have an impact on the development of the vagus during the above-mentioned "critical window" and how. This analysis could help clinicians and stakeholders define precise guidelines for improving the management of fetuses and newborns, particularly to reduce the potential adverse environmental impacts on ANS development that may lead to persistent long-term consequences. Since the development of ANS and the vagus influence have been shown to be reflected in cardiac variability, this paper will rely in particular on studies using fetal heart rate variability (fHRV) to monitor the continued growth and health of both animal and human fetuses. |
1407.0868 | B\'oris Marin | B\'oris Marin, Reynaldo Daniel Pinto, Robert C Elson, Eduardo Colli | Noise, transient dynamics, and the generation of realistic interspike
interval variation in square-wave burster neurons | REVTeX4-1, 18 pages, 9 figures | null | 10.1103/PhysRevE.90.042718 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | First return maps of interspike intervals for biological neurons that
generate repetitive bursts of impulses can display stereotyped structures
(neuronal signatures). Such structures have been linked to the possibility of
multicoding and multifunctionality in neural networks that produce and control
rhythmical motor patterns. In some cases, isolating the neurons from their
synaptic network reveals irregular, complex signatures that have been regarded
as evidence of intrinsic, chaotic behavior.
We show that incorporation of dynamical noise into minimal neuron models of
square-wave bursting (either conductance-based or abstract) produces signatures
akin to those observed in biological examples, without the need for fine-tuning
of parameters or ad hoc constructions for inducing chaotic activity. The form
of the stochastic term is not strongly constrained, and can approximate several
possible sources of noise, e.g. random channel gating or synaptic bombardment.
The cornerstone of this signature generation mechanism is the rich,
transient, but deterministic dynamics inherent in the square-wave
(saddle-node/homoclinic) mode of neuronal bursting. We show that noise causes
the dynamics to populate a complex transient scaffolding or skeleton in state
space, even for models that (without added noise) generate only periodic
activity (whether in bursting or tonic spiking mode).
| [
{
"created": "Thu, 3 Jul 2014 11:26:11 GMT",
"version": "v1"
}
] | 2015-06-22 | [
[
"Marin",
"Bóris",
""
],
[
"Pinto",
"Reynaldo Daniel",
""
],
[
"Elson",
"Robert C",
""
],
[
"Colli",
"Eduardo",
""
]
] | First return maps of interspike intervals for biological neurons that generate repetitive bursts of impulses can display stereotyped structures (neuronal signatures). Such structures have been linked to the possibility of multicoding and multifunctionality in neural networks that produce and control rhythmical motor patterns. In some cases, isolating the neurons from their synaptic network reveals irregular, complex signatures that have been regarded as evidence of intrinsic, chaotic behavior. We show that incorporation of dynamical noise into minimal neuron models of square-wave bursting (either conductance-based or abstract) produces signatures akin to those observed in biological examples, without the need for fine-tuning of parameters or ad hoc constructions for inducing chaotic activity. The form of the stochastic term is not strongly constrained, and can approximate several possible sources of noise, e.g. random channel gating or synaptic bombardment. The cornerstone of this signature generation mechanism is the rich, transient, but deterministic dynamics inherent in the square-wave (saddle-node/homoclinic) mode of neuronal bursting. We show that noise causes the dynamics to populate a complex transient scaffolding or skeleton in state space, even for models that (without added noise) generate only periodic activity (whether in bursting or tonic spiking mode).
1602.09044 | Patrick Hillenbrand | Patrick Hillenbrand, Ulrich Gerland, Gasper Tkacik | Beyond the French Flag Model: Exploiting Spatial and Gene Regulatory
Interactions for Positional Information | null | null | 10.1371/journal.pone.0163628 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A crucial step in the early development of multicellular organisms involves
the establishment of spatial patterns of gene expression which later direct
proliferating cells to take on different cell fates. These patterns enable the
cells to infer their global position within a tissue or an organism by reading
out local gene expression levels. The patterning system is thus said to encode
positional information, a concept that was formalized recently in the framework
of information theory. Here we introduce a toy model of patterning in one
spatial dimension, which can be seen as an extension of Wolpert's paradigmatic
"French Flag" model, to patterning by several interacting, spatially coupled
genes subject to intrinsic and extrinsic noise. Our model, a variant of an
Ising spin system, allows us to systematically explore expression patterns that
optimally encode positional information. We find that optimal patterning
systems use positional cues, as in the French Flag model, together with
gene-gene interactions to generate combinatorial codes for position which we
call "Counter" patterns. Counter patterns can also be stabilized against noise
and variations in system size or morphogen dosage by longer-range spatial
interactions of the type invoked in the Turing model. The simple setup proposed
here qualitatively captures many of the experimentally observed properties of
biological patterning systems and allows them to be studied in a single,
theoretically consistent framework.
| [
{
"created": "Mon, 29 Feb 2016 17:10:39 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Mar 2016 10:15:17 GMT",
"version": "v2"
}
] | 2017-02-08 | [
[
"Hillenbrand",
"Patrick",
""
],
[
"Gerland",
"Ulrich",
""
],
[
"Tkacik",
"Gasper",
""
]
] | A crucial step in the early development of multicellular organisms involves the establishment of spatial patterns of gene expression which later direct proliferating cells to take on different cell fates. These patterns enable the cells to infer their global position within a tissue or an organism by reading out local gene expression levels. The patterning system is thus said to encode positional information, a concept that was formalized recently in the framework of information theory. Here we introduce a toy model of patterning in one spatial dimension, which can be seen as an extension of Wolpert's paradigmatic "French Flag" model, to patterning by several interacting, spatially coupled genes subject to intrinsic and extrinsic noise. Our model, a variant of an Ising spin system, allows us to systematically explore expression patterns that optimally encode positional information. We find that optimal patterning systems use positional cues, as in the French Flag model, together with gene-gene interactions to generate combinatorial codes for position which we call "Counter" patterns. Counter patterns can also be stabilized against noise and variations in system size or morphogen dosage by longer-range spatial interactions of the type invoked in the Turing model. The simple setup proposed here qualitatively captures many of the experimentally observed properties of biological patterning systems and allows them to be studied in a single, theoretically consistent framework. |
1811.04512 | Liane Gabora | Liane Gabora | Reframing Convergent and Divergent Thought for the 21st Century | 7 pages; 2 figures; | Published in 2019 in A. Goel, C. Seifert, & C. Freska (Eds.),
Proceedings of 41st Annual Meeting of the Cognitive Science Society. Austin
TX: Cognitive Science Society | null | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convergent thought is defined and measured in terms of the ability to perform
on tasks where there is a single correct solution, and divergent thought is
defined and measured in terms of the ability to generate multiple different
solutions. However, this characterization of them presents inconsistencies, and
although they are promoted as key constructs of creativity, they do not
capture the capacity to reiteratively modify an idea in light of new
perspectives arising out of an overarching conceptual framework. Research on
formal models of concepts and their interactions suggests that different
creative outputs may be projections of the same underlying idea at different
phases of this kind of 'honing' process. This leads us to redefine convergent
thought as thought in which the relevant concepts are considered from
conventional contexts, and divergent thought as thought in which they are
considered from unconventional contexts. Implications for the assessment of
creativity are discussed.
| [
{
"created": "Sun, 11 Nov 2018 23:59:33 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2019 13:48:56 GMT",
"version": "v2"
},
{
"created": "Fri, 5 Jul 2019 22:26:57 GMT",
"version": "v3"
}
] | 2019-07-09 | [
[
"Gabora",
"Liane",
""
]
] | Convergent thought is defined and measured in terms of the ability to perform on tasks where there is a single correct solution, and divergent thought is defined and measured in terms of the ability to generate multiple different solutions. However, this characterization of them presents inconsistencies, and although they are promoted as key constructs of creativity, they do not capture the capacity to reiteratively modify an idea in light of new perspectives arising out of an overarching conceptual framework. Research on formal models of concepts and their interactions suggests that different creative outputs may be projections of the same underlying idea at different phases of this kind of 'honing' process. This leads us to redefine convergent thought as thought in which the relevant concepts are considered from conventional contexts, and divergent thought as thought in which they are considered from unconventional contexts. Implications for the assessment of creativity are discussed.
2006.00289 | Gustavo Libotte | Gustavo Barbosa Libotte and Fran S\'ergio Lobato and Gustavo Mendes
Platt | Identification of an Epidemiological Model to Simulate the COVID-19
Epidemic using Robust Multi-objective Optimization and Stochastic Fractal
Search | null | null | null | null | q-bio.PE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditionally, the identification of parameters in the formulation and
solution of inverse problems considers that models, variables and mathematical
parameters are free of uncertainties. This aspect simplifies the estimation
process, but does not consider the influence of relatively small changes in the
design variables in terms of the objective function. In this work, the SIDR
(Susceptible, Infected, Dead and Recovered) model is used to simulate the
dynamic behavior of the novel coronavirus disease (COVID-19), and its
parameters are estimated by formulating a robust inverse problem, that is,
considering the sensitivity of design variables. For this purpose, a robust
multi-objective optimization problem is formulated, considering the
minimization of uncertainties associated to the estimation process and the
maximization of the robustness parameter. To solve this problem, the
Multi-objective Stochastic Fractal Search algorithm is associated with the
Effective Mean concept for the evaluation of robustness. The results obtained
considering real data of the epidemic in China demonstrate that the evaluation
of the sensitivity of the design variables can provide more reliable results.
| [
{
"created": "Sat, 30 May 2020 14:56:59 GMT",
"version": "v1"
}
] | 2020-06-02 | [
[
"Libotte",
"Gustavo Barbosa",
""
],
[
"Lobato",
"Fran Sérgio",
""
],
[
"Platt",
"Gustavo Mendes",
""
]
] | Traditionally, the identification of parameters in the formulation and solution of inverse problems considers that models, variables and mathematical parameters are free of uncertainties. This aspect simplifies the estimation process, but does not consider the influence of relatively small changes in the design variables in terms of the objective function. In this work, the SIDR (Susceptible, Infected, Dead and Recovered) model is used to simulate the dynamic behavior of the novel coronavirus disease (COVID-19), and its parameters are estimated by formulating a robust inverse problem, that is, considering the sensitivity of design variables. For this purpose, a robust multi-objective optimization problem is formulated, considering the minimization of uncertainties associated to the estimation process and the maximization of the robustness parameter. To solve this problem, the Multi-objective Stochastic Fractal Search algorithm is associated with the Effective Mean concept for the evaluation of robustness. The results obtained considering real data of the epidemic in China demonstrate that the evaluation of the sensitivity of the design variables can provide more reliable results. |
1601.07191 | Andrea De Martino | Araks Martirosyan, Matteo Figliuzzi, Enzo Marinari, Andrea De Martino | Probing the limits to microRNA-mediated control of gene expression | 16 pages | PLoS Comput Biol 12: e1004715 (2016) | 10.1371/journal.pcbi.1004715 | null | q-bio.MN cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to the `ceRNA hypothesis', microRNAs (miRNAs) may act as mediators
of an effective positive interaction between long coding or non-coding RNA
molecules, carrying significant potential implications for a variety of
biological processes. Here, inspired by recent work providing a quantitative
description of small regulatory elements as information-conveying channels, we
characterize the effectiveness of miRNA-mediated regulation in terms of the
optimal information flow achievable between modulator (transcription factors)
and target nodes (long RNAs). Our findings show that, while a sufficiently
large degree of target derepression is needed to activate miRNA-mediated
transmission, (a) in case of differential mechanisms of complex processing
and/or transcriptional capabilities, regulation by a post-transcriptional
miRNA-channel can outperform that achieved through direct transcriptional
control; moreover, (b) in the presence of large populations of weakly
interacting miRNA molecules the extra noise coming from titration disappears,
allowing the miRNA-channel to process information as effectively as the direct
channel. These observations establish the limits of miRNA-mediated
post-transcriptional cross-talk and suggest that, besides providing a degree of
noise buffering, this type of control may be effectively employed in cells both
as a failsafe mechanism and as a preferential fine tuner of gene expression,
pointing to the specific situations in which each of these functionalities is
maximized.
| [
{
"created": "Tue, 26 Jan 2016 21:17:52 GMT",
"version": "v1"
}
] | 2016-01-28 | [
[
"Martirosyan",
"Araks",
""
],
[
"Figliuzzi",
"Matteo",
""
],
[
"Marinari",
"Enzo",
""
],
[
"De Martino",
"Andrea",
""
]
] | According to the `ceRNA hypothesis', microRNAs (miRNAs) may act as mediators of an effective positive interaction between long coding or non-coding RNA molecules, carrying significant potential implications for a variety of biological processes. Here, inspired by recent work providing a quantitative description of small regulatory elements as information-conveying channels, we characterize the effectiveness of miRNA-mediated regulation in terms of the optimal information flow achievable between modulator (transcription factors) and target nodes (long RNAs). Our findings show that, while a sufficiently large degree of target derepression is needed to activate miRNA-mediated transmission, (a) in case of differential mechanisms of complex processing and/or transcriptional capabilities, regulation by a post-transcriptional miRNA-channel can outperform that achieved through direct transcriptional control; moreover, (b) in the presence of large populations of weakly interacting miRNA molecules the extra noise coming from titration disappears, allowing the miRNA-channel to process information as effectively as the direct channel. These observations establish the limits of miRNA-mediated post-transcriptional cross-talk and suggest that, besides providing a degree of noise buffering, this type of control may be effectively employed in cells both as a failsafe mechanism and as a preferential fine tuner of gene expression, pointing to the specific situations in which each of these functionalities is maximized. |
q-bio/0608012 | Leo Liberti | Carlile Lavor, Leo Liberti and Nelson Maculan | The Discretizable Molecular Distance Geometry Problem | 23 pages, 9 figures | null | null | null | q-bio.BM q-bio.QM | null | Given a weighted undirected graph $G=(V,E,d)$, the Molecular Distance
Geometry Problem (MDGP) is that of finding a function $x:G\to \mathbb{R}^{3}$,
where $||x(u)-x(v)||=d(u,v)$ for each $\{u,v\}\in E$. We show that under a few
assumptions usually satisfied in proteins, the MDGP can be formulated as a
search in a discrete space. We call this MDGP subclass the Discretizable MDGP
(DMDGP). We show that the DMDGP is \textbf{NP}-complete and we propose an
algorithm, called Branch-and-Prune (BP), which solves the DMDGP exactly. The BP
algorithm performs exceptionally well in terms of solution accuracy and can
find all solutions to any DMDGP instance. We successfully test the BP algorithm
on several randomly generated instances.
| [
{
"created": "Sat, 5 Aug 2006 10:44:11 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Lavor",
"Carlile",
""
],
[
"Liberti",
"Leo",
""
],
[
"Maculan",
"Nelson",
""
]
] | Given a weighted undirected graph $G=(V,E,d)$, the Molecular Distance Geometry Problem (MDGP) is that of finding a function $x:G\to \mathbb{R}^{3}$, where $||x(u)-x(v)||=d(u,v)$ for each $\{u,v\}\in E$. We show that under a few assumptions usually satisfied in proteins, the MDGP can be formulated as a search in a discrete space. We call this MDGP subclass the Discretizable MDGP (DMDGP). We show that the DMDGP is \textbf{NP}-complete and we propose an algorithm, called Branch-and-Prune (BP), which solves the DMDGP exactly. The BP algorithm performs exceptionally well in terms of solution accuracy and can find all solutions to any DMDGP instance. We successfully test the BP algorithm on several randomly generated instances. |
2103.11214 | Biraja Ghoshal | Bhargab Ghoshal, Biraja Ghoshal, Stephen Swift, Allan Tucker | Uncertainty Estimation in SARS-CoV-2 B-cell Epitope Prediction for
Vaccine Development | Paper accepted for the 19th International Conference on Artificial
Intelligence in Medicine | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | B-cell epitopes play a key role in stimulating B-cells, triggering the
primary immune response which results in antibody production as well as the
establishment of long-term immunity in the form of memory cells. Consequently,
being able to accurately predict appropriate linear B-cell epitope regions
would pave the way for the development of new protein-based vaccines. Knowing
how much confidence there is in a prediction is also essential for gaining
clinicians' trust in the technology. In this article, we propose a calibrated
uncertainty estimation in deep learning to approximate variational Bayesian
inference using MC-DropWeights to predict epitope regions using the data from
the immune epitope database. Applied to SARS-CoV-2, this approach predicts
B-cell epitopes more reliably than standard methods and can help identify safe
and effective vaccine candidates against COVID-19.
| [
{
"created": "Sat, 20 Mar 2021 17:10:49 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Ghoshal",
"Bhargab",
""
],
[
"Ghoshal",
"Biraja",
""
],
[
"Swift",
"Stephen",
""
],
[
"Tucker",
"Allan",
""
]
] | B-cell epitopes play a key role in stimulating B-cells, triggering the primary immune response which results in antibody production as well as the establishment of long-term immunity in the form of memory cells. Consequently, being able to accurately predict appropriate linear B-cell epitope regions would pave the way for the development of new protein-based vaccines. Knowing how much confidence there is in a prediction is also essential for gaining clinicians' trust in the technology. In this article, we propose a calibrated uncertainty estimation in deep learning to approximate variational Bayesian inference using MC-DropWeights to predict epitope regions using the data from the immune epitope database. Applied to SARS-CoV-2, this approach predicts B-cell epitopes more reliably than standard methods and can help identify safe and effective vaccine candidates against COVID-19.
1709.08628 | Marc Barthelemy | Giulia Carra, Kirone Mallick, Marc Barthelemy | The coalescing colony model: mean-field, scaling, and geometry | Paper (6 pages, 8 figures) and Supp. Material (4 pages, 4 figures) | Phys. Rev. E 96, 062316 (2017) | 10.1103/PhysRevE.96.062316 | null | q-bio.PE cond-mat.stat-mech physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the coalescing model where a `primary' colony grows and randomly
emits secondary colonies that spread and eventually coalesce with it. This
model describes population proliferation in theoretical ecology, tumor growth
and is also of great interest for modeling the development of cities. Assuming
the primary colony to be always spherical of radius $r(t)$ and the emission
rate proportional to $r(t)^\theta$ where $\theta>0$, we derive the mean-field
equations governing the dynamics of the primary colony, calculate the scaling
exponents versus $\theta$ and compare our results with numerical simulations.
We then critically test the validity of the circular approximation and show
that it is sound for a constant emission rate ($\theta=0$). However, when the
emission rate is proportional to the perimeter, the circular approximation
breaks down and the roughness of the primary colony can not be discarded, thus
modifying the scaling exponents.
| [
{
"created": "Mon, 25 Sep 2017 12:57:29 GMT",
"version": "v1"
}
] | 2018-01-03 | [
[
"Carra",
"Giulia",
""
],
[
"Mallick",
"Kirone",
""
],
[
"Barthelemy",
"Marc",
""
]
] | We analyze the coalescing model where a `primary' colony grows and randomly emits secondary colonies that spread and eventually coalesce with it. This model describes population proliferation in theoretical ecology, tumor growth and is also of great interest for modeling the development of cities. Assuming the primary colony to be always spherical of radius $r(t)$ and the emission rate proportional to $r(t)^\theta$ where $\theta>0$, we derive the mean-field equations governing the dynamics of the primary colony, calculate the scaling exponents versus $\theta$ and compare our results with numerical simulations. We then critically test the validity of the circular approximation and show that it is sound for a constant emission rate ($\theta=0$). However, when the emission rate is proportional to the perimeter, the circular approximation breaks down and the roughness of the primary colony can not be discarded, thus modifying the scaling exponents. |
2108.11546 | Prateek Kunwar | Prateek Kunwar (1), Oleksandr Markovichenko (1), Monique Chyba (1),
Yuriy Mileyko (1), Alice Koniges (2), Thomas Lee (3) ((1) Applied and
computational Epidemiological Studies (ACES), University of Hawai'i at Manoa
Department of Mathematics, Honolulu, Hawai'i, United States, (2) Hawai'i Data
Science Institute, University of Hawai'i at Manoa, Honolulu, Hawai'i, United
States, (3) Office of Public Health Studies, University of Hawai'i at Manoa,
Honolulu, Hawai'i, United States) | A study of computational and conceptual complexities of compartment and
agent based models | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The ongoing COVID-19 pandemic highlights the essential role of mathematical
models in understanding the spread of the virus along with a quantifiable and
science-based prediction of the impact of various mitigation measures. Numerous
types of models have been employed with various levels of success. This leads
to the question of what kind of a mathematical model is most appropriate for a
given situation. We consider two widely used types of models: equation-based
models (such as standard compartmental epidemiological models) and agent-based
models. We assess their performance by modeling the spread of COVID-19 on the
Hawaiian island of Oahu under different scenarios. We show that when it comes
to information crucial to decision making, both models produce very similar
results. At the same time, the two types of models exhibit very different
characteristics when considering their computational and conceptual complexity.
Consequently, we conclude that choosing the model should be mostly guided by
available computational and human resources.
| [
{
"created": "Thu, 26 Aug 2021 01:59:55 GMT",
"version": "v1"
}
] | 2021-08-27 | [
[
"Kunwar",
"Prateek",
""
],
[
"Markovichenko",
"Oleksandr",
""
],
[
"Chyba",
"Monique",
""
],
[
"Mileyko",
"Yuriy",
""
],
[
"Koniges",
"Alice",
""
],
[
"Lee",
"Thomas",
""
]
] | The ongoing COVID-19 pandemic highlights the essential role of mathematical models in understanding the spread of the virus along with a quantifiable and science-based prediction of the impact of various mitigation measures. Numerous types of models have been employed with various levels of success. This leads to the question of what kind of a mathematical model is most appropriate for a given situation. We consider two widely used types of models: equation-based models (such as standard compartmental epidemiological models) and agent-based models. We assess their performance by modeling the spread of COVID-19 on the Hawaiian island of Oahu under different scenarios. We show that when it comes to information crucial to decision making, both models produce very similar results. At the same time, the two types of models exhibit very different characteristics when considering their computational and conceptual complexity. Consequently, we conclude that choosing the model should be mostly guided by available computational and human resources. |
2004.01453 | Dror Meidan | Dror Meidan, Nava Schulmann, Reuven Cohen, Simcha Haber, Eyal Yaniv,
Ronit Sarid, Baruch Barzel | Alternating quarantine for sustainable epidemic mitigation | 36 pages, 13 figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Absent a drug or vaccine, containing epidemic outbreaks is achieved by means
of social distancing, specifically mobility restrictions and lock-downs. Such
measures impose a hurtful toll on the economy, and are difficult to sustain for
extended periods. As an alternative, we propose here an alternating quarantine
strategy, in which at every instance, half of the population remains under
lock-down while the other half continues to be active, maintaining a routine of
weekly succession between activity and quarantine. This regime affords a dual
partition: half of the population interacts for only half of the time,
resulting in a dramatic reduction in transmission, comparable to that achieved
by a population-wide lock-down. All the while, it enables socioeconomic
continuity at $50\%$ capacity. The proposed weekly alternations also address an
additional challenge, with specific relevance to COVID-19. Indeed, SARS-CoV-2
exhibits a relatively long incubation period, in which individuals experience
no symptoms, but may already contribute to the spread. Unable to selectively
isolate these invisible spreaders, we resort to population-wide restrictions.
However, under the alternating quarantine routine, if an individual was exposed
during their active week, by the time they complete their quarantine they will,
in most cases, begin to exhibit symptoms. Hence this strategy isolates the
majority of pre-symptomatic individuals during their infectious phase, leading
to a rapid decline in the viral spread, thus addressing one of the main
challenges in COVID-19 mitigation.
| [
{
"created": "Fri, 3 Apr 2020 10:00:00 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Apr 2020 12:49:40 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Sep 2020 12:16:23 GMT",
"version": "v3"
},
{
"created": "Sat, 21 Nov 2020 15:57:10 GMT",
"version": "v4"
}
] | 2020-11-24 | [
[
"Meidan",
"Dror",
""
],
[
"Schulmann",
"Nava",
""
],
[
"Cohen",
"Reuven",
""
],
[
"Haber",
"Simcha",
""
],
[
"Yaniv",
"Eyal",
""
],
[
"Sarid",
"Ronit",
""
],
[
"Barzel",
"Baruch",
""
]
] | Absent a drug or vaccine, containing epidemic outbreaks is achieved by means of social distancing, specifically mobility restrictions and lock-downs. Such measures impose a hurtful toll on the economy, and are difficult to sustain for extended periods. As an alternative, we propose here an alternating quarantine strategy, in which at every instance, half of the population remains under lock-down while the other half continues to be active, maintaining a routine of weekly succession between activity and quarantine. This regime affords a dual partition: half of the population interacts for only half of the time, resulting in a dramatic reduction in transmission, comparable to that achieved by a population-wide lock-down. All the while, it enables socioeconomic continuity at $50\%$ capacity. The proposed weekly alternations also address an additional challenge, with specific relevance to COVID-19. Indeed, SARS-CoV-2 exhibits a relatively long incubation period, in which individuals experience no symptoms, but may already contribute to the spread. Unable to selectively isolate these invisible spreaders, we resort to population-wide restrictions. However, under the alternating quarantine routine, if an individual was exposed during their active week, by the time they complete their quarantine they will, in most cases, begin to exhibit symptoms. Hence this strategy isolates the majority of pre-symptomatic individuals during their infectious phase, leading to a rapid decline in the viral spread, thus addressing one of the main challenges in COVID-19 mitigation.
2208.13484 | Qasim Ali | Qasim Ali, Sen Ma, Umar Farooq, Jiakuan Niu, Fen Li, Muhammad
Abaidullah, Boshuai Liu, Shaokai La, Defeng Li, Zhichang Wang, Hao Sun, Yalei
Cui, and Yinghua Shi | Pasture Intake Protects Against Commercial Diet-induced
Lipopolysaccharide Production Facilitated by Gut Microbiota through
Activating Intestinal Alkaline Phosphatase Enzyme in Meat Geese | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | In-house feeding system (IHF, a low dietary fiber source) may cause altered
cecal microbiota composition and inflammatory responses in meat geese via
increased endotoxemia (lipopolysaccharides) with reduced intestinal alkaline
phosphatase (ALP) production. The effects of artificial pasture grazing system
(AGF, a high dietary fiber source) on modulating gut microbiota architecture
and gut barrier functions have not been investigated in meat geese. The
intestinal ALP functions to regulate gut microbial homeostasis and barrier
function, and appears to inhibit pro-inflammatory cytokines by reducing
LPS-induced reactive oxygen species (ROS) production. The purpose of our study
was to investigate whether this enzyme could play a critical role in
attenuating ROS generation and then ROS-facilitated NF-$\kappa$B
pathway-induced systemic inflammation in meat geese. First, we assessed the
impacts of IHF and AGF on gut microbial composition via 16S rRNA sequencing in
meat geese. In the gut
microbiota analysis, meat geese supplemented with pasture demonstrated a
significant reduction in microbial richness and diversity compared to IHF meat
geese, demonstrating the antimicrobial, antioxidation, and anti-inflammatory
ability of the AGF system. Second, host marker analysis through protein
expression of serum and cecal tissues and quantitative PCR of cecal tissues
was performed. We
confirmed a significant increase in intestinal ALP-induced Nrf2 signaling
pathway representing LPS dephosphorylation mediated TLR4/MyD88 induced ROS
reduction mechanisms in AGF meat geese. Further, the correlation analysis of
top 44 host markers with gut microbiota shows that artificial pasture intake
induced gut barrier functions by reducing ROS-mediated NF-$\kappa$B
pathway-induced gut permeability, systemic inflammation, and aging phenotypes.
| [
{
"created": "Mon, 29 Aug 2022 10:37:31 GMT",
"version": "v1"
}
] | 2022-08-30 | [
[
"Ali",
"Qasim",
""
],
[
"Ma",
"Sen",
""
],
[
"Farooq",
"Umar",
""
],
[
"Niu",
"Jiakuan",
""
],
[
"Li",
"Fen",
""
],
[
"Abaidullah",
"Muhammad",
""
],
[
"Liu",
"Boshuai",
""
],
[
"La",
"Shaokai",
""
],
[
"Li",
"Defeng",
""
],
[
"Wang",
"Zhichang",
""
],
[
"Sun",
"Hao",
""
],
[
"Cui",
"Yalei",
""
],
[
"Shi",
"Yinghua",
""
]
] | In-house feeding system (IHF, a low dietary fiber source) may cause altered cecal microbiota composition and inflammatory responses in meat geese via increased endotoxemia (lipopolysaccharides) with reduced intestinal alkaline phosphatase (ALP) production. The effects of artificial pasture grazing system (AGF, a high dietary fiber source) on modulating gut microbiota architecture and gut barrier functions have not been investigated in meat geese. The intestinal ALP, which regulates gut microbial homeostasis and barrier function, appears to inhibit pro-inflammatory cytokines by reducing LPS-induced reactive oxygen species (ROS) production. The purpose of our study was to investigate whether this enzyme could play a critical role in attenuating ROS generation and, in turn, ROS-facilitated NF-κB pathway-induced systemic inflammation in meat geese. First, we assessed the impacts of IHF and AGF on gut microbial composition via 16S rRNA sequencing in meat geese. In the gut microbiota analysis, meat geese supplemented with pasture demonstrated a significant reduction in microbial richness and diversity compared to IHF meat geese, demonstrating the antimicrobial, antioxidant, and anti-inflammatory ability of the AGF system. Second, host marker analysis through protein expression of serum and cecal tissues and quantitative PCR of cecal tissues was evaluated. We confirmed a significant increase in intestinal ALP-induced Nrf2 signaling pathway, representing LPS dephosphorylation-mediated, TLR4/MyD88-induced ROS reduction mechanisms in AGF meat geese. Further, the correlation analysis of the top 44 host markers with gut microbiota shows that artificial pasture intake induced gut barrier functions by reducing ROS-mediated NF-κB pathway-induced gut permeability, systemic inflammation, and aging phenotypes.
2407.12548 | David Waxman | David Waxman | Exact path-integral representation of the Wright-Fisher model with
mutation and selection | 13 pages, 1 figure | null | null | null | q-bio.PE cond-mat.stat-mech | http://creativecommons.org/licenses/by/4.0/ | The Wright-Fisher model describes a biological population containing a finite
number of individuals. In this work we consider a Wright-Fisher model for a
randomly mating population, where selection and mutation act at an unlinked
locus. The selection acting has a general form, and the locus may have two or
more alleles. We determine an exact representation of the time dependent
transition probability of such a model in terms of a path integral. Path
integrals were introduced in physics and mathematics, and have found numerous
applications in different fields, where a probability distribution, or closely
related object, is represented as a 'sum' of contributions over all paths or
trajectories between two points. Path integrals provide alternative
calculational routes to problems, and may be a source of new intuition and
suggest new approximations. For the case of two alleles, we relate the exact
Wright-Fisher path-integral result to the path-integral form of the transition
density under the diffusion approximation. We determine properties of the
Wright-Fisher transition probability for multiple alleles. We show how, in the
absence of mutation, the Wright-Fisher transition probability incorporates
phenomena such as fixation and loss.
| [
{
"created": "Wed, 17 Jul 2024 13:29:34 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Waxman",
"David",
""
]
] | The Wright-Fisher model describes a biological population containing a finite number of individuals. In this work we consider a Wright-Fisher model for a randomly mating population, where selection and mutation act at an unlinked locus. The selection acting has a general form, and the locus may have two or more alleles. We determine an exact representation of the time dependent transition probability of such a model in terms of a path integral. Path integrals were introduced in physics and mathematics, and have found numerous applications in different fields, where a probability distribution, or closely related object, is represented as a 'sum' of contributions over all paths or trajectories between two points. Path integrals provide alternative calculational routes to problems, and may be a source of new intuition and suggest new approximations. For the case of two alleles, we relate the exact Wright-Fisher path-integral result to the path-integral form of the transition density under the diffusion approximation. We determine properties of the Wright-Fisher transition probability for multiple alleles. We show how, in the absence of mutation, the Wright-Fisher transition probability incorporates phenomena such as fixation and loss. |
2102.02342 | Steffen Docken | Steffen S. Docken, Colleen E. Clancy, Timothy J. Lewis | Rate-dependent effects of lidocaine on cardiac dynamics: Development and
analysis of a low-dimensional drug-channel interaction model | 38 pages, 6 figures | null | 10.1371/journal.pcbi.1009145 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-dependent Na+ channel blockers are often prescribed to treat cardiac
arrhythmias, but many Na+ channel blockers are known to have pro-arrhythmic
side effects. While the anti and proarrhythmic potential of a Na+ channel
blocker is thought to depend on the characteristics of its rate-dependent
block, the mechanisms linking these two attributes are unclear. Furthermore,
how specific properties of rate-dependent block arise from the binding kinetics
of a particular drug is poorly understood. Here, we examine the rate-dependent
effects of the Na+ channel blocker lidocaine by constructing and analyzing a
novel drug-channel interaction model. First, we identify the predominant mode
of lidocaine binding in a 24 variable Markov model for lidocaine-Na+ channel
interaction by Moreno et al. We then develop a novel 3-variable lidocaine-Na+
channel interaction model that incorporates only the predominant mode of drug
binding. Our low-dimensional model replicates the extensive voltage-clamp data
used to parameterize the Moreno et al. model. Furthermore, the effects of
lidocaine on action potential upstroke velocity and conduction velocity in our
model are similar to those predicted by the Moreno et al. model. By exploiting
the low-dimensionality of our model, we derive an algebraic expression for
the level of rate-dependent block as a function of pacing frequency, restitution
properties, diastolic and plateau potentials, and drug binding rate constants.
Our model predicts that the level of rate-dependent block is sensitive to
alterations in restitution properties and increases in diastolic potential, but
it is insensitive to variations in the shape of the action potential waveform
and lidocaine binding rates.
| [
{
"created": "Wed, 3 Feb 2021 23:52:34 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Docken",
"Steffen S.",
""
],
[
"Clancy",
"Colleen E.",
""
],
[
"Lewis",
"Timothy J.",
""
]
] | State-dependent Na+ channel blockers are often prescribed to treat cardiac arrhythmias, but many Na+ channel blockers are known to have pro-arrhythmic side effects. While the anti and proarrhythmic potential of a Na+ channel blocker is thought to depend on the characteristics of its rate-dependent block, the mechanisms linking these two attributes are unclear. Furthermore, how specific properties of rate-dependent block arise from the binding kinetics of a particular drug is poorly understood. Here, we examine the rate-dependent effects of the Na+ channel blocker lidocaine by constructing and analyzing a novel drug-channel interaction model. First, we identify the predominant mode of lidocaine binding in a 24 variable Markov model for lidocaine-Na+ channel interaction by Moreno et al. We then develop a novel 3-variable lidocaine-Na+ channel interaction model that incorporates only the predominant mode of drug binding. Our low-dimensional model replicates the extensive voltage-clamp data used to parameterize the Moreno et al. model. Furthermore, the effects of lidocaine on action potential upstroke velocity and conduction velocity in our model are similar to those predicted by the Moreno et al. model. By exploiting the low-dimensionality of our model, we derive an algebraic expression for level of rate-dependent block as a function of pacing frequency, restitution properties, diastolic and plateau potentials, and drug binding rate constants. Our model predicts that the level of rate-dependent block is sensitive to alterations in restitution properties and increases in diastolic potential, but it is insensitive to variations in the shape of the action potential waveform and lidocaine binding rates. |
1412.1013 | Alberto Calderone Dr. | Alberto Calderone | Assembling biological boolean networks using manually curated databases
and prediction algorithms | null | null | null | null | q-bio.MN cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the large quantity of information available, thorough searches in
various biological databases are still needed in order to reconstruct and
understand the steps that lead to known or new phenomena. By using
protein-protein interaction networks and algorithms to extract relevant
interconnections among proteins of interest, it is possible to assemble
subnetworks from global interactomes. Using these extracted networks it is
possible to use algorithms to predict signal directions while activation and
inhibition effects can be predicted using RNA interference screenings. The
result of this approach is the automatic generation of boolean networks. This
way of modelling dynamical systems allows the discovery of steady states and
the prediction of stimuli response.
| [
{
"created": "Tue, 2 Dec 2014 18:36:09 GMT",
"version": "v1"
}
] | 2014-12-03 | [
[
"Calderone",
"Alberto",
""
]
] | Despite the large quantity of information available, thorough searches in various biological databases are still needed in order to reconstruct and understand the steps that lead to known or new phenomena. By using protein-protein interaction networks and algorithms to extract relevant interconnections among proteins of interest, it is possible to assemble subnetworks from global interactomes. Using these extracted networks it is possible to use algorithms to predict signal directions while activation and inhibition effects can be predicted using RNA interference screenings. The result of this approach is the automatic generation of boolean networks. This way of modelling dynamical systems allows the discovery of steady states and the prediction of stimuli response.
2003.00728 | Fan Hu | Fan Hu, Jiaxin Jiang, Peng Yin | Prediction of Potential Commercially Available Inhibitors against
SARS-CoV-2 by Multi-Task Deep Learning Model | null | Biomolecules. 2022; 12(8):1156 | 10.3390/biom12081156 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The outbreak of COVID-19 caused millions of deaths worldwide, and the number
of total infections is still rising. It is necessary to identify some
potentially effective drugs that can be used to prevent the development of
severe symptoms or even death for those infected. Fortunately, many efforts
have been made, and several effective drugs have been identified. The rapidly
increasing amount of data is of great help for training an effective and
specific deep learning model. In this study, we propose a multi-task deep
learning model for the purpose of screening commercially available and
effective inhibitors against SARS-CoV-2. First, we pretrained a model on
several heterogeneous protein-ligand interaction datasets. The model achieved
competitive results on some benchmark datasets. Next, a coronavirus-specific
dataset was collected and used to fine-tune the model. Then, the fine-tuned
model was used to select commercially available drugs against SARS-CoV-2
protein targets. Overall, twenty compounds were listed as potential inhibitors.
We further explored the model interpretability and observed the predicted
important binding sites. Based on this prediction, molecular docking was also
performed to visualize the binding modes of the selected inhibitors.
| [
{
"created": "Mon, 2 Mar 2020 09:37:16 GMT",
"version": "v1"
},
{
"created": "Mon, 31 May 2021 02:16:08 GMT",
"version": "v2"
},
{
"created": "Sat, 3 Sep 2022 10:00:40 GMT",
"version": "v3"
}
] | 2022-09-07 | [
[
"Hu",
"Fan",
""
],
[
"Jiang",
"Jiaxin",
""
],
[
"Yin",
"Peng",
""
]
] | The outbreak of COVID-19 caused millions of deaths worldwide, and the number of total infections is still rising. It is necessary to identify some potentially effective drugs that can be used to prevent the development of severe symptoms or even death for those infected. Fortunately, many efforts have been made, and several effective drugs have been identified. The rapidly increasing amount of data is of great help for training an effective and specific deep learning model. In this study, we propose a multi-task deep learning model for the purpose of screening commercially available and effective inhibitors against SARS-CoV-2. First, we pretrained a model on several heterogeneous protein-ligand interaction datasets. The model achieved competitive results on some benchmark datasets. Next, a coronavirus-specific dataset was collected and used to fine-tune the model. Then, the fine-tuned model was used to select commercially available drugs against SARS-CoV-2 protein targets. Overall, twenty compounds were listed as potential inhibitors. We further explored the model interpretability and observed the predicted important binding sites. Based on this prediction, molecular docking was also performed to visualize the binding modes of the selected inhibitors.
1405.0041 | Artem Kaznatcheev | Artem Kaznatcheev, Marcel Montrey, Thomas R. Shultz | Evolving useful delusions: Subjectively rational selfishness leads to
objectively irrational cooperation | 6 pages, 2 figures, to appear at CogSci2014 | Proceedings of the 36th Annual Conference of the Cognitive Science
Society (2014) (pp. 731-736). Austin, TX: Cognitive Science Society | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a framework within evolutionary game theory for studying the
distinction between objective and subjective rationality and apply it to the
evolution of cooperation on 3-regular random graphs. In our simulations, agents
evolve misrepresentations of objective reality that help them cooperate and
maintain higher social welfare in the Prisoner's dilemma. These agents act
rationally on their subjective representations of the world, but irrationally
from the perspective of an external observer. We model misrepresentations as
subjective perceptions of payoffs and quasi-magical thinking as an inferential
bias, finding that the former is more conducive to cooperation. This highlights
the importance of internal representations, not just observed behavior, in
evolutionary thought. Our results provide support for the interface theory of
perception and suggest that the individual's interface can serve not only the
individual's aims, but also society as a whole, offering insight into social
phenomena such as religion.
| [
{
"created": "Wed, 30 Apr 2014 21:42:12 GMT",
"version": "v1"
}
] | 2021-07-02 | [
[
"Kaznatcheev",
"Artem",
""
],
[
"Montrey",
"Marcel",
""
],
[
"Shultz",
"Thomas R.",
""
]
] | We introduce a framework within evolutionary game theory for studying the distinction between objective and subjective rationality and apply it to the evolution of cooperation on 3-regular random graphs. In our simulations, agents evolve misrepresentations of objective reality that help them cooperate and maintain higher social welfare in the Prisoner's dilemma. These agents act rationally on their subjective representations of the world, but irrationally from the perspective of an external observer. We model misrepresentations as subjective perceptions of payoffs and quasi-magical thinking as an inferential bias, finding that the former is more conducive to cooperation. This highlights the importance of internal representations, not just observed behavior, in evolutionary thought. Our results provide support for the interface theory of perception and suggest that the individual's interface can serve not only the individual's aims, but also society as a whole, offering insight into social phenomena such as religion. |
2210.13526 | Baihan Lin | Baihan Lin | Computational Inference in Cognitive Science: Operational, Societal and
Ethical Considerations | null | null | null | null | q-bio.NC cs.AI cs.CL cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emerging research frontiers and computational advances have gradually
transformed cognitive science into a multidisciplinary and data-driven field.
As a result, there is a proliferation of cognitive theories investigated and
interpreted through different academic lenses and at different levels of
abstraction. We formulate the applied aspect of this challenge as
computational cognitive inference, and describe the major routes of
computational approaches. To balance the potential optimism alongside the speed
and scale of the data-driven era of cognitive science, we propose to inspect
this trend in more empirical terms by identifying the operational challenges,
societal impacts and ethical guidelines in conducting research and interpreting
results from the computational inference in cognitive science.
| [
{
"created": "Mon, 24 Oct 2022 18:27:27 GMT",
"version": "v1"
}
] | 2022-10-26 | [
[
"Lin",
"Baihan",
""
]
] | Emerging research frontiers and computational advances have gradually transformed cognitive science into a multidisciplinary and data-driven field. As a result, there is a proliferation of cognitive theories investigated and interpreted through different academic lenses and at different levels of abstraction. We formulate the applied aspect of this challenge as computational cognitive inference, and describe the major routes of computational approaches. To balance the potential optimism alongside the speed and scale of the data-driven era of cognitive science, we propose to inspect this trend in more empirical terms by identifying the operational challenges, societal impacts and ethical guidelines in conducting research and interpreting results from the computational inference in cognitive science.
2303.00285 | Nicolas Robin | Nicolas Robin (ACTES), E. Hermand (URePSSS), V. Hatchi (ACTES), O. Hue
(ACTES) | Strat{\'e}gies de gestion de la chaleur et performances sportives de
haut niveau: {\'e}clairage psychophysiologique et recommandations
appliqu{\'e}es | Science & Sports, 2023 | null | 10.1016/j.scispo.2022.05.007 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objectives: This article sheds light on the different heat stress management
strategies, common or innovative, in order to analyze those that would be the
most suitable and effective for athletes who have to compete in humid and/or
hot environments. News: The Paris summer Olympic Games in 2024 will take place
from July 26th to September 8th with a high risk for athletes to practice their
sport at high temperatures, thereby imposing physiological (cardiovascular,
ventilatory, thermoregulatory, etc.) and psychological (early mental fatigue,
decreased motivation, discomfort, etc.) constraints, which can have a major
negative impact on their performance. Prospects and projects: To perform in a
hot environment,
it is now recommended to use strategies, in particular active acclimatization
which promotes physiological but also psychological adaptations. Similarly,
fluid management and cooling techniques have potentially beneficial effects on
physiological factors but their psychological consequences are still poorly
understood and need to be investigated. Finally, mental strategies (goal
setting, mental imagery, positive self-talk, music, etc.) or cognitive training
in the heat can limit poor performance in this condition. The effects of
combining physical and mental techniques, as well as innovative strategies such
as cold suggestion, are also being investigated. Conclusion: For each strategy
presented, the scientific work has enabled the development of practical
recommendations for athletes, coaches and mental trainers in order to allow
them to physiologically and psychologically anticipate the effects of high
relative humidity and/or high temperature.
| [
{
"created": "Wed, 1 Mar 2023 07:25:02 GMT",
"version": "v1"
}
] | 2023-03-02 | [
[
"Robin",
"Nicolas",
"",
"ACTES"
],
[
"Hermand",
"E.",
"",
"URePSSS"
],
[
"Hatchi",
"V.",
"",
"ACTES"
],
[
"Hue",
"O.",
"",
"ACTES"
]
] | Objectives: This article sheds light on the different heat stress management strategies, common or innovative, in order to analyze those that would be the most suitable and effective for athletes who have to compete in humid and/or hot environments. News: The Paris summer Olympic Games in 2024 will take place from July 26th to September 8th with a high risk for athletes to practice their sport at high temperatures, thereby imposing physiological (cardiovascular, ventilatory, thermoregulatory, etc.) and psychological (early mental fatigue, decreased motivation, discomfort, etc.) constraints, which can have a major negative impact on their performance. Prospects and projects: To perform in a hot environment, it is now recommended to use strategies, in particular active acclimatization which promotes physiological but also psychological adaptations. Similarly, fluid management and cooling techniques have potentially beneficial effects on physiological factors but their psychological consequences are still poorly understood and need to be investigated. Finally, mental strategies (goal setting, mental imagery, positive self-talk, music, etc.) or cognitive training in the heat can limit poor performance in this condition. The effects of combining physical and mental techniques, as well as innovative strategies such as cold suggestion, are also being investigated. Conclusion: For each strategy presented, the scientific work has enabled the development of practical recommendations for athletes, coaches and mental trainers in order to allow them to physiologically and psychologically anticipate the effects of high relative humidity and/or high temperature.
2207.06616 | Wengong Jin | Wengong Jin, Regina Barzilay, Tommi Jaakkola | Antibody-Antigen Docking and Design via Hierarchical Equivariant
Refinement | null | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational antibody design seeks to automatically create an antibody that
binds to an antigen. The binding affinity is governed by the 3D binding
interface where antibody residues (paratope) closely interact with antigen
residues (epitope). Thus, predicting 3D paratope-epitope complex (docking) is
the key to finding the best paratope. In this paper, we propose a new model
called Hierarchical Equivariant Refinement Network (HERN) for paratope docking
and design. During docking, HERN employs a hierarchical message passing network
to predict atomic forces and use them to refine a binding complex in an
iterative, equivariant manner. During generation, its autoregressive decoder
progressively docks generated paratopes and builds a geometric representation
of the binding interface to guide the next residue choice. Our results show
that HERN significantly outperforms prior state-of-the-art on paratope docking
and design benchmarks.
| [
{
"created": "Thu, 14 Jul 2022 02:13:25 GMT",
"version": "v1"
}
] | 2022-07-15 | [
[
"Jin",
"Wengong",
""
],
[
"Barzilay",
"Regina",
""
],
[
"Jaakkola",
"Tommi",
""
]
] | Computational antibody design seeks to automatically create an antibody that binds to an antigen. The binding affinity is governed by the 3D binding interface where antibody residues (paratope) closely interact with antigen residues (epitope). Thus, predicting 3D paratope-epitope complex (docking) is the key to finding the best paratope. In this paper, we propose a new model called Hierarchical Equivariant Refinement Network (HERN) for paratope docking and design. During docking, HERN employs a hierarchical message passing network to predict atomic forces and use them to refine a binding complex in an iterative, equivariant manner. During generation, its autoregressive decoder progressively docks generated paratopes and builds a geometric representation of the binding interface to guide the next residue choice. Our results show that HERN significantly outperforms prior state-of-the-art on paratope docking and design benchmarks. |
1911.06950 | Negin Zaraee | Negin Zaraee, Fulya Ekiz kanik, Abdul Muyeed Bhuiya, Emily S. Gong,
Matthew T. Geib, Nese Lortlar \"Unl\"u, Ayca Yalcin Ozkumur, Julia R. Dupuis,
M. Selim \"Unl\"u | Highly Sensitive and Label-free Digital Detection of Whole Cell E. coli
with Interferometric Reflectance Imaging | 15 pages, 6 figures, 1 table | null | 10.1016/j.bios.2020.112258 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacterial infectious diseases are a major threat to human health. Timely and
sensitive pathogenic bacteria detection is crucial in identifying the bacterial
contaminations and preventing the spread of infectious diseases. Due to
limitations of conventional bacteria detection techniques there have been
concerted research efforts towards development of new biosensors. Biosensors
offering label free, whole bacteria detection are highly desirable over those
relying on label based or pathogenic molecular components detection. The major
advantage is eliminating the additional time and cost required for labeling or
extracting the desired bacterial components. Here, we demonstrate rapid,
sensitive and label free E. coli detection utilizing interferometric
reflectance imaging enhancement allowing for visualizing individual pathogens
captured on the surface. Enabled by our ability to count individual bacteria on
a large sensor surface, we demonstrate a limit of detection of 2.2 CFU/ml from
a buffer solution with no sample preparation. To the best of our knowledge,
this high level of sensitivity for whole E. coli detection is unprecedented in
label free biosensing. The specificity of our biosensor is validated by
comparing the response to target bacteria E. coli and non target bacteria S.
aureus, K. pneumoniae and P. aeruginosa. The biosensor performance in tap water
also proves that its detection capability is unaffected by the sample
complexity. Furthermore, our sensor platform provides high optical
magnification imaging and thus validation of recorded detection events as the
target bacteria based on morphological characterization. Therefore, our
sensitive and label free detection method offers new perspectives for direct
bacterial detection in real matrices and clinical samples.
| [
{
"created": "Sat, 16 Nov 2019 03:56:51 GMT",
"version": "v1"
}
] | 2020-05-26 | [
[
"Zaraee",
"Negin",
""
],
[
"kanik",
"Fulya Ekiz",
""
],
[
"Bhuiya",
"Abdul Muyeed",
""
],
[
"Gong",
"Emily S.",
""
],
[
"Geib",
"Matthew T.",
""
],
[
"Ünlü",
"Nese Lortlar",
""
],
[
"Ozkumur",
"Ayca Yalcin",
""
],
[
"Dupuis",
"Julia R.",
""
],
[
"Ünlü",
"M. Selim",
""
]
] | Bacterial infectious diseases are a major threat to human health. Timely and sensitive pathogenic bacteria detection is crucial in identifying the bacterial contaminations and preventing the spread of infectious diseases. Due to limitations of conventional bacteria detection techniques there have been concerted research efforts towards development of new biosensors. Biosensors offering label free, whole bacteria detection are highly desirable over those relying on label based or pathogenic molecular components detection. The major advantage is eliminating the additional time and cost required for labeling or extracting the desired bacterial components. Here, we demonstrate rapid, sensitive and label free E. coli detection utilizing interferometric reflectance imaging enhancement allowing for visualizing individual pathogens captured on the surface. Enabled by our ability to count individual bacteria on a large sensor surface, we demonstrate a limit of detection of 2.2 CFU/ml from a buffer solution with no sample preparation. To the best of our knowledge, this high level of sensitivity for whole E. coli detection is unprecedented in label free biosensing. The specificity of our biosensor is validated by comparing the response to target bacteria E. coli and non target bacteria S. aureus, K. pneumonia and P. aeruginosa. The biosensor performance in tap water also proves that its detection capability is unaffected by the sample complexity. Furthermore, our sensor platform provides high optical magnification imaging and thus validation of recorded detection events as the target bacteria based on morphological characterization. Therefore, our sensitive and label free detection method offers new perspectives for direct bacterial detection in real matrices and clinical samples. |
1606.05153 | Paolo Annibale | Carmine di Rienzo and Paolo Annibale | Comparing Single Molecule Tracking and correlative approaches: an
application to the datasets recently presented in Nature Methods by Chenouard
et al | 6 pages, 4 figures | null | 10.1364/OL.41.004503 | null | q-bio.QM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent efforts to survey the numerous software packages available to perform single
molecule tracking (SMT) highlighted a significant dependence of the outcomes on
the specific method used, and the limitation encountered by most techniques to
capture fast movements in a crowded environment. Other approaches to identify
the mode and rapidity of motion of fluorescently labeled biomolecules, that do
not rely on the localization and linking of the images of isolated single
molecules are, however, available. This direct comparison shows that correlative
imaging analysis approaches effectively complement current SMT methods in
circumstances when, due to either the density of the sample, the low signal to
noise ratio or molecular blinking, trajectory linking does not allow one to capture
long-range or fast motion.
| [
{
"created": "Thu, 16 Jun 2016 11:55:09 GMT",
"version": "v1"
}
] | 2019-07-26 | [
[
"di Rienzo",
"Carmine",
""
],
[
"Annibale",
"Paolo",
""
]
] | Recent efforts to survey the numerous software packages available to perform single molecule tracking (SMT) highlighted a significant dependence of the outcomes on the specific method used, and the limitation encountered by most techniques to capture fast movements in a crowded environment. Other approaches to identify the mode and rapidity of motion of fluorescently labeled biomolecules, that do not rely on the localization and linking of the images of isolated single molecules are, however, available. This direct comparison shows that correlative imaging analysis approaches effectively complement current SMT methods in circumstances when, due to either the density of the sample, the low signal to noise ratio or molecular blinking, trajectory linking does not allow one to capture long-range or fast motion.
1307.4103 | Alexey Yanchukov | Alexey Yanchukov and Stephen R. Proulx | Migration-selection balance at multiple loci and selection on dominance
and recombination | includes 6 figures and a Supporting Information. Mathematica notebook
where the numerical results were obtained is available upon request | null | 10.1371/journal.pone.0088651 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A steady influx of a single deleterious multilocus genotype will impose
genetic load on the resident population and leave multiple descendants carrying
various numbers of the foreign alleles. Provided that the foreign types are
rare at equilibrium, and that all immigrant genes will eventually be eliminated
by selection, the population structure can be inferred explicitly from the
deterministic branching process taking place within a single immigrant lineage.
Unless the migration and recombination rates were high, this simple method was
a very close approximation to the simulated migration-selection balance with
all possible multilocus genotypes considered.
| [
{
"created": "Mon, 15 Jul 2013 21:06:44 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Aug 2013 02:02:43 GMT",
"version": "v2"
}
] | 2015-06-16 | [
[
"Yanchukov",
"Alexey",
""
],
[
"Proulx",
"Stephen R.",
""
]
] | A steady influx of a single deleterious multilocus genotype will impose genetic load on the resident population and leave multiple descendants carrying various numbers of the foreign alleles. Provided that the foreign types are rare at equilibrium, and that all immigrant genes will eventually be eliminated by selection, the population structure can be inferred explicitly from the deterministic branching process taking place within a single immigrant lineage. Unless the migration and recombination rates were high, this simple method was a very close approximation to the simulated migration-selection balance with all possible multilocus genotypes considered. |
2112.07150 | Sergey Lobov | Sergey A. Lobov, Alexey N. Mikhaylov, Ekaterina S. Berdnikova, Valeri
A. Makarov, Victor B. Kazantsev | Spatial computing in structured spiking neural networks with a robotic
embodiment | 14 pages, 6 figures | null | 10.3390/math11010234 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | One of the challenges of modern neuroscience is creating a "living computer"
based on neural networks grown in vitro. Such an artificial device is supposed
to perform neurocomputational tasks and interact with the environment when
embodied in a robot. Recent studies have identified the most critical
challenge, the search for a neural network architecture to implement
associative learning. This work proposes a model of modular architecture with
spiking neural networks connected by unidirectional couplings. We show that the
model enables training a neuro-robot according to Pavlovian conditioning. The
robot's performance in obstacle avoidance depends on the ratio of the weights
in inter-network couplings. We show that besides STDP, critical factors for
successful learning are synaptic and neuronal competitions. We use the recently
discovered shortest path rule to implement the synaptic competition. This
method is ready for experimental testing. Strong inhibitory couplings implement
the neuronal competition in the subnetwork responsible for the unconditional
response. Empirical testing of this approach requires a technique for growing
neural networks with a given ratio of excitatory and inhibitory neurons not
available yet. An alternative is building a hybrid system with in vitro neural
networks coupled through hardware memristive connections.
| [
{
"created": "Tue, 14 Dec 2021 04:12:54 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jan 2022 17:12:28 GMT",
"version": "v2"
}
] | 2023-01-04 | [
[
"Lobov",
"Sergey A.",
""
],
[
"Mikhaylov",
"Alexey N.",
""
],
[
"Berdnikova",
"Ekaterina S.",
""
],
[
"Makarov",
"Valeri A.",
""
],
[
"Kazantsev",
"Victor B.",
""
]
] | One of the challenges of modern neuroscience is creating a "living computer" based on neural networks grown in vitro. Such an artificial device is supposed to perform neurocomputational tasks and interact with the environment when embodied in a robot. Recent studies have identified the most critical challenge, the search for a neural network architecture to implement associative learning. This work proposes a model of modular architecture with spiking neural networks connected by unidirectional couplings. We show that the model enables training a neuro-robot according to Pavlovian conditioning. The robot's performance in obstacle avoidance depends on the ratio of the weights in inter-network couplings. We show that besides STDP, critical factors for successful learning are synaptic and neuronal competitions. We use the recently discovered shortest path rule to implement the synaptic competition. This method is ready for experimental testing. Strong inhibitory couplings implement the neuronal competition in the subnetwork responsible for the unconditional response. Empirical testing of this approach requires a technique for growing neural networks with a given ratio of excitatory and inhibitory neurons not available yet. An alternative is building a hybrid system with in vitro neural networks coupled through hardware memristive connections. |
2303.07498 | Shanjun Mao | Shan Tang, Shanjun Mao, Yangyang Chen, Falong Tan, Lihua Duan, Cong
Pian, Xiangxiang Zeng | LRBmat: A Novel Gut Microbial Interaction and Individual Heterogeneity
Inference Method for Colorectal Cancer | null | null | 10.1016/j.jtbi.2023.111538 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many diseases are considered to be closely related to the changes in the gut
microbial community, including colorectal cancer (CRC), which is one of the
most common cancers in the world. The diagnostic classification and etiological
analysis of CRC are two critical issues worthy of attention. Many methods
adopt gut microbiota to address them, but few of them simultaneously take into
account the complex interactions and individual heterogeneity of gut
microbiota, which are two common and important issues in genetics and
intestinal microbiology, especially in high-dimensional cases. In this paper,
a novel method with a Binary matrix based on Logistic Regression (LRBmat) is
proposed to deal with the above problem. The binary matrix can directly weaken
or avoid the influence of heterogeneity, and it also contains information
about gut microbial interactions of any order. Moreover, LRBmat generalizes
well: it can be combined with any machine learning method to enhance it. Real
data analysis on CRC validates the proposed method, which has the best
classification performance compared with the state-of-the-art. Furthermore,
the association rules extracted from the binary matrix of the real data align
well with the biological properties and the existing literature, which is
helpful for the etiological analysis of CRC. The source code for LRBmat is
available at https://github.com/tsnm1/LRBmat.
| [
{
"created": "Mon, 13 Mar 2023 22:13:37 GMT",
"version": "v1"
}
] | 2023-10-19 | [
[
"Tang",
"Shan",
""
],
[
"Mao",
"Shanjun",
""
],
[
"Chen",
"Yangyang",
""
],
[
"Tan",
"Falong",
""
],
[
"Duan",
"Lihua",
""
],
[
"Pian",
"Cong",
""
],
[
"Zeng",
"Xiangxiang",
""
]
] | Many diseases are considered to be closely related to the changes in the gut microbial community, including colorectal cancer (CRC), which is one of the most common cancers in the world. The diagnostic classification and etiological analysis of CRC are two critical issues worthy of attention. Many methods adopt gut microbiota to address them, but few of them simultaneously take into account the complex interactions and individual heterogeneity of gut microbiota, which are two common and important issues in genetics and intestinal microbiology, especially in high-dimensional cases. In this paper, a novel method with a Binary matrix based on Logistic Regression (LRBmat) is proposed to deal with the above problem. The binary matrix can directly weaken or avoid the influence of heterogeneity, and it also contains information about gut microbial interactions of any order. Moreover, LRBmat generalizes well: it can be combined with any machine learning method to enhance it. Real data analysis on CRC validates the proposed method, which has the best classification performance compared with the state-of-the-art. Furthermore, the association rules extracted from the binary matrix of the real data align well with the biological properties and the existing literature, which is helpful for the etiological analysis of CRC. The source code for LRBmat is available at https://github.com/tsnm1/LRBmat. |
2103.10472 | Jared Ostmeyer | Jared Ostmeyer, Scott Christley, Lindsay Cowell | Dynamic Kernel Matching for Non-conforming Data: A Case Study of T-cell
Receptor Datasets | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Most statistical classifiers are designed to find patterns in data where
numbers fit into rows and columns, like in a spreadsheet, but many kinds of
data do not conform to this structure. To uncover patterns in non-conforming
data, we describe an approach for modifying established statistical classifiers
to handle non-conforming data, which we call dynamic kernel matching (DKM). As
examples of non-conforming data, we consider (i) a dataset of T-cell receptor
(TCR) sequences labelled by disease antigen and (ii) a dataset of sequenced TCR
repertoires labelled by patient cytomegalovirus (CMV) serostatus, anticipating
that both datasets contain signatures for diagnosing disease. We successfully
fit statistical classifiers augmented with DKM to both datasets and report the
performance on holdout data using standard metrics and metrics allowing for
indeterminate diagnoses. Finally, we identify the patterns used by our
statistical classifiers to generate predictions and show that these patterns
agree with observations from experimental studies.
| [
{
"created": "Thu, 18 Mar 2021 18:39:14 GMT",
"version": "v1"
}
] | 2021-03-22 | [
[
"Ostmeyer",
"Jared",
""
],
[
"Christley",
"Scott",
""
],
[
"Cowell",
"Lindsay",
""
]
] | Most statistical classifiers are designed to find patterns in data where numbers fit into rows and columns, like in a spreadsheet, but many kinds of data do not conform to this structure. To uncover patterns in non-conforming data, we describe an approach for modifying established statistical classifiers to handle non-conforming data, which we call dynamic kernel matching (DKM). As examples of non-conforming data, we consider (i) a dataset of T-cell receptor (TCR) sequences labelled by disease antigen and (ii) a dataset of sequenced TCR repertoires labelled by patient cytomegalovirus (CMV) serostatus, anticipating that both datasets contain signatures for diagnosing disease. We successfully fit statistical classifiers augmented with DKM to both datasets and report the performance on holdout data using standard metrics and metrics allowing for indeterminate diagnoses. Finally, we identify the patterns used by our statistical classifiers to generate predictions and show that these patterns agree with observations from experimental studies. |
1804.00794 | Carina Curto | Carina Curto, Jesse Geneson, Katherine Morrison | Fixed points of competitive threshold-linear networks | 53 pages, 22 figures | null | null | null | q-bio.NC cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Threshold-linear networks (TLNs) are models of neural networks that consist
of simple, perceptron-like neurons and exhibit nonlinear dynamics that are
determined by the network's connectivity. The fixed points of a TLN, including
both stable and unstable equilibria, play a critical role in shaping its
emergent dynamics. In this work, we provide two novel characterizations for the
set of fixed points of a competitive TLN: the first is in terms of a simple
sign condition, while the second relies on the concept of domination. We apply
these results to a special family of TLNs, called combinatorial
threshold-linear networks (CTLNs), whose connectivity matrices are defined from
directed graphs. This leads us to prove a series of graph rules that enable one
to determine fixed points of a CTLN by analyzing the underlying graph.
Additionally, we study larger networks composed of smaller "building block"
subnetworks, and prove several theorems relating the fixed points of the full
network to those of its components. Our results provide the foundation for a
kind of "graphical calculus" to infer features of the dynamics from a network's
connectivity.
| [
{
"created": "Sun, 1 Apr 2018 18:37:05 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Aug 2018 17:56:10 GMT",
"version": "v2"
}
] | 2018-08-06 | [
[
"Curto",
"Carina",
""
],
[
"Geneson",
"Jesse",
""
],
[
"Morrison",
"Katherine",
""
]
] | Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics that are determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination. We apply these results to a special family of TLNs, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. This leads us to prove a series of graph rules that enable one to determine fixed points of a CTLN by analyzing the underlying graph. Additionally, we study larger networks composed of smaller "building block" subnetworks, and prove several theorems relating the fixed points of the full network to those of its components. Our results provide the foundation for a kind of "graphical calculus" to infer features of the dynamics from a network's connectivity. |
1705.07849 | Dimos Goundaroulis | Dimos Goundaroulis, Julien Dorier, Fabrizio Benedetti and Andrzej
Stasiak | Studies of global and local entanglements of individual protein chains
using the concept of knotoids | 9 pages, 8 figures with Supplementary Information | Scientific Reports 7, Article number: 6309 (2017) | 10.1038/s41598-017-06649-3 | null | q-bio.BM math.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study here global and local entanglements of open protein chains by
implementing the concept of knotoids. Knotoids were introduced in 2012 by
Vladimir Turaev as a generalization of knots in 3-dimensional space. More
precisely, knotoids are diagrams representing projections of open curves in 3D
space, in contrast to knot diagrams which represent projections of closed
curves in 3D space. The intrinsic difference with classical knot theory is that
the generalization provided by knotoids admits non-trivial topological
entanglement of the open curves provided that their geometry is frozen, as is
the case for crystallized proteins. Consequently, our approach does not require
the closure of chains into loops which implies that the geometry of analysed
chains does not need to be changed by closure in order to characterize their
topology. Our study revealed that the knotoid approach detects protein regions
that were classified earlier as knotted and also new, topologically interesting
regions that we classify as pre-knotted.
| [
{
"created": "Mon, 22 May 2017 16:54:41 GMT",
"version": "v1"
}
] | 2018-06-06 | [
[
"Goundaroulis",
"Dimos",
""
],
[
"Dorier",
"Julien",
""
],
[
"Benedetti",
"Fabrizio",
""
],
[
"Stasiak",
"Andrzej",
""
]
] | We study here global and local entanglements of open protein chains by implementing the concept of knotoids. Knotoids were introduced in 2012 by Vladimir Turaev as a generalization of knots in 3-dimensional space. More precisely, knotoids are diagrams representing projections of open curves in 3D space, in contrast to knot diagrams which represent projections of closed curves in 3D space. The intrinsic difference with classical knot theory is that the generalization provided by knotoids admits non-trivial topological entanglement of the open curves provided that their geometry is frozen, as is the case for crystallized proteins. Consequently, our approach does not require the closure of chains into loops, which implies that the geometry of analysed chains does not need to be changed by closure in order to characterize their topology. Our study revealed that the knotoid approach detects protein regions that were classified earlier as knotted and also new, topologically interesting regions that we classify as pre-knotted. |
2207.10815 | Nikolai Slavov | Laurent Gatto, Ruedi Aebersold, Juergen Cox, Vadim Demichev, Jason
Derks, Edward Emmott, Alexander M. Franks, Alexander R. Ivanov, Ryan T.
Kelly, Luke Khoury, Andrew Leduc, Michael J. MacCoss, Peter Nemes, David H.
Perlman, Aleksandra A. Petelski, Christopher M. Rose, Erwin M. Schoof,
Jennifer Van Eyk, Christophe Vanderaa, John R. Yates III, and Nikolai Slavov | Initial recommendations for performing, benchmarking, and reporting
single-cell proteomics experiments | Supporting website: https://single-cell.net/guidelines | Nature Methods, 20, 375--386 (2023) | 10.1038/s41592-023-01785-3 | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Analyzing proteins from single cells by tandem mass spectrometry (MS) has
become technically feasible. While such analysis has the potential to
accurately quantify thousands of proteins across thousands of single cells, the
accuracy and reproducibility of the results may be undermined by numerous
factors affecting experimental design, sample preparation, data acquisition,
and data analysis. Broadly accepted community guidelines and standardized
metrics will enhance rigor, data quality, and alignment between laboratories.
Here we propose best practices, quality controls, and data reporting
recommendations to assist in the broad adoption of reliable quantitative
workflows for single-cell proteomics.
| [
{
"created": "Tue, 19 Jul 2022 12:19:10 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Sep 2022 14:11:48 GMT",
"version": "v2"
}
] | 2023-03-14 | [
[
"Gatto",
"Laurent",
""
],
[
"Aebersold",
"Ruedi",
""
],
[
"Cox",
"Juergen",
""
],
[
"Demichev",
"Vadim",
""
],
[
"Derks",
"Jason",
""
],
[
"Emmott",
"Edward",
""
],
[
"Franks",
"Alexander M.",
""
],
[
"Ivanov",
"Alexander R.",
""
],
[
"Kelly",
"Ryan T.",
""
],
[
"Khoury",
"Luke",
""
],
[
"Leduc",
"Andrew",
""
],
[
"MacCoss",
"Michael J.",
""
],
[
"Nemes",
"Peter",
""
],
[
"Perlman",
"David H.",
""
],
[
"Petelski",
"Aleksandra A.",
""
],
[
"Rose",
"Christopher M.",
""
],
[
"Schoof",
"Erwin M.",
""
],
[
"Van Eyk",
"Jennifer",
""
],
[
"Vanderaa",
"Christophe",
""
],
[
"Yates",
"John R.",
"III"
],
[
"Slavov",
"Nikolai",
""
]
] | Analyzing proteins from single cells by tandem mass spectrometry (MS) has become technically feasible. While such analysis has the potential to accurately quantify thousands of proteins across thousands of single cells, the accuracy and reproducibility of the results may be undermined by numerous factors affecting experimental design, sample preparation, data acquisition, and data analysis. Broadly accepted community guidelines and standardized metrics will enhance rigor, data quality, and alignment between laboratories. Here we propose best practices, quality controls, and data reporting recommendations to assist in the broad adoption of reliable quantitative workflows for single-cell proteomics. |
2103.03274 | Ilenna Jones | Ilenna Simone Jones and Konrad Paul Kording | Do biological constraints impair dendritic computation? | 36 pages, 12 figures | null | 10.1016/j.neuroscience.2021.07.036 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Computations on the dendritic trees of neurons have important constraints.
Voltage-dependent conductances in dendrites are not similar to arbitrary
direct-current generation: they are the basis for dendritic nonlinearities,
and they do not allow converting positive currents into negative currents.
While it
has been speculated that the dendritic tree of a neuron can be seen as a
multi-layer neural network and it has been shown that such an architecture
could be computationally strong, we do not know if that computational strength
is preserved under these biological constraints. Here we simulate models of
dendritic computation with and without these constraints. We find that
dendritic model performance on interesting machine learning tasks is not hurt
by these constraints but may benefit from them. Our results suggest that single
real dendritic trees may be able to learn a surprisingly broad range of tasks.
| [
{
"created": "Thu, 4 Mar 2021 19:17:30 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Aug 2021 14:59:12 GMT",
"version": "v2"
}
] | 2021-08-12 | [
[
"Jones",
"Ilenna Simone",
""
],
[
"Kording",
"Konrad Paul",
""
]
] | Computations on the dendritic trees of neurons have important constraints. Voltage-dependent conductances in dendrites are not similar to arbitrary direct-current generation: they are the basis for dendritic nonlinearities, and they do not allow converting positive currents into negative currents. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these biological constraints. Here we simulate models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by these constraints but may benefit from them. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks. |
2310.13318 | Parisa Ahmadi Ghomroudi | Alessandro Grecucci, Parisa Ahmadi Ghomroudi, Bianca Monachesi, Irene
Messina | The neural signature of inner peace: morphometric differences between
high and low accepters | 48 pages, 6 figures, 3 tables | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Acceptance is an adaptive emotion regulation strategy characterized by an
open and non-judgmental attitude toward mental and sensory experiences. While a
few studies have investigated the neural correlates of acceptance in task-based
fMRI studies, a gap remains in the scientific literature on the dispositional
use of acceptance and how it is sedimented at the structural level. Therefore, the
aim of the present study is to investigate the neural and psychological
differences between infrequent acceptance users (i.e., low accepters) and
frequent users (i.e., high accepters). Another question is whether high and low
accepters differ in personality traits and emotional intelligence. To this aim,
we applied, for the first time, a data fusion unsupervised machine learning
approach (mCCA-jICA) to the gray matter (GM) and white matter (WM) of high
accepters (N = 50), and low accepters (N = 78) to possibly find joint GM-WM
differences in both modalities. Our results show that two covarying GM-WM
networks separate high from low accepters. The first network showed decreased
GM-WM concentration in a fronto-temporal-parietal circuit largely overlapping
with the Default Mode Network, while the second network showed increased GM-WM
concentration in portions of the orbito-frontal, temporal, and parietal areas,
related to a Central Executive Network. At the psychological level, the high
accepters display higher openness to experience compared to low accepters.
Overall, our findings suggest that high accepters compared to low accepters
differ in neural and psychological mechanisms. These findings confirm and
extend previous studies on the relevance of acceptance as a strategy associated
with well-being.
| [
{
"created": "Fri, 20 Oct 2023 07:23:25 GMT",
"version": "v1"
}
] | 2023-10-23 | [
[
"Grecucci",
"Alessandro",
""
],
[
"Ghomroudi",
"Parisa Ahmadi",
""
],
[
"Monachesi",
"Bianca",
""
],
[
"Messina",
"Irene",
""
]
] | Acceptance is an adaptive emotion regulation strategy characterized by an open and non-judgmental attitude toward mental and sensory experiences. While a few studies have investigated the neural correlates of acceptance in task-based fMRI studies, a gap remains in the scientific literature on the dispositional use of acceptance and how it is sedimented at the structural level. Therefore, the aim of the present study is to investigate the neural and psychological differences between infrequent acceptance users (i.e., low accepters) and frequent users (i.e., high accepters). Another question is whether high and low accepters differ in personality traits and emotional intelligence. To this aim, we applied, for the first time, a data fusion unsupervised machine learning approach (mCCA-jICA) to the gray matter (GM) and white matter (WM) of high accepters (N = 50), and low accepters (N = 78) to possibly find joint GM-WM differences in both modalities. Our results show that two covarying GM-WM networks separate high from low accepters. The first network showed decreased GM-WM concentration in a fronto-temporal-parietal circuit largely overlapping with the Default Mode Network, while the second network showed increased GM-WM concentration in portions of the orbito-frontal, temporal, and parietal areas, related to a Central Executive Network. At the psychological level, the high accepters display higher openness to experience compared to low accepters. Overall, our findings suggest that high accepters compared to low accepters differ in neural and psychological mechanisms. These findings confirm and extend previous studies on the relevance of acceptance as a strategy associated with well-being. |
2005.00913 | Sabrina Streipert | Jerzy Filar and Sabrina Streipert | Square Root Laws in Structured Fisheries | 7 figures, 9 pages main text, 6 pages supplementary material | null | 10.1016/j.jtbi.2022.111199 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the term net-proliferation in the context of fisheries and
establish relations between the proliferation and net-proliferation that are
economically and sustainably favored. The resulting square root laws are
analytically derived for species following the Beverton-Holt recurrence but, we
show, can also serve as reference points for other models. The practical
relevance of these analytically derived square root laws is tested on the
Barramundi fishery in the Southern Gulf of Carpentaria, Australia. A
Beverton-Holt model, including stochasticity to account for model uncertainty,
is fitted to a time series of catch and abundance index for this fishery.
Simulations show that, despite the stochasticity, the population levels remain
sustainable under the square root law. The application, with its inherited
model uncertainty, sparks a risk sensitivity analysis regarding the probability
of populations falling below an unsustainable threshold. Characterization of
such sensitivity helps in the understanding of both dangers of overfishing and
potential remedies.
| [
{
"created": "Sat, 2 May 2020 20:04:27 GMT",
"version": "v1"
}
] | 2022-10-24 | [
[
"Filar",
"Jerzy",
""
],
[
"Streipert",
"Sabrina",
""
]
] | We introduce the term net-proliferation in the context of fisheries and establish relations between the proliferation and net-proliferation that are economically and sustainably favored. The resulting square root laws are analytically derived for species following the Beverton-Holt recurrence but, we show, can also serve as reference points for other models. The practical relevance of these analytically derived square root laws is tested on the Barramundi fishery in the Southern Gulf of Carpentaria, Australia. A Beverton-Holt model, including stochasticity to account for model uncertainty, is fitted to a time series of catch and abundance index for this fishery. Simulations show that, despite the stochasticity, the population levels remain sustainable under the square root law. The application, with its inherited model uncertainty, sparks a risk sensitivity analysis regarding the probability of populations falling below an unsustainable threshold. Characterization of such sensitivity helps in the understanding of both dangers of overfishing and potential remedies. |
2312.16600 | Jinxian Wang | Weikang Jiang, Jinxian Wang, Jihong Guan and Shuigeng Zhou | scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive
Learning | null | null | null | null | q-bio.GN cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single-cell RNA sequencing (scRNA-seq) enables researchers to analyze gene
expression at the single-cell level. One important task in scRNA-seq data analysis
is unsupervised clustering, which helps identify distinct cell types, laying
down the foundation for other downstream analysis tasks. In this paper, we
propose a novel method called Cluster-aware Iterative Contrastive Learning
(CICL in short) for scRNA-seq data clustering, which utilizes an iterative
representation learning and clustering framework to progressively learn the
clustering structure of scRNA-seq data with a cluster-aware contrastive loss.
CICL consists of a Transformer encoder, a clustering head, a projection head
and a contrastive loss module. First, CICL extracts the feature vectors of the
original and augmented data by the Transformer encoder. Then, it computes the
clustering centroids by K-means and employs the Student t-distribution to
assign pseudo-labels to all cells in the clustering head. The projection head
uses a Multi-Layer Perceptron (MLP) to obtain projections of the augmented
data. At last, both pseudo-labels and projections are used in the contrastive
loss to guide the model training. This process proceeds iteratively, so the
clustering result improves progressively. Extensive experiments on 25
real-world scRNA-seq datasets show that CICL outperforms the SOTA methods.
Concretely, CICL surpasses the existing methods by 14% to 280% and by 5% to
133% on average in terms of the performance metrics ARI and NMI, respectively.
| [
{
"created": "Wed, 27 Dec 2023 14:50:59 GMT",
"version": "v1"
}
] | 2023-12-29 | [
[
"Jiang",
"Weikang",
""
],
[
"Wang",
"Jinxian",
""
],
[
"Guan",
"Jihong",
""
],
[
"Zhou",
"Shuigeng",
""
]
] | Single-cell RNA sequencing (scRNA-seq) enables researchers to analyze gene expression at the single-cell level. One important task in scRNA-seq data analysis is unsupervised clustering, which helps identify distinct cell types, laying down the foundation for other downstream analysis tasks. In this paper, we propose a novel method called Cluster-aware Iterative Contrastive Learning (CICL in short) for scRNA-seq data clustering, which utilizes an iterative representation learning and clustering framework to progressively learn the clustering structure of scRNA-seq data with a cluster-aware contrastive loss. CICL consists of a Transformer encoder, a clustering head, a projection head and a contrastive loss module. First, CICL extracts the feature vectors of the original and augmented data by the Transformer encoder. Then, it computes the clustering centroids by K-means and employs the Student t-distribution to assign pseudo-labels to all cells in the clustering head. The projection head uses a Multi-Layer Perceptron (MLP) to obtain projections of the augmented data. At last, both pseudo-labels and projections are used in the contrastive loss to guide the model training. This process proceeds iteratively, so the clustering result improves progressively. Extensive experiments on 25 real-world scRNA-seq datasets show that CICL outperforms the SOTA methods. Concretely, CICL surpasses the existing methods by 14% to 280% and by 5% to 133% on average in terms of the performance metrics ARI and NMI, respectively. |
2006.06622 | R.K. Brojen Singh | Athokpam Langlen Chanu and R.K. Brojen Singh | Stochastic approach to study control strategies of Covid-19 pandemic in
India | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | India is one of the worst affected countries by the Covid-19 pandemic at
present. We studied publicly available data of the Covid-19 patients in India
and analyzed possible impacts of quarantine and social distancing within the
stochastic framework of the SEQIR model to illustrate the controlling strategy
of the pandemic. Our simulation results clearly show that proper quarantine and
social distancing should be maintained from an early time just at the start of
the pandemic and should be continued till its end to effectively control the
pandemic. This calls for a more socially disciplined lifestyle in this
respect in the future. The demographic stochasticity, which is quite visible in
the system dynamics, has a critical role in regulating and controlling the
pandemic.
| [
{
"created": "Thu, 11 Jun 2020 17:21:57 GMT",
"version": "v1"
}
] | 2020-06-12 | [
[
"Chanu",
"Athokpam Langlen",
""
],
[
"Singh",
"R. K. Brojen",
""
]
] | India is one of the worst affected countries by the Covid-19 pandemic at present. We studied publicly available data of the Covid-19 patients in India and analyzed possible impacts of quarantine and social distancing within the stochastic framework of the SEQIR model to illustrate the controlling strategy of the pandemic. Our simulation results clearly show that proper quarantine and social distancing should be maintained from an early time just at the start of the pandemic and should be continued till its end to effectively control the pandemic. This calls for a more socially disciplined lifestyle in this respect in the future. The demographic stochasticity, which is quite visible in the system dynamics, has a critical role in regulating and controlling the pandemic. |
1911.12755 | Yujiang Wang | Jona Carmon, Jil Heege, Joe H Necus, Thomas W Owen, Gordon Pipa,
Marcus Kaiser, Peter N Taylor, Yujiang Wang | Reliability and comparability of human brain structural covariance
networks | null | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural covariance analysis is a widely used structural MRI analysis
method which characterises the co-relations of morphology between brain regions
over a group of subjects. To our knowledge, little has been investigated in
terms of the comparability of results between different data sets or the
reliability of results over the same subjects in different rescan sessions,
image resolutions, or FreeSurfer versions.
In terms of comparability, our results show substantial differences in the
structural covariance matrix between data sets of age- and sex-matched healthy
human adults. These differences persist after site correction, they are
exacerbated by low sample sizes, and they are most pronounced when using
average cortical thickness as a morphological measure. Downstream graph
theoretic analyses further show statistically significant differences.
In terms of reliability, substantial differences were also found when
comparing repeated scan sessions of the same subjects, and image resolutions
and FreeSurfer versions of the same image. We could further estimate the
relative measurement error and showed that it is largest when using thickness.
With simulated data, we argue that cortical thickness is least reliable because
of larger relative measurement errors.
Practically, we make the following recommendations: (1) pooling subjects
across sites into one group should be avoided, particularly if sites differ in
image resolutions, demographics, or preprocessing; (2) surface area and volume
should be preferred as morphological measures over cortical thickness; (3) a
large number of subjects should be used to estimate structural covariance; (4)
measurement error should be assessed where repeated measurements are available;
(5) if combining sites is critical, univariate site-correction is insufficient,
but error covariance should be explicitly measured and modelled.
| [
{
"created": "Thu, 28 Nov 2019 15:43:03 GMT",
"version": "v1"
},
{
"created": "Thu, 28 May 2020 20:38:20 GMT",
"version": "v2"
}
] | 2020-06-01 | [
[
"Carmon",
"Jona",
""
],
[
"Heege",
"Jil",
""
],
[
"Necus",
"Joe H",
""
],
[
"Owen",
"Thomas W",
""
],
[
"Pipa",
"Gordon",
""
],
[
"Kaiser",
"Marcus",
""
],
[
"Taylor",
"Peter N",
""
],
[
"Wang",
"Yujiang",
""
]
] | Structural covariance analysis is a widely used structural MRI analysis method which characterises the co-relations of morphology between brain regions over a group of subjects. To our knowledge, little has been investigated in terms of the comparability of results between different data sets or the reliability of results over the same subjects in different rescan sessions, image resolutions, or FreeSurfer versions. In terms of comparability, our results show substantial differences in the structural covariance matrix between data sets of age- and sex-matched healthy human adults. These differences persist after site correction, they are exacerbated by low sample sizes, and they are most pronounced when using average cortical thickness as a morphological measure. Downstream graph theoretic analyses further show statistically significant differences. In terms of reliability, substantial differences were also found when comparing repeated scan sessions of the same subjects, and image resolutions and FreeSurfer versions of the same image. We could further estimate the relative measurement error and showed that it is largest when using thickness. With simulated data, we argue that cortical thickness is least reliable because of larger relative measurement errors. Practically, we make the following recommendations: (1) pooling subjects across sites into one group should be avoided, particularly if sites differ in image resolutions, demographics, or preprocessing; (2) surface area and volume should be preferred as morphological measures over cortical thickness; (3) a large number of subjects should be used to estimate structural covariance; (4) measurement error should be assessed where repeated measurements are available; (5) if combining sites is critical, univariate site-correction is insufficient, but error covariance should be explicitly measured and modelled. |
1707.06353 | Boris Gutman | Dmitry Petrov, Boris A. Gutman, Shih-Hua (Julie) Yu, Theo G.M. van
Erp, Jessica A. Turner, Lianne Schmaal, Dick Veltman, Lei Wang, Kathryn
Alpert, Dmitry Isaev, Artemis Zavaliangos-Petropulu, Christopher R.K. Ching,
Vince Calhoun, David Glahn, Theodore D. Satterthwaite, Ole Andreas Andreasen,
Stefan Borgwardt, Fleur Howells, Nynke Groenewold, Aristotle Voineskos,
Joaquim Radua, Steven G. Potkin, Benedicto Crespo-Facorro, Diana
Tordesillas-Gutierrez, Li Shen, Irina Lebedeva, Gianfranco Spalletta, Gary
Donohoe, Peter Kochunov, Pedro G.P. Rosa, Anthony James, Udo Dannlowski,
Bernhard T. Baune, Andre Aleman, Ian H. Gotlib, Henrik Walter, Martin Walter,
Jair C. Soares, Stefan Ehrlich, Ruben C. Gur, N. Trung Doan, Ingrid Agartz,
Lars T. Westlye, Fabienne Harrisberger, Anita Riecher-Rossler, Anne Uhlmann,
Dan J. Stein, Erin W. Dickie, Edith Pomarol-Clotet, Paola Fuentes-Claramonte,
Erick Jorge Canales-Rodriguez, Raymond Salvador, Alexander J. Huang, Roberto
Roiz-Santianez, Shan Cong, Alexander Tomyshev, Fabrizio Piras, Daniela
Vecchio, Nerisa Banaj, Valentina Ciullo, Elliot Hong, Geraldo Busatto, Marcus
V. Zanetti, Mauricio H. Serpa, Simon Cervenka, Sinead Kelly, Dominik
Grotegerd, Matthew D. Sacchet, Ilya M. Veer, Meng Li, Mon-Ju Wu, Benson
Irungu, Esther Walton, and Paul M. Thompson | Machine Learning for Large-Scale Quality Control of 3D Shape Models in
Neuroimaging | Arxiv version of the MICCAI 2017 Machine Learning in Medical Imaging
workshop paper | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As very large studies of complex neuroimaging phenotypes become more common,
human quality assessment of MRI-derived data remains one of the last major
bottlenecks. Few attempts have so far been made to address this issue with
machine learning. In this work, we optimize predictive models of quality for
meshes representing deep brain structure shapes. We use standard vertex-wise
and global shape features computed homologously across 19 cohorts and over 7500
human-rated subjects, training kernelized Support Vector Machine and Gradient
Boosted Decision Trees classifiers to detect meshes of failing quality. Our
models generalize across datasets and diseases, reducing human workload by
30-70\%, or equivalently hundreds of human rater hours for datasets of
comparable size, with recall rates approaching inter-rater reliability.
| [
{
"created": "Thu, 20 Jul 2017 02:57:28 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jul 2017 21:44:19 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Jul 2017 18:08:56 GMT",
"version": "v3"
},
{
"created": "Mon, 7 Aug 2017 22:59:36 GMT",
"version": "v4"
}
] | 2017-08-09 | [
[
"Petrov",
"Dmitry",
""
],
[
"Gutman",
"Boris A.",
""
],
[
"Yu",
"Shih-Hua (Julie)",
""
],
[
"van Erp",
"Theo G. M.",
""
],
[
"Turner",
"Jessica A.",
""
],
[
"Schmaal",
"Lianne",
""
],
[
"Veltman",
"Dick",
""
],
[
"Wang",
"Lei",
""
],
[
"Alpert",
"Kathryn",
""
],
[
"Isaev",
"Dmitry",
""
],
[
"Zavaliangos-Petropulu",
"Artemis",
""
],
[
"Ching",
"Christopher R. K.",
""
],
[
"Calhoun",
"Vince",
""
],
[
"Glahn",
"David",
""
],
[
"Satterthwaite",
"Theodore D.",
""
],
[
"Andreasen",
"Ole Andreas",
""
],
[
"Borgwardt",
"Stefan",
""
],
[
"Howells",
"Fleur",
""
],
[
"Groenewold",
"Nynke",
""
],
[
"Voineskos",
"Aristotle",
""
],
[
"Radua",
"Joaquim",
""
],
[
"Potkin",
"Steven G.",
""
],
[
"Crespo-Facorro",
"Benedicto",
""
],
[
"Tordesillas-Gutierrez",
"Diana",
""
],
[
"Shen",
"Li",
""
],
[
"Lebedeva",
"Irina",
""
],
[
"Spalletta",
"Gianfranco",
""
],
[
"Donohoe",
"Gary",
""
],
[
"Kochunov",
"Peter",
""
],
[
"Rosa",
"Pedro G. P.",
""
],
[
"James",
"Anthony",
""
],
[
"Dannlowski",
"Udo",
""
],
[
"Baune",
"Bernhard T.",
""
],
[
"Aleman",
"Andre",
""
],
[
"Gotlib",
"Ian H.",
""
],
[
"Walter",
"Henrik",
""
],
[
"Walter",
"Martin",
""
],
[
"Soares",
"Jair C.",
""
],
[
"Ehrlich",
"Stefan",
""
],
[
"Gur",
"Ruben C.",
""
],
[
"Doan",
"N. Trung",
""
],
[
"Agartz",
"Ingrid",
""
],
[
"Westlye",
"Lars T.",
""
],
[
"Harrisberger",
"Fabienne",
""
],
[
"Riecher-Rossler",
"Anita",
""
],
[
"Uhlmann",
"Anne",
""
],
[
"Stein",
"Dan J.",
""
],
[
"Dickie",
"Erin W.",
""
],
[
"Pomarol-Clotet",
"Edith",
""
],
[
"Fuentes-Claramonte",
"Paola",
""
],
[
"Canales-Rodriguez",
"Erick Jorge",
""
],
[
"Salvador",
"Raymond",
""
],
[
"Huang",
"Alexander J.",
""
],
[
"Roiz-Santianez",
"Roberto",
""
],
[
"Cong",
"Shan",
""
],
[
"Tomyshev",
"Alexander",
""
],
[
"Piras",
"Fabrizio",
""
],
[
"Vecchio",
"Daniela",
""
],
[
"Banaj",
"Nerisa",
""
],
[
"Ciullo",
"Valentina",
""
],
[
"Hong",
"Elliot",
""
],
[
"Busatto",
"Geraldo",
""
],
[
"Zanetti",
"Marcus V.",
""
],
[
"Serpa",
"Mauricio H.",
""
],
[
"Cervenka",
"Simon",
""
],
[
"Kelly",
"Sinead",
""
],
[
"Grotegerd",
"Dominik",
""
],
[
"Sacchet",
"Matthew D.",
""
],
[
"Veer",
"Ilya M.",
""
],
[
"Li",
"Meng",
""
],
[
"Wu",
"Mon-Ju",
""
],
[
"Irungu",
"Benson",
""
],
[
"Walton",
"Esther",
""
],
[
"Thompson",
"Paul M.",
""
]
] | As very large studies of complex neuroimaging phenotypes become more common, human quality assessment of MRI-derived data remains one of the last major bottlenecks. Few attempts have so far been made to address this issue with machine learning. In this work, we optimize predictive models of quality for meshes representing deep brain structure shapes. We use standard vertex-wise and global shape features computed homologously across 19 cohorts and over 7500 human-rated subjects, training kernelized Support Vector Machine and Gradient Boosted Decision Trees classifiers to detect meshes of failing quality. Our models generalize across datasets and diseases, reducing human workload by 30-70\%, or equivalently hundreds of human rater hours for datasets of comparable size, with recall rates approaching inter-rater reliability. |
1603.06374 | Hendrik Richter | Hendrik Richter | Analyzing coevolutionary games with dynamic fitness landscapes | null | Proc. IEEE Congress on Evolutionary Computation, IEEE CEC 2016,
(Ed.: Y. S. Ong), IEEE Press, Piscataway, NJ, 2016, 610-616 | null | null | q-bio.PE cs.GT cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coevolutionary games cast players that may change their strategies as well as
their networks of interaction. In this paper a framework is introduced for
describing coevolutionary game dynamics by landscape models. It is shown that
coevolutionary games invoke dynamic landscapes. Numerical experiments are shown
for a prisoner's dilemma (PD) and a snow drift (SD) game that both use either
birth-death (BD) or death-birth (DB) strategy updating. The resulting
landscapes are analyzed with respect to modality and ruggedness.
| [
{
"created": "Mon, 21 Mar 2016 10:13:12 GMT",
"version": "v1"
}
] | 2016-11-30 | [
[
"Richter",
"Hendrik",
""
]
] | Coevolutionary games cast players that may change their strategies as well as their networks of interaction. In this paper a framework is introduced for describing coevolutionary game dynamics by landscape models. It is shown that coevolutionary games invoke dynamic landscapes. Numerical experiments are shown for a prisoner's dilemma (PD) and a snow drift (SD) game that both use either birth-death (BD) or death-birth (DB) strategy updating. The resulting landscapes are analyzed with respect to modality and ruggedness. |
q-bio/0512034 | Harald Atmanspacher | H. Atmanspacher and P. beim Graben | Contextual Emergence of Mental States from Neurodynamics | 20 pages, no figures, accepted for publication in Chaos and
Complexity Letters | null | null | null | q-bio.NC | null | The emergence of mental states from neural states by partitioning the neural
phase space is analyzed in terms of symbolic dynamics. Well-defined mental
states provide contexts inducing a criterion of structural stability for the
neurodynamics that can be implemented by particular partitions. This leads to
distinguished subshifts of finite type that are either cyclic or irreducible.
Cyclic shifts correspond to asymptotically stable fixed points or limit tori
whereas irreducible shifts are obtained from generating partitions of mixing
hyperbolic systems. These stability criteria are applied to the discussion of
neural correlates of consciousness, to the definition of macroscopic neural
states, and to aspects of the symbol grounding problem. In particular, it is
shown that compatible mental descriptions, topologically equivalent to the
neurodynamical description, emerge if the partition of the neural phase space
is generating. If this is not the case, mental descriptions are incompatible or
complementary. Consequences of this result for an integration or unification of
cognitive science or psychology, respectively, will be indicated.
| [
{
"created": "Mon, 19 Dec 2005 15:10:04 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Atmanspacher",
"H.",
""
],
[
"Graben",
"P. beim",
""
]
] | The emergence of mental states from neural states by partitioning the neural phase space is analyzed in terms of symbolic dynamics. Well-defined mental states provide contexts inducing a criterion of structural stability for the neurodynamics that can be implemented by particular partitions. This leads to distinguished subshifts of finite type that are either cyclic or irreducible. Cyclic shifts correspond to asymptotically stable fixed points or limit tori whereas irreducible shifts are obtained from generating partitions of mixing hyperbolic systems. These stability criteria are applied to the discussion of neural correlates of consciousness, to the definition of macroscopic neural states, and to aspects of the symbol grounding problem. In particular, it is shown that compatible mental descriptions, topologically equivalent to the neurodynamical description, emerge if the partition of the neural phase space is generating. If this is not the case, mental descriptions are incompatible or complementary. Consequences of this result for an integration or unification of cognitive science or psychology, respectively, will be indicated. |
2402.05016 | Osho Rawal | Osho Rawal, Berk Turhan, Irene Font Peradejordi, Shreya Chandrasekar,
Selim Kalayci, Sacha Gnjatic, Jeffrey Johnson, Mehdi Bouhaddou, Zeynep H.
G\"um\"u\c{s} | PhosNetVis: a web-based tool for fast kinase-substrate enrichment
analysis and interactive 2D/3D network visualizations of phosphoproteomics
data | Added new author, added references, changed fig1 and fig4 | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Protein phosphorylation involves the reversible modification of a protein
(substrate) residue by another protein (kinase). Liquid chromatography-mass
spectrometry studies are rapidly generating massive protein phosphorylation
datasets across multiple conditions. Researchers then must infer kinases
responsible for changes in phosphosites of each substrate. However, tools that
infer kinase-substrate interactions (KSIs) are not optimized to interactively
explore the resulting large and complex networks, significant phosphosites, and
states. There is thus an unmet need for a tool that facilitates user-friendly
analysis, interactive exploration, visualization, and communication of
phosphoproteomics datasets. We present PhosNetVis, a web-based tool for
researchers of all computational skill levels to easily infer, generate and
interactively explore KSI networks in 2D or 3D by streamlining
phosphoproteomics data analysis steps within a single tool. PhosNetVis lowers
barriers for researchers in rapidly generating high-quality visualizations to
gain biological insights from their phosphoproteomics datasets. It is available
at: https://gumuslab.github.io/PhosNetVis/
| [
{
"created": "Wed, 7 Feb 2024 16:34:10 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2024 16:00:45 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Jul 2024 19:30:05 GMT",
"version": "v3"
}
] | 2024-07-08 | [
[
"Rawal",
"Osho",
""
],
[
"Turhan",
"Berk",
""
],
[
"Peradejordi",
"Irene Font",
""
],
[
"Chandrasekar",
"Shreya",
""
],
[
"Kalayci",
"Selim",
""
],
[
"Gnjatic",
"Sacha",
""
],
[
"Johnson",
"Jeffrey",
""
],
[
"Bouhaddou",
"Mehdi",
""
],
[
"Gümüş",
"Zeynep H.",
""
]
] | Protein phosphorylation involves the reversible modification of a protein (substrate) residue by another protein (kinase). Liquid chromatography-mass spectrometry studies are rapidly generating massive protein phosphorylation datasets across multiple conditions. Researchers then must infer kinases responsible for changes in phosphosites of each substrate. However, tools that infer kinase-substrate interactions (KSIs) are not optimized to interactively explore the resulting large and complex networks, significant phosphosites, and states. There is thus an unmet need for a tool that facilitates user-friendly analysis, interactive exploration, visualization, and communication of phosphoproteomics datasets. We present PhosNetVis, a web-based tool for researchers of all computational skill levels to easily infer, generate and interactively explore KSI networks in 2D or 3D by streamlining phosphoproteomics data analysis steps within a single tool. PhosNetVis lowers barriers for researchers in rapidly generating high-quality visualizations to gain biological insights from their phosphoproteomics datasets. It is available at: https://gumuslab.github.io/PhosNetVis/ |
1501.04665 | Mike Steel Prof. | Elliott Sober and Mike Steel | Similarities as Evidence for Common Ancestry -- A Likelihood
Epistemology | 23 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Darwin claims in the {\em Origin} that similarity is evidence for common
ancestry, but that adaptive similarities are "almost valueless" as evidence.
This claim seems reasonable for some adaptive similarities but not for others.
Here we clarify and evaluate these and related matters by using the law of
likelihood as an analytic tool and by considering mathematical models of three
evolutionary processes -- directional selection, stabilizing selection, and
drift. Our results apply both to Darwin's theory of evolution and to modern
evolutionary biology.
| [
{
"created": "Mon, 19 Jan 2015 22:52:37 GMT",
"version": "v1"
}
] | 2015-01-21 | [
[
"Sober",
"Elliott",
""
],
[
"Steel",
"Mike",
""
]
] | Darwin claims in the {\em Origin} that similarity is evidence for common ancestry, but that adaptive similarities are "almost valueless" as evidence. This claim seems reasonable for some adaptive similarities but not for others. Here we clarify and evaluate these and related matters by using the law of likelihood as an analytic tool and by considering mathematical models of three evolutionary processes -- directional selection, stabilizing selection, and drift. Our results apply both to Darwin's theory of evolution and to modern evolutionary biology. |