id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2102.03823 | Jacek Mi\c{e}kisz | Jacek Mi\c{e}kisz and Marek Bodnar | Evolution of populations with strategy-dependent time delays | null | Phys. Rev. E 103, 012414 (2021) | 10.1103/PhysRevE.103.012414 | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | We study effects of strategy-dependent time delays on equilibria of evolving
populations. It is well known that time delays may cause oscillations in
dynamical systems. Here we report a novel behavior. We show that microscopic
models of evolutionary games with strategy-dependent time delays lead to a new
type of replicator dynamics. It describes the time evolution of fractions of
the population playing given strategies and the size of the population. Unlike
in all previous models, stationary states of such dynamics depend continuously
on time delays. We show that in games with an interior stationary state (a
globally asymptotically stable equilibrium in the standard replicator
dynamics), at certain time delays, it may disappear or there may appear another
interior stationary state. In the Prisoner's Dilemma game, for time delays of
cooperation smaller than time delays of defection, there appears an unstable
interior equilibrium and therefore for some initial conditions, the population
converges to the homogeneous state with just cooperators.
| [
{
"created": "Sun, 7 Feb 2021 15:42:32 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Miȩkisz",
"Jacek",
""
],
[
"Bodnar",
"Marek",
""
]
] | We study effects of strategy-dependent time delays on equilibria of evolving populations. It is well known that time delays may cause oscillations in dynamical systems. Here we report a novel behavior. We show that microscopic models of evolutionary games with strategy-dependent time delays lead to a new type of replicator dynamics. It describes the time evolution of fractions of the population playing given strategies and the size of the population. Unlike in all previous models, stationary states of such dynamics depend continuously on time delays. We show that in games with an interior stationary state (a globally asymptotically stable equilibrium in the standard replicator dynamics), at certain time delays, it may disappear or there may appear another interior stationary state. In the Prisoner's Dilemma game, for time delays of cooperation smaller than time delays of defection, there appears an unstable interior equilibrium and therefore for some initial conditions, the population converges to the homogeneous state with just cooperators. |
2012.15209 | Pablo Guillermo Bolcatto | Nadia L. Barreiro, Tzype Govezensky, Pablo G. Bolcatto, Rafael A.
Barrio | Detecting infected asymptomatic cases in a stochastic model for spread
of Covid-19. The case of Argentina | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | We have studied the dynamic evolution of the Covid-19 pandemic in Argentina.
The marked heterogeneity in population density and the very extensive geography
of the country are a challenge in themselves. Standard compartment models fail
when they are implemented in the Argentine case. We extended a previous
successful model to describe the geographical spread of the AH1N1 influenza
epidemic of 2009 in two essential ways: we added a stochastic local mobility
mechanism, and we introduced a new compartment in order to take into account
the isolation of infected asymptomatic detected people. Two fundamental
parameters drive the dynamics: the time elapsed between contagion and
isolation of infected individuals ($\alpha$) and the ratio of people isolated
over the total infected ones ($p$). The evolution is more sensitive to the
$p-$parameter. The model not only reproduces the real data but also predicts
the second wave before the former vanishes. This effect is intrinsic to
extensive countries with heterogeneous population density and interconnection.
The model presented here is thus a good predictor of the effects of public
policies such as the vaccination campaigns now starting around the world.
| [
{
"created": "Wed, 30 Dec 2020 16:06:43 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"Barreiro",
"Nadia L.",
""
],
[
"Govezensky",
"Tzype",
""
],
[
"Bolcatto",
"Pablo G.",
""
],
[
"Barrio",
"Rafael A.",
""
]
] | We have studied the dynamic evolution of the Covid-19 pandemic in Argentina. The marked heterogeneity in population density and the very extensive geography of the country are a challenge in themselves. Standard compartment models fail when they are implemented in the Argentine case. We extended a previous successful model to describe the geographical spread of the AH1N1 influenza epidemic of 2009 in two essential ways: we added a stochastic local mobility mechanism, and we introduced a new compartment in order to take into account the isolation of infected asymptomatic detected people. Two fundamental parameters drive the dynamics: the time elapsed between contagion and isolation of infected individuals ($\alpha$) and the ratio of people isolated over the total infected ones ($p$). The evolution is more sensitive to the $p-$parameter. The model not only reproduces the real data but also predicts the second wave before the former vanishes. This effect is intrinsic to extensive countries with heterogeneous population density and interconnection. The model presented here is thus a good predictor of the effects of public policies such as the vaccination campaigns now starting around the world. |
1511.09367 | E. Hern\'andez-Lemus | E. Hern\'andez-Lemus, K. Baca-L\'opez, R. Lemus, R. Garc\'ia-Herrera | The role of master regulators in gene regulatory networks | 8 pages, 4 figures | Papers in Physics 7, 070011 (2015) | 10.4279/PIP.070011 | null | q-bio.MN nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Gene regulatory networks present a wide variety of dynamical responses to
intrinsic and extrinsic perturbations. Arguably, one of the most important of
such coordinated responses is that of amplification cascades, in which
activation of a few key-responsive transcription factors (termed master
regulators, MRs) leads to a large series of transcriptional activation events.
This is because master regulators are transcription factors controlling the
expression of other transcription factor molecules, and so on. MRs hold a
central position related to transcriptional dynamics and control of gene
regulatory networks and are often involved in complex feedback and feedforward
loops inducing non-trivial dynamics. Recent studies have pointed to the
myocyte enhancing factor 2C (MEF2C, also known as MADS box transcription
enhancer factor 2, polypeptide C) as one such master regulator
involved in the pathogenesis of primary breast cancer. In this work, we perform
an integrative genomic analysis of the transcriptional regulation activity of
MEF2C and its target genes to evaluate to what extent these molecules induce
collective responses leading to gene expression deregulation and
carcinogenesis. We also analyze a number of induced dynamic responses, in
particular those associated with transcriptional bursts, and nonlinear
cascading to evaluate the influence they may have in malignant phenotypes and
cancer.
| [
{
"created": "Sat, 17 Oct 2015 22:32:18 GMT",
"version": "v1"
}
] | 2015-12-01 | [
[
"Hernández-Lemus",
"E.",
""
],
[
"Baca-López",
"K.",
""
],
[
"Lemus",
"R.",
""
],
[
"García-Herrera",
"R.",
""
]
] | Gene regulatory networks present a wide variety of dynamical responses to intrinsic and extrinsic perturbations. Arguably, one of the most important of such coordinated responses is that of amplification cascades, in which activation of a few key-responsive transcription factors (termed master regulators, MRs) leads to a large series of transcriptional activation events. This is because master regulators are transcription factors controlling the expression of other transcription factor molecules, and so on. MRs hold a central position related to transcriptional dynamics and control of gene regulatory networks and are often involved in complex feedback and feedforward loops inducing non-trivial dynamics. Recent studies have pointed to the myocyte enhancing factor 2C (MEF2C, also known as MADS box transcription enhancer factor 2, polypeptide C) as one such master regulator involved in the pathogenesis of primary breast cancer. In this work, we perform an integrative genomic analysis of the transcriptional regulation activity of MEF2C and its target genes to evaluate to what extent these molecules induce collective responses leading to gene expression deregulation and carcinogenesis. We also analyze a number of induced dynamic responses, in particular those associated with transcriptional bursts, and nonlinear cascading to evaluate the influence they may have in malignant phenotypes and cancer. |
2009.03450 | Nicolas Franco | Nicolas Franco | Covid-19 Belgium: Extended SEIR-QD model with nursing homes and
long-term scenarios-based forecasts | 21 pages, 13 figures, minor revision | Epidemics 37 (2021) 100490 | 10.1016/j.epidem.2021.100490 | null | q-bio.PE physics.soc-ph stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Following the spread of the COVID-19 pandemic and pending the establishment
of vaccination campaigns, several non-pharmaceutical interventions such as
partial and full lockdown, quarantine and measures of physical distancing have
been imposed in order to reduce the spread of the disease and to relieve the
pressure on the healthcare system. Mathematical models are important tools for
estimating the impact of these interventions, for monitoring the current
evolution of the epidemic at a national level and for estimating the potential
long-term consequences of relaxation of measures. In this paper, we model the
evolution of the COVID-19 epidemic in Belgium with a deterministic
age-structured extended compartmental model. Our model gives special
consideration to nursing homes, which are modelled as separate entities from
the general population in order to capture the specific delay and dynamics
within these entities. The model integrates social contact data and is fitted
to hospitalisation data (admissions and discharges), to the daily number of
COVID-19 deaths (with a distinction between general population and nursing home
related deaths) and to results from serological studies, with a sensitivity
analysis based on a Bayesian approach. We present the situation as of November
2020, with estimates of several characteristics of COVID-19 deduced from the
model. We also present several mid-term and long-term projections based on
scenarios of reinforcement or relaxation of social contacts for different
general sectors, though substantial uncertainty remains.
| [
{
"created": "Mon, 7 Sep 2020 23:02:49 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Nov 2020 18:33:22 GMT",
"version": "v2"
},
{
"created": "Wed, 19 May 2021 20:03:48 GMT",
"version": "v3"
},
{
"created": "Mon, 30 Aug 2021 12:42:48 GMT",
"version": "v4"
}
] | 2021-09-08 | [
[
"Franco",
"Nicolas",
""
]
] | Following the spread of the COVID-19 pandemic and pending the establishment of vaccination campaigns, several non-pharmaceutical interventions such as partial and full lockdown, quarantine and measures of physical distancing have been imposed in order to reduce the spread of the disease and to relieve the pressure on the healthcare system. Mathematical models are important tools for estimating the impact of these interventions, for monitoring the current evolution of the epidemic at a national level and for estimating the potential long-term consequences of relaxation of measures. In this paper, we model the evolution of the COVID-19 epidemic in Belgium with a deterministic age-structured extended compartmental model. Our model gives special consideration to nursing homes, which are modelled as separate entities from the general population in order to capture the specific delay and dynamics within these entities. The model integrates social contact data and is fitted to hospitalisation data (admissions and discharges), to the daily number of COVID-19 deaths (with a distinction between general population and nursing home related deaths) and to results from serological studies, with a sensitivity analysis based on a Bayesian approach. We present the situation as of November 2020, with estimates of several characteristics of COVID-19 deduced from the model. We also present several mid-term and long-term projections based on scenarios of reinforcement or relaxation of social contacts for different general sectors, though substantial uncertainty remains. |
2012.00157 | Zachary Kilpatrick PhD | Subekshya Bidari and Zachary P Kilpatrick | Hive geometry shapes the recruitment rate of honeybee colonies | 32 pages; 10 figures | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Honey bees make decisions regarding foraging and nest-site selection in
groups ranging from hundreds to thousands of individuals. To effectively make
these decisions bees need to communicate within a spatially distributed group.
However, the spatiotemporal dynamics of honey bee communication have been
mostly overlooked in models of collective decisions, which focus primarily on
mean-field models of opinion dynamics. We analyze how the spatial properties of the
nest or hive, and the movement of individuals with different belief states
(uncommitted or committed) therein affect the rate of information transmission
using spatially-extended models of collective decision-making within a hive.
Honeybees waggle-dance to recruit conspecifics with an intensity that is a
threshold nonlinear function of the waggler concentration. Our models range
from treating the hive as a chain of discrete patches to a continuous line
(long narrow hive). The combination of population-thresholded recruitment and
compartmentalized populations generates tradeoffs between rapid information
propagation under strong population dispersal and recruitment failures
resulting from excessive population diffusion. It also creates an effective
colony-level signal-detection mechanism whereby recruitment to low-quality
objectives is blocked.
| [
{
"created": "Mon, 30 Nov 2020 23:10:54 GMT",
"version": "v1"
}
] | 2020-12-02 | [
[
"Bidari",
"Subekshya",
""
],
[
"Kilpatrick",
"Zachary P",
""
]
] | Honey bees make decisions regarding foraging and nest-site selection in groups ranging from hundreds to thousands of individuals. To effectively make these decisions bees need to communicate within a spatially distributed group. However, the spatiotemporal dynamics of honey bee communication have been mostly overlooked in models of collective decisions, which focus primarily on mean-field models of opinion dynamics. We analyze how the spatial properties of the nest or hive, and the movement of individuals with different belief states (uncommitted or committed) therein affect the rate of information transmission using spatially-extended models of collective decision-making within a hive. Honeybees waggle-dance to recruit conspecifics with an intensity that is a threshold nonlinear function of the waggler concentration. Our models range from treating the hive as a chain of discrete patches to a continuous line (long narrow hive). The combination of population-thresholded recruitment and compartmentalized populations generates tradeoffs between rapid information propagation under strong population dispersal and recruitment failures resulting from excessive population diffusion. It also creates an effective colony-level signal-detection mechanism whereby recruitment to low-quality objectives is blocked. |
1708.02809 | Tiffany Leung | Tiffany Leung, Barry D Hughes, Federico Frascoli, Patricia T Campbell,
James M McCaw | Infection-acquired versus vaccine-acquired immunity in an SIRWS model | 21 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite high vaccine coverage, pertussis has re-emerged as a public health
concern in many countries. One hypothesis posed for re-emergence is the waning
of immunity. In some disease systems, the process of waning immunity can be
non-linear, involving a complex relationship between the duration of immunity
and subsequent boosting of immunity through asymptomatic re-exposure.
We present and analyse a model of infectious disease transmission to examine
the interplay between infection and immunity. By allowing the duration of
infection-acquired immunity to differ from that of vaccine-acquired immunity,
we explore the impact of the difference in durations on long-term disease
patterns and prevalence of infection.
Our model demonstrates that vaccination may induce cyclic behaviour, and its
ability to reduce the infection prevalence increases with both the duration of
infection-acquired immunity and duration of vaccine-acquired immunity. We find
that increasing vaccine coverage, while capable of leading to an increase in
overall transmission, always results in a reduction in prevalence of primary
infections, with epidemic cycles characterised by a longer interepidemic period
and taller peaks.
Our results show that the epidemiological patterns of an infectious disease
may change considerably when the duration of vaccine-acquired immunity differs
from that of infection-acquired immunity. Our study highlights that for any
particular disease and associated vaccine, a detailed understanding of the
duration of protection and how that duration is influenced by infection
prevalence is important as we seek to optimise vaccination strategies.
| [
{
"created": "Wed, 9 Aug 2017 12:29:19 GMT",
"version": "v1"
}
] | 2017-08-10 | [
[
"Leung",
"Tiffany",
""
],
[
"Hughes",
"Barry D",
""
],
[
"Frascoli",
"Federico",
""
],
[
"Campbell",
"Patricia T",
""
],
[
"McCaw",
"James M",
""
]
] | Despite high vaccine coverage, pertussis has re-emerged as a public health concern in many countries. One hypothesis posed for re-emergence is the waning of immunity. In some disease systems, the process of waning immunity can be non-linear, involving a complex relationship between the duration of immunity and subsequent boosting of immunity through asymptomatic re-exposure. We present and analyse a model of infectious disease transmission to examine the interplay between infection and immunity. By allowing the duration of infection-acquired immunity to differ from that of vaccine-acquired immunity, we explore the impact of the difference in durations on long-term disease patterns and prevalence of infection. Our model demonstrates that vaccination may induce cyclic behaviour, and its ability to reduce the infection prevalence increases with both the duration of infection-acquired immunity and duration of vaccine-acquired immunity. We find that increasing vaccine coverage, while capable of leading to an increase in overall transmission, always results in a reduction in prevalence of primary infections, with epidemic cycles characterised by a longer interepidemic period and taller peaks. Our results show that the epidemiological patterns of an infectious disease may change considerably when the duration of vaccine-acquired immunity differs from that of infection-acquired immunity. Our study highlights that for any particular disease and associated vaccine, a detailed understanding of the duration of protection and how that duration is influenced by infection prevalence is important as we seek to optimise vaccination strategies. |
2008.03881 | Ulises Pereira | Xiao-Jing Wang, Ulises Pereira, Marcello G.P. Rosa, Henry Kennedy | Brain Connectomes Come of Age | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Databases of directed- and weighted- connectivity for mouse, macaque and
marmoset monkeys, have recently become available and begun to be used to build
dynamical models. A hierarchical organization can be defined based on laminar
patterns of cortical connections, possibly improved by thalamocortical
projections. A large-scale model of the macaque cortex endowed with a laminar
structure accounts for layer-dependent and frequency-modulated interplays
between bottom-up and top-down processes. Signal propagation in a version of
the model with spiking neurons displays a threshold of stimulus amplitude for
the activity to gain access to the prefrontal cortex, reminiscent of the
ignition phenomenon associated with conscious perception. These two examples
illustrate how connectomics may inform theory leading to discoveries.
Computational modeling raises open questions for future empirical research, in
a back-and-forth collaboration of experimentalists and theorists.
| [
{
"created": "Mon, 10 Aug 2020 03:20:08 GMT",
"version": "v1"
}
] | 2020-08-11 | [
[
"Wang",
"Xiao-Jing",
""
],
[
"Pereira",
"Ulises",
""
],
[
"Rosa",
"Marcello G. P.",
""
],
[
"Kennedy",
"Henry",
""
]
] | Databases of directed and weighted connectivity for mouse, macaque and marmoset monkeys have recently become available and begun to be used to build dynamical models. A hierarchical organization can be defined based on laminar patterns of cortical connections, possibly improved by thalamocortical projections. A large-scale model of the macaque cortex endowed with a laminar structure accounts for layer-dependent and frequency-modulated interplays between bottom-up and top-down processes. Signal propagation in a version of the model with spiking neurons displays a threshold of stimulus amplitude for the activity to gain access to the prefrontal cortex, reminiscent of the ignition phenomenon associated with conscious perception. These two examples illustrate how connectomics may inform theory leading to discoveries. Computational modeling raises open questions for future empirical research, in a back-and-forth collaboration of experimentalists and theorists. |
1603.09397 | Frank Wen | Frank Wen, Trevor Bedford, Sarah Cobey | Explaining the geographic origins of seasonal influenza A (H3N2) | Included analyses of more complex metapopulations. Added
clarifications to the text | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most antigenically novel and evolutionarily successful strains of seasonal
influenza A (H3N2) originate in East, South, and Southeast Asia. To understand
this pattern, we simulated the ecological and evolutionary dynamics of
influenza in a host metapopulation representing the temperate north, tropics,
and temperate south. Although seasonality and air traffic are frequently used
to explain global migratory patterns of influenza, we find that other factors
may have a comparable or greater impact. Notably, a region's basic reproductive
number ($R_0$) strongly affects the antigenic evolution of its viral population
and the probability that its strains will spread and fix globally: a 17-28%
higher $R_0$ in one region can explain the observed patterns. Seasonality, in
contrast, increases the probability that a tropical (less seasonal) population
will export evolutionarily successful strains but alone does not predict that
these strains will be antigenically advanced. The relative sizes of different
host populations, their birth and death rates, and the region in which H3N2
first appears affect influenza's phylogeography in different but relatively
minor ways. These results suggest general principles that dictate the spatial
dynamics of antigenically evolving pathogens and offer predictions for how
changes in human ecology might affect influenza evolution.
| [
{
"created": "Wed, 30 Mar 2016 22:05:39 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Aug 2016 02:48:34 GMT",
"version": "v2"
}
] | 2016-08-31 | [
[
"Wen",
"Frank",
""
],
[
"Bedford",
"Trevor",
""
],
[
"Cobey",
"Sarah",
""
]
] | Most antigenically novel and evolutionarily successful strains of seasonal influenza A (H3N2) originate in East, South, and Southeast Asia. To understand this pattern, we simulated the ecological and evolutionary dynamics of influenza in a host metapopulation representing the temperate north, tropics, and temperate south. Although seasonality and air traffic are frequently used to explain global migratory patterns of influenza, we find that other factors may have a comparable or greater impact. Notably, a region's basic reproductive number ($R_0$) strongly affects the antigenic evolution of its viral population and the probability that its strains will spread and fix globally: a 17-28% higher $R_0$ in one region can explain the observed patterns. Seasonality, in contrast, increases the probability that a tropical (less seasonal) population will export evolutionarily successful strains but alone does not predict that these strains will be antigenically advanced. The relative sizes of different host populations, their birth and death rates, and the region in which H3N2 first appears affect influenza's phylogeography in different but relatively minor ways. These results suggest general principles that dictate the spatial dynamics of antigenically evolving pathogens and offer predictions for how changes in human ecology might affect influenza evolution. |
1512.03955 | Rui Li | Rui Li, Robert Perneczky, Alexander Drzezga, Stefan Kramer (for the
Alzheimer's Disease Neuroimaging Initiative) | Survival analysis, the infinite Gaussian mixture model, FDG-PET and
non-imaging data in the prediction of progression from mild cognitive
impairment | null | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method to discover interesting brain regions in [18F]
fluorodeoxyglucose positron emission tomography (PET) scans, also showing the
benefits when PET scans are used in combination with non-imaging variables. The
discriminative brain regions facilitate a better understanding of Alzheimer's
disease (AD) progression, and they can also be used for predicting conversion
from mild cognitive impairment (MCI) to AD. A survival analysis (Cox regression)
and an infinite Gaussian mixture model (IGMM) are introduced to identify the
informative brain regions, which can be further used to make a prediction of
conversion (in two years) from MCI to AD using only the baseline PET scan.
Further, the predictive accuracy can be enhanced when non-imaging variables are
used together with identified informative brain voxels. The results suggest
that PET scan imaging data is more predictive than other non-imaging data,
revealing even better performance when both imaging and non-imaging data are
combined.
| [
{
"created": "Sat, 12 Dec 2015 20:01:57 GMT",
"version": "v1"
}
] | 2019-08-14 | [
[
"Li",
"Rui",
"",
"for the\n Alzheimer's Disease Neuroimaging Initiative"
],
[
"Perneczky",
"Robert",
"",
"for the\n Alzheimer's Disease Neuroimaging Initiative"
],
[
"Drzezga",
"Alexander",
"",
"for the\n Alzheimer's Disease Neuroimaging Initiative"
],
... | We present a method to discover interesting brain regions in [18F] fluorodeoxyglucose positron emission tomography (PET) scans, also showing the benefits when PET scans are used in combination with non-imaging variables. The discriminative brain regions facilitate a better understanding of Alzheimer's disease (AD) progression, and they can also be used for predicting conversion from mild cognitive impairment (MCI) to AD. A survival analysis (Cox regression) and an infinite Gaussian mixture model (IGMM) are introduced to identify the informative brain regions, which can be further used to make a prediction of conversion (in two years) from MCI to AD using only the baseline PET scan. Further, the predictive accuracy can be enhanced when non-imaging variables are used together with identified informative brain voxels. The results suggest that PET scan imaging data is more predictive than other non-imaging data, revealing even better performance when both imaging and non-imaging data are combined. |
2407.03381 | Devam Mondal | Devam Mondal, Atharva Inamdar | SeqMate: A Novel Large Language Model Pipeline for Automating RNA
Sequencing | null | null | null | null | q-bio.GN cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | RNA sequencing techniques, like bulk RNA-seq and Single Cell (sc) RNA-seq,
are critical tools for the biologist looking to analyze the genetic
activity/transcriptome of a tissue or cell during an experimental procedure.
Platforms like Illumina's next-generation sequencing (NGS) are used to produce
the raw data for this experimental procedure. This raw FASTQ data must then be
prepared via a complex series of data manipulations by bioinformaticians. This
process currently takes place on an unwieldy textual user interface like a
terminal/command line that requires the user to install and import multiple
program packages, preventing the untrained biologist from initiating data
analysis. Open-source platforms like Galaxy have produced a more user-friendly
pipeline, yet the visual interface remains cluttered and highly technical,
remaining uninviting for the natural scientist. To address this, SeqMate is a
user-friendly tool that allows for one-click analytics by utilizing the power
of a large language model (LLM) to automate both data preparation and analysis
(differential expression, trajectory analysis, etc.). Furthermore, by utilizing
the power of generative AI, SeqMate is also capable of analyzing such findings
and producing written reports of upregulated/downregulated/user-prompted genes
with sources cited from known repositories like PubMed, PDB, and Uniprot.
| [
{
"created": "Tue, 2 Jul 2024 20:28:30 GMT",
"version": "v1"
}
] | 2024-07-08 | [
[
"Mondal",
"Devam",
""
],
[
"Inamdar",
"Atharva",
""
]
] | RNA sequencing techniques, like bulk RNA-seq and Single Cell (sc) RNA-seq, are critical tools for the biologist looking to analyze the genetic activity/transcriptome of a tissue or cell during an experimental procedure. Platforms like Illumina's next-generation sequencing (NGS) are used to produce the raw data for this experimental procedure. This raw FASTQ data must then be prepared via a complex series of data manipulations by bioinformaticians. This process currently takes place on an unwieldy textual user interface like a terminal/command line that requires the user to install and import multiple program packages, preventing the untrained biologist from initiating data analysis. Open-source platforms like Galaxy have produced a more user-friendly pipeline, yet the visual interface remains cluttered and highly technical, remaining uninviting for the natural scientist. To address this, SeqMate is a user-friendly tool that allows for one-click analytics by utilizing the power of a large language model (LLM) to automate both data preparation and analysis (differential expression, trajectory analysis, etc.). Furthermore, by utilizing the power of generative AI, SeqMate is also capable of analyzing such findings and producing written reports of upregulated/downregulated/user-prompted genes with sources cited from known repositories like PubMed, PDB, and Uniprot. |
q-bio/0510034 | Partha Mitra | Partha P. Mitra | Bending a slab of neural tissue | 6 pages, 2 figures | null | null | null | q-bio.TO physics.bio-ph q-bio.NC | null | In comparative and developmental neuroanatomy one encounters questions
regarding the deformation of neural tissue under stress. The motivation of this
note is an observation (Barbas {\it et al}) that at cortical folds or gyri, the
layers of neural tissue show relative thickening or thinning of upper or deep
layers. In general, the material properties of a slab of neural tissue are not
known, and even if known, would probably lead to a difficult problem in
elasticity theory. Here a simple argument is presented to show that bending an
elastic slab should produce a relative thickening of the layers on the inside
of the bend. The argument is based on the incompressibility of the material and
should therefore be fairly robust.
| [
{
"created": "Mon, 17 Oct 2005 16:39:28 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Nov 2006 04:23:49 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Mitra",
"Partha P.",
""
]
] | In comparative and developmental neuroanatomy one encounters questions regarding the deformation of neural tissue under stress. The motivation of this note is an observation (Barbas {\it et al}) that at cortical folds or gyri, the layers of neural tissue show relative thickening or thinning of upper or deep layers. In general, the material properties of a slab of neural tissue are not known, and even if known, would probably lead to a difficult problem in elasticity theory. Here a simple argument is presented to show that bending an elastic slab should produce a relative thickening of the layers on the inside of the bend. The argument is based on the incompressibility of the material and should therefore be fairly robust. |
1701.04171 | Hyekyoung Lee | Hyekyoung Lee, Zhiwei Ma, Yuan Wang, Moo K. Chung | Topological Distances between Networks and Its Application to Brain
Imaging | 24 pages, 12 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper surveys various distance measures for networks and graphs that
were introduced in persistent homology. The scope of the paper is limited to
network distances that were actually used in brain networks but the methods can
be easily adapted to any weighted graph in other fields. Network versions of
the Gromov-Hausdorff, bottleneck, and kernel distances are introduced. We also
introduce a recently developed KS-test-like distance based on monotonic
topological features such as the zeroth Betti number. Numerous toy examples and
the results of applying many different distances to the brain networks of
different clinical statuses and populations are given.
| [
{
"created": "Mon, 16 Jan 2017 05:19:44 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jul 2017 02:34:59 GMT",
"version": "v2"
}
] | 2017-07-13 | [
[
"Lee",
"Hyekyoung",
""
],
[
"Ma",
"Zhiwei",
""
],
[
"Wang",
"Yuan",
""
],
[
"Chung",
"Moo K.",
""
]
] | This paper surveys various distance measures for networks and graphs that were introduced in persistent homology. The scope of the paper is limited to network distances that were actually used in brain networks but the methods can be easily adapted to any weighted graph in other fields. Network versions of the Gromov-Hausdorff, bottleneck, and kernel distances are introduced. We also introduce a recently developed KS-test-like distance based on monotonic topological features such as the zeroth Betti number. Numerous toy examples and the results of applying many different distances to the brain networks of different clinical statuses and populations are given. |
2201.00399 | Chandre Dharma-wardana | M. W. C. Dharma-wardana (NRC Canada) | Comment on "Water sources and kidney function: investigating chronic
kidney disease of unknown etiology in a prospective study", by P. Vlahos et
al | one figure, pdf file | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vlahos et al., Ref. 1, npj Clean Water 4, 50 (2021), have reported the
presence of pesticide contamination above safe levels in a "single time-point
analysis" of well water in a region of Sri Lanka where chronic kidney disease
of unknown etiology (CKDu) is endemic. They conclude "that agrochemical use in
paddy and other agricultural practices ... of the Green Revolution in Sri Lanka
may now be contributing to ill health, rapid progression of disease, and
mortality". The authors also propose "reducing ... agrochemical contaminants in
Sri Lanka and other tropical countries to reduce ... CKDu". These conclusions,
based on what they call a "single time-point analysis" and tantamount to an
identification of the etiology of CKDu, are unsupported by the evidence
presented by Vlahos et al. They do not satisfy even the simplest of the
Bradford-Hill criteria for causation. In particular, (i) similar but
non-persistent pesticide excesses have been detected sporadically in most parts
of the country, including where there is no CKDu; (ii) the pesticides reported
in (1) cause both hepatotoxicity and nephrotoxicity, the latter with glomerular
damage, while CKDu is associated with tubulo-interstitial damage and no
hepatotoxic symptoms have been reported; (iii) the pesticides detected have
short half-lives and are used over short periods during farming, so the single
time-point analysis is inadequate and misleading; (iv) farming communities that
use pesticides in the same way but remain essentially free of CKDu are found
adjacent to communities with CKDu; (v) CKDu prevalence seems to correlate with
local geomorphology but shows no correlation with agriculture, which is
practiced in most parts of the country.
| [
{
"created": "Tue, 28 Dec 2021 16:52:22 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jan 2022 15:57:11 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Jan 2022 01:43:38 GMT",
"version": "v3"
}
] | 2022-01-19 | [
[
"Dharma-wardana",
"M. W. C.",
"",
"NRC Canada"
]
] | Vlahos et al., Ref. 1, npj Clean Water 4, 50 (2021), have reported the presence of pesticide contamination above safe levels in a "single time-point analysis" of well water in a region of Sri Lanka where chronic kidney disease of unknown etiology (CKDu) is endemic. They conclude "that agrochemical use in paddy and other agricultural practices ... of the Green Revolution in Sri Lanka may now be contributing to ill health, rapid progression of disease, and mortality". The authors also propose "reducing ... agrochemical contaminants in Sri Lanka and other tropical countries to reduce ... CKDu". These conclusions, based on what they call a "single time-point analysis" and tantamount to an identification of the etiology of CKDu, are unsupported by the evidence presented by Vlahos et al. They do not satisfy even the simplest of the Bradford-Hill criteria for causation. In particular, (i) similar but non-persistent pesticide excesses have been detected sporadically in most parts of the country, including where there is no CKDu; (ii) the pesticides reported in (1) cause both hepatotoxicity and nephrotoxicity, the latter with glomerular damage, while CKDu is associated with tubulo-interstitial damage and no hepatotoxic symptoms have been reported; (iii) the pesticides detected have short half-lives and are used over short periods during farming, so the single time-point analysis is inadequate and misleading; (iv) farming communities that use pesticides in the same way but remain essentially free of CKDu are found adjacent to communities with CKDu; (v) CKDu prevalence seems to correlate with local geomorphology but shows no correlation with agriculture, which is practiced in most parts of the country. |
2306.08005 | Caterina Mazzetti | Caterina Mazzetti, Alessandro Sarti, Giovanna Citti | A sub-Riemannian model of the functional architecture of M1 for arm
movement direction | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper we propose a neurogeometrical model of the behaviour of cells
of the arm area of the primary motor cortex (M1). We mathematically express the
hypercolumnar organization of M1 discovered by Georgopoulos as a fiber bundle,
as in classical sub-Riemannian models of the visual cortex (Hoffmann, Petitot,
Citti-Sarti). On this structure, we consider the selective tuning of M1 neurons
to the kinematic variables of position and direction of movement. We then
extend this model to encode the notion of fragments of movements introduced by
Hatsopoulos. In our approach, fragments are modelled as integral curves of
vector fields in a suitable sub-Riemannian space. These fragments are in good
agreement with movement decompositions obtained from neural activity data.
Here, we recover these patterns through a spectral clustering algorithm in the
sub-Riemannian structure we introduced, and compare our results with the
neurophysiological ones of Kadmon-Harpaz et al.
| [
{
"created": "Tue, 13 Jun 2023 08:03:03 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Mazzetti",
"Caterina",
""
],
[
"Sarti",
"Alessandro",
""
],
[
"Citti",
"Giovanna",
""
]
] | In this paper we propose a neurogeometrical model of the behaviour of cells of the arm area of the primary motor cortex (M1). We mathematically express the hypercolumnar organization of M1 discovered by Georgopoulos as a fiber bundle, as in classical sub-Riemannian models of the visual cortex (Hoffmann, Petitot, Citti-Sarti). On this structure, we consider the selective tuning of M1 neurons to the kinematic variables of position and direction of movement. We then extend this model to encode the notion of fragments of movements introduced by Hatsopoulos. In our approach, fragments are modelled as integral curves of vector fields in a suitable sub-Riemannian space. These fragments are in good agreement with movement decompositions obtained from neural activity data. Here, we recover these patterns through a spectral clustering algorithm in the sub-Riemannian structure we introduced, and compare our results with the neurophysiological ones of Kadmon-Harpaz et al. |
2311.01648 | Swadesh Pal | Swadesh Pal, Malay Banerjee, Roderick Melnik | The Role of Soil Surface in a Sustainable Semiarid Ecosystem | 21 pages, 6 figures, 1 table | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | Patterns in a semiarid ecosystem are important because they directly and
indirectly affect ecological processes, biodiversity, and ecosystem resilience.
Understanding the causes and effects of these patterns is critical for
long-term land surface management and conservation efforts in semiarid regions,
which are especially sensitive to climate change and human-caused disturbances.
It is known that there is a regular connection between the vegetation and the
living species in a habitat since some animals evolved to live in a semiarid
ecosystem and rely on plants for food. In this work, we have constructed a
coupled mathematical model to connect the water resource, vegetation and living
organisms and have investigated how the soil surface affects the resulting
patterns for the long term. This study contributes to a better understanding of
ecological patterns and processes in semiarid environments by shedding light on
the complex interaction mechanisms that depend on the structure of semiarid
ecosystems. The findings provide further critical insight into the influence of
efforts for improving ecosystem resilience and adjusting to the challenges
posed by climate change and human activities.
| [
{
"created": "Fri, 3 Nov 2023 00:42:48 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Dec 2023 18:46:43 GMT",
"version": "v2"
}
] | 2023-12-05 | [
[
"Pal",
"Swadesh",
""
],
[
"Banerjee",
"Malay",
""
],
[
"Melnik",
"Roderick",
""
]
] | Patterns in a semiarid ecosystem are important because they directly and indirectly affect ecological processes, biodiversity, and ecosystem resilience. Understanding the causes and effects of these patterns is critical for long-term land surface management and conservation efforts in semiarid regions, which are especially sensitive to climate change and human-caused disturbances. It is known that there is a regular connection between the vegetation and the living species in a habitat since some animals evolved to live in a semiarid ecosystem and rely on plants for food. In this work, we have constructed a coupled mathematical model to connect the water resource, vegetation and living organisms and have investigated how the soil surface affects the resulting patterns for the long term. This study contributes to a better understanding of ecological patterns and processes in semiarid environments by shedding light on the complex interaction mechanisms that depend on the structure of semiarid ecosystems. The findings provide further critical insight into the influence of efforts for improving ecosystem resilience and adjusting to the challenges posed by climate change and human activities. |
2402.04422 | Chuang Li | Chuang Li, Yichen Wei, Chao Qin, Shifan Chen, Xiaolong Shao | Immunogenic cell death triggered by pathogen ligands via host germ
line-encoded receptors | 30 pages, 3 figures | null | null | null | q-bio.MN q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | The strategic induction of cell death serves as a crucial immune defense
mechanism for the eradication of pathogenic infections within host cells.
Investigating the molecular mechanisms underlying immunogenic cell pathways has
significantly enhanced our understanding of the host's immunity. This review
provides a comprehensive overview of the immunogenic cell death mechanisms
triggered by pathogen infections, focusing on the critical role of pattern
recognition receptors. In response to infections, host cells dictate a variety
of cell death pathways, including apoptosis, pyroptosis, necrosis, and
lysosomal cell death, which are essential for amplifying immune responses and
controlling pathogen dissemination. Key components of these mechanisms are host
cellular receptors that recognize pathogen-associated ligands. These receptors
activate downstream signaling cascades, leading to the expression of
immunoregulatory genes and the production of antimicrobial cytokines and
chemokines. Particularly, the inflammasome, a multi-protein complex, plays a
pivotal role in these responses by processing pro-inflammatory cytokines and
inducing pyroptotic cell death. Pathogens, in turn, have evolved strategies to
manipulate these cell death pathways, either by inhibiting them to facilitate
their replication or by triggering them to evade host defenses. This dynamic
interplay between host immune mechanisms and pathogen strategies highlights the
intricate co-evolution of microbial virulence and host immunity.
| [
{
"created": "Tue, 6 Feb 2024 21:48:01 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Li",
"Chuang",
""
],
[
"Wei",
"Yichen",
""
],
[
"Qin",
"Chao",
""
],
[
"Chen",
"Shifan",
""
],
[
"Shao",
"Xiaolong",
""
]
] | The strategic induction of cell death serves as a crucial immune defense mechanism for the eradication of pathogenic infections within host cells. Investigating the molecular mechanisms underlying immunogenic cell pathways has significantly enhanced our understanding of the host's immunity. This review provides a comprehensive overview of the immunogenic cell death mechanisms triggered by pathogen infections, focusing on the critical role of pattern recognition receptors. In response to infections, host cells dictate a variety of cell death pathways, including apoptosis, pyroptosis, necrosis, and lysosomal cell death, which are essential for amplifying immune responses and controlling pathogen dissemination. Key components of these mechanisms are host cellular receptors that recognize pathogen-associated ligands. These receptors activate downstream signaling cascades, leading to the expression of immunoregulatory genes and the production of antimicrobial cytokines and chemokines. Particularly, the inflammasome, a multi-protein complex, plays a pivotal role in these responses by processing pro-inflammatory cytokines and inducing pyroptotic cell death. Pathogens, in turn, have evolved strategies to manipulate these cell death pathways, either by inhibiting them to facilitate their replication or by triggering them to evade host defenses. This dynamic interplay between host immune mechanisms and pathogen strategies highlights the intricate co-evolution of microbial virulence and host immunity. |
0909.4248 | Rui Dilao | Rui Dil\~ao, Daniele Muraro | Emergent thresholds in genetic regulatory networks: Protein patterning
in Drosophila morphogenesis | 28 pages, 9 figures | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general methodology for building mathematical models of
genetic regulatory networks. This approach is based on the mass action law and
on the Jacob and Monod operon model. The mathematical models are built
symbolically by the \emph{Mathematica} software package \emph{GeneticNetworks}.
This package accepts as input the interaction graphs of the transcriptional
activators and repressors and, as output, gives the mathematical model in the
form of a system of ordinary differential equations. All the relevant
biological parameters are chosen automatically by the software. Within this
framework, we show that threshold effects in biology emerge from the catalytic
properties of genes and their associated conservation laws. We apply this
methodology to the segment patterning in \emph{Drosophila} early development
and we calibrate and validate the genetic transcriptional network responsible
for the patterning of the gap proteins Hunchback and Knirps, along the
antero-posterior axis of the \emph{Drosophila} embryo. This shows that
patterning at the gap genes stage is a consequence of the relations between the
transcriptional regulators and their initial conditions along the embryo.
| [
{
"created": "Wed, 23 Sep 2009 16:20:58 GMT",
"version": "v1"
}
] | 2009-09-24 | [
[
"Dilão",
"Rui",
""
],
[
"Muraro",
"Daniele",
""
]
] | We present a general methodology for building mathematical models of genetic regulatory networks. This approach is based on the mass action law and on the Jacob and Monod operon model. The mathematical models are built symbolically by the \emph{Mathematica} software package \emph{GeneticNetworks}. This package accepts as input the interaction graphs of the transcriptional activators and repressors and, as output, gives the mathematical model in the form of a system of ordinary differential equations. All the relevant biological parameters are chosen automatically by the software. Within this framework, we show that threshold effects in biology emerge from the catalytic properties of genes and their associated conservation laws. We apply this methodology to the segment patterning in \emph{Drosophila} early development and we calibrate and validate the genetic transcriptional network responsible for the patterning of the gap proteins Hunchback and Knirps, along the antero-posterior axis of the \emph{Drosophila} embryo. This shows that patterning at the gap genes stage is a consequence of the relations between the transcriptional regulators and their initial conditions along the embryo. |
1202.5434 | Geraldo A. Barbosa | Geraldo A. Barbosa | Can humans see beyond intensity images? | 8 pages, 5 figures | null | null | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human visual system detects intensity images. Quite interestingly,
detector systems have shown the existence of different kinds of images. Among
them, images obtained by two detectors (a detector array or a spatially
scanning detector) capturing signals within short time windows may reveal a
"hidden" image not contained in either isolated detector: information on this
image depends on the two detectors simultaneously. In general, such images are
called "high-order" images because they may depend on more than two electric
fields. Intensity images depend on the squared magnitude of the light's
electric field. Can the human visual sensory system perceive high-order images
as well? This paper proposes a way to test this idea. A positive answer could
give new insights into the "visual-conscience" machinery, opening a new sensory
channel for humans. Applications could be devised, e.g., head-position sensing,
privacy in communications at visual ranges, and many others.
| [
{
"created": "Fri, 24 Feb 2012 12:38:09 GMT",
"version": "v1"
}
] | 2012-02-27 | [
[
"Barbosa",
"Geraldo A.",
""
]
] | The human visual system detects intensity images. Quite interestingly, detector systems have shown the existence of different kinds of images. Among them, images obtained by two detectors (a detector array or a spatially scanning detector) capturing signals within short time windows may reveal a "hidden" image not contained in either isolated detector: information on this image depends on the two detectors simultaneously. In general, such images are called "high-order" images because they may depend on more than two electric fields. Intensity images depend on the squared magnitude of the light's electric field. Can the human visual sensory system perceive high-order images as well? This paper proposes a way to test this idea. A positive answer could give new insights into the "visual-conscience" machinery, opening a new sensory channel for humans. Applications could be devised, e.g., head-position sensing, privacy in communications at visual ranges, and many others. |
1811.03954 | Philippe Arnaud | Franck Court (IGMM), Philippe Arnaud (GReD) | An annotated list of bivalent chromatin regions in human ES cells: a new
tool for cancer epigenetic research | null | Oncotarget, Impact journals, 2017, 8, pp.4110 - 4124 | null | null | q-bio.GN q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CpG islands (CGI) marked by bivalent chromatin in stem cells are believed to
be more prone to aberrant DNA methylation in tumor cells. The robustness and
genome-wide extent of this instructive program in different cancer types remain
to be determined. To address this issue we developed a user-friendly approach
to integrate the stem cell chromatin signature in customized DNA methylation
analyses. We used publicly available ChIP-sequencing datasets of several human
embryonic stem cell (hESC) lines to determine the extent of bivalent chromatin
genome-wide. We then created annotated lists of high-confidence bivalent,
H3K4me3-only and H3K27me3-only chromatin regions. The main features of bivalent
regions included localization in CGI/promoters, depletion in retroelements and
enrichment in specific histone modifications, including the poorly
characterized H3K23me2 mark. Moreover, bivalent promoters could be classified
in three clusters based on PRC2 and PolII complexes occupancy. Genes with
bivalent promoters of the PRC2-defined cluster displayed the lowest expression
upon differentiation. As proof-of-concept, we assessed the DNA methylation
pattern of eight types of tumors and confirmed that aberrant cancer-associated
DNA hypermethylation preferentially targets CGI characterized by bivalent
chromatin in hESCs. We also found that such aberrant DNA hypermethylation
affected particularly bivalent CGI/promoters associated with genes that tend to
remain repressed upon differentiation. Strikingly, bivalent CGI were the most
affected by aberrant DNA hypermethylation in both CpG Island Methylator
Phenotype-positive (CIMP+) and CIMP-negative tumors, suggesting that, besides
transcriptional silencing in the pre-tumorigenic cells, the bivalent chromatin
signature in hESCs is a key determinant of the instructive program for aberrant
DNA methylation.
| [
{
"created": "Fri, 9 Nov 2018 15:07:24 GMT",
"version": "v1"
}
] | 2018-11-12 | [
[
"Court",
"Franck",
"",
"IGMM"
],
[
"Arnaud",
"Philippe",
"",
"GReD"
]
] | CpG islands (CGI) marked by bivalent chromatin in stem cells are believed to be more prone to aberrant DNA methylation in tumor cells. The robustness and genome-wide extent of this instructive program in different cancer types remain to be determined. To address this issue we developed a user-friendly approach to integrate the stem cell chromatin signature in customized DNA methylation analyses. We used publicly available ChIP-sequencing datasets of several human embryonic stem cell (hESC) lines to determine the extent of bivalent chromatin genome-wide. We then created annotated lists of high-confidence bivalent, H3K4me3-only and H3K27me3-only chromatin regions. The main features of bivalent regions included localization in CGI/promoters, depletion in retroelements and enrichment in specific histone modifications, including the poorly characterized H3K23me2 mark. Moreover, bivalent promoters could be classified in three clusters based on PRC2 and PolII complexes occupancy. Genes with bivalent promoters of the PRC2-defined cluster displayed the lowest expression upon differentiation. As proof-of-concept, we assessed the DNA methylation pattern of eight types of tumors and confirmed that aberrant cancer-associated DNA hypermethylation preferentially targets CGI characterized by bivalent chromatin in hESCs. We also found that such aberrant DNA hypermethylation affected particularly bivalent CGI/promoters associated with genes that tend to remain repressed upon differentiation. Strikingly, bivalent CGI were the most affected by aberrant DNA hypermethylation in both CpG Island Methylator Phenotype-positive (CIMP+) and CIMP-negative tumors, suggesting that, besides transcriptional silencing in the pre-tumorigenic cells, the bivalent chromatin signature in hESCs is a key determinant of the instructive program for aberrant DNA methylation. |
1906.02411 | Michael Hendriksen | Michael Hendriksen and Andrew Francis | A partial order and cluster-similarity metric on rooted phylogenetic
trees | 23 pages, 11 figures | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metrics on rooted phylogenetic trees are integral to a number of areas of
phylogenetic analysis. Cluster-similarity metrics have recently been introduced
in order to limit skew in the distribution of distances, and to ensure that
trees in the neighbourhood of each other have similar hierarchies. In the
present paper we introduce a new cluster-similarity metric on rooted
phylogenetic tree space that has an associated local operation, allowing for
easy calculation of neighbourhoods, a trait that is desirable for MCMC
calculations. The metric is defined by the distance on the Hasse diagram
induced by a partial order on the set of rooted phylogenetic trees, itself
based on the notion of a hierarchy-preserving map between trees. The partial
order we introduce is a refinement of the well-known refinement order on
hierarchies. Both the partial order and the hierarchy-preserving maps may also
be of independent interest.
| [
{
"created": "Thu, 6 Jun 2019 04:33:18 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Nov 2019 01:13:27 GMT",
"version": "v2"
}
] | 2019-11-26 | [
[
"Hendriksen",
"Michael",
""
],
[
"Francis",
"Andrew",
""
]
] | Metrics on rooted phylogenetic trees are integral to a number of areas of phylogenetic analysis. Cluster-similarity metrics have recently been introduced in order to limit skew in the distribution of distances, and to ensure that trees in the neighbourhood of each other have similar hierarchies. In the present paper we introduce a new cluster-similarity metric on rooted phylogenetic tree space that has an associated local operation, allowing for easy calculation of neighbourhoods, a trait that is desirable for MCMC calculations. The metric is defined by the distance on the Hasse diagram induced by a partial order on the set of rooted phylogenetic trees, itself based on the notion of a hierarchy-preserving map between trees. The partial order we introduce is a refinement of the well-known refinement order on hierarchies. Both the partial order and the hierarchy-preserving maps may also be of independent interest. |
1806.04752 | Sima Sarv Ahrabi | Sima Sarv Ahrabi | Optimal control in cancer immunotherapy by the application of particle
swarm optimization | 12 pages, 6 figures | null | 10.1007/s00285-020-01525-7 | null | q-bio.QM math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, a well-known mathematical model of cancer immunotherapy is
discussed and used to represent therapeutic protocols for cancer treatment. The
optimal control problem is formulated based on the Pontryagin maximum principle
to deal with adoptive cellular immunotherapy, then the problem has been solved
by the application of particle swarm optimization (PSO) in combination with
standard methods for solving optimal control problems. The results are
compared with those of other researchers. It is explained how the PSO algorithm
could be employed to obtain the optimal controls; the obtained optimal
controls are then demonstrated to be more appropriate for the elimination of
cancer cells, using smaller amounts of external sources of medicine.
| [
{
"created": "Fri, 8 Jun 2018 14:01:38 GMT",
"version": "v1"
}
] | 2020-08-04 | [
[
"Ahrabi",
"Sima Sarv",
""
]
] | In this article, a well-known mathematical model of cancer immunotherapy is discussed and used to represent therapeutic protocols for cancer treatment. The optimal control problem is formulated based on the Pontryagin maximum principle to deal with adoptive cellular immunotherapy, then the problem has been solved by the application of particle swarm optimization (PSO) in combination with standard methods for solving optimal control problems. The results are compared with those of other researchers. It is explained how the PSO algorithm could be employed to obtain the optimal controls; the obtained optimal controls are then demonstrated to be more appropriate for the elimination of cancer cells, using smaller amounts of external sources of medicine. |
1701.06615 | Philipp Altrock | Philipp M. Altrock and Arne Traulsen and Martin A. Nowak | Evolutionary games on cycles with strong selection | 5 Figures. Accepted for publication (Physical Review E) | Physical Review E 95, 022407 (2017) | 10.1103/PhysRevE.95.022407 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolutionary games on graphs describe how strategic interactions and
population structure determine evolutionary success, quantified by the
probability that a single mutant takes over a population. Graph structures,
compared to the well-mixed case, can act as amplifiers or suppressors of
selection by increasing or decreasing the fixation probability of a beneficial
mutant. Properties of the associated mean fixation times can be more intricate,
especially when selection is strong. The intuition is that fixation of a
beneficial mutant happens fast (in a dominance game), that fixation takes very
long (in a coexistence game), and that strong selection eliminates demographic
noise. Here we show that these intuitions can be misleading in structured
populations. We analyze mean fixation times on the cycle graph under strong
frequency-dependent selection for two different microscopic evolutionary update
rules (death-birth and birth-death). We establish exact analytical results for
fixation times under strong selection, and show that there are coexistence
games in which fixation occurs in time polynomial in population size. Depending
on the underlying game, we observe inherence of demographic noise even under
strong selection, if the process is driven by random death before selection for
birth of an offspring (death-birth update). In contrast, if selection for an
offspring occurs before random removal (birth-death update), strong selection
can remove demographic noise almost entirely.
| [
{
"created": "Mon, 23 Jan 2017 20:14:21 GMT",
"version": "v1"
}
] | 2017-05-08 | [
[
"Altrock",
"Philipp M.",
""
],
[
"Traulsen",
"Arne",
""
],
[
"Nowak",
"Martin A.",
""
]
] | Evolutionary games on graphs describe how strategic interactions and population structure determine evolutionary success, quantified by the probability that a single mutant takes over a population. Graph structures, compared to the well-mixed case, can act as amplifiers or suppressors of selection by increasing or decreasing the fixation probability of a beneficial mutant. Properties of the associated mean fixation times can be more intricate, especially when selection is strong. The intuition is that fixation of a beneficial mutant happens fast (in a dominance game), that fixation takes very long (in a coexistence game), and that strong selection eliminates demographic noise. Here we show that these intuitions can be misleading in structured populations. We analyze mean fixation times on the cycle graph under strong frequency-dependent selection for two different microscopic evolutionary update rules (death-birth and birth-death). We establish exact analytical results for fixation times under strong selection, and show that there are coexistence games in which fixation occurs in time polynomial in population size. Depending on the underlying game, we observe inherence of demographic noise even under strong selection, if the process is driven by random death before selection for birth of an offspring (death-birth update). In contrast, if selection for an offspring occurs before random removal (birth-death update), strong selection can remove demographic noise almost entirely. |
2005.04203 | Vanessa Bielefeldt Leotti | Nicole Machado Utpott, Vanessa Bielefeldt Leotti, Laura Bannach Jardim | Equaliza\c{c}\~ao das escalas NESSCA e SARA utilizando a Teoria da
Resposta ao Item na avalia\c{c}\~ao do comprometimento pela doen\c{c}a de
Machado-Joseph | in Portuguese | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Background: Scale equating is a statistical technique used to establish
equivalence relations between different scales. Its use is quite popular in
educational evaluation, however, unusual in the health area, where scales of
measures are tools that integrate clinical practice. With the use of different
scales, there is a difficulty in comparing scientific results, such as NESSCA
and SARA scales, tools for assessing the commitment to Machado-Joseph disease
(SCA3/MJD). Objective: Explore the method of scale equating and demonstrate its
application through NESSCA and SARA scales, using the Item Response Theory
(IRT) approach in assessing SCA3/MJD commitment. Methods: Data came from 227
patients from the Hospital de Cl\'inicas de Porto Alegre with SCA3/MJD who have
complete measures for NESSCA and/or SARA scales. The equating design used is
that of non-equivalent groups with common items, with separate calibration. The
IRT model used in the estimation of the parameters was the generalized partial
credit, for NESSCA and SARA. The linear transformation was performed using the
Mean/Mean, Mean/Sigma, Haebara and Stocking-Lord methods and the equation of the
true score was applied to obtain an estimated relationship between the scores
of the scales. Results: Difference between NESSCA score estimated by SARA and
observed NESSCA score has shown median of 0.82 points, by Mean/Sigma method.
This was the best method of linear transformation among the tested.
Conclusions: This study extended the use of scale equating under IRT approach
to health outcomes and established an equivalence relationship between NESSCA
and SARA scores, making the comparison between patients and scientific results
feasible.
| [
{
"created": "Fri, 24 Apr 2020 20:31:44 GMT",
"version": "v1"
}
] | 2020-05-11 | [
[
"Utpott",
"Nicole Machado",
""
],
[
"Leotti",
"Vanessa Bielefeldt",
""
],
[
"Jardim",
"Laura Bannach",
""
]
] | Background: Scale equating is a statistical technique used to establish equivalence relations between different scales. Its use is quite popular in educational evaluation, however, unusual in the health area, where scales of measures are tools that integrate clinical practice. With the use of different scales, there is a difficulty in comparing scientific results, such as NESSCA and SARA scales, tools for assessing the commitment to Machado-Joseph disease (SCA3/MJD). Objective: Explore the method of scale equating and demonstrate its application through NESSCA and SARA scales, using the Item Response Theory (IRT) approach in assessing SCA3/MJD commitment. Methods: Data came from 227 patients from the Hospital de Cl\'inicas de Porto Alegre with SCA3/MJD who have complete measures for NESSCA and/or SARA scales. The equating design used is that of non-equivalent groups with common items, with separate calibration. The IRT model used in the estimation of the parameters was the generalized partial credit, for NESSCA and SARA. The linear transformation was performed using the Mean/Mean, Mean/Sigma, Haebara and Stocking-Lord methods and the equation of the true score was applied to obtain an estimated relationship between the scores of the scales. Results: Difference between NESSCA score estimated by SARA and observed NESSCA score has shown median of 0.82 points, by Mean/Sigma method. This was the best method of linear transformation among the tested. Conclusions: This study extended the use of scale equating under IRT approach to health outcomes and established an equivalence relationship between NESSCA and SARA scores, making the comparison between patients and scientific results feasible. |
1202.4811 | Simon Gravel | Simon Gravel | Population genetics models of local ancestry | 25 pages with 7 figures; Genetics: Published online before print
April 4, 2012 | null | 10.1534/genetics.112.139808 | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Migrations have played an important role in shaping the genetic diversity of
human populations. Understanding genomic data thus requires careful modeling of
historical gene flow. Here we consider the effect of relatively recent
population structure and gene flow, and interpret genomes of individuals that
have ancestry from multiple source populations as mosaics of segments
originating from each population. We propose general and tractable models for
describing the evolution of these patterns of local ancestry and their impact
on genetic diversity. We focus on the length distribution of continuous
ancestry tracts, and the variance in total ancestry proportions among
individuals. The proposed models offer improved agreement with Wright-Fisher
simulation data when compared to state-of-the art models, and can be used to
infer various demographic parameters in gene flow models. Considering HapMap
African-American (ASW) data, we find that a model with two distinct phases of
`European' gene flow significantly improves the modeling of both tract lengths
and ancestry variances.
| [
{
"created": "Wed, 22 Feb 2012 02:58:18 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Apr 2012 17:53:37 GMT",
"version": "v2"
}
] | 2012-04-26 | [
[
"Gravel",
"Simon",
""
]
] | Migrations have played an important role in shaping the genetic diversity of human populations. Understanding genomic data thus requires careful modeling of historical gene flow. Here we consider the effect of relatively recent population structure and gene flow, and interpret genomes of individuals that have ancestry from multiple source populations as mosaics of segments originating from each population. We propose general and tractable models for describing the evolution of these patterns of local ancestry and their impact on genetic diversity. We focus on the length distribution of continuous ancestry tracts, and the variance in total ancestry proportions among individuals. The proposed models offer improved agreement with Wright-Fisher simulation data when compared to state-of-the art models, and can be used to infer various demographic parameters in gene flow models. Considering HapMap African-American (ASW) data, we find that a model with two distinct phases of `European' gene flow significantly improves the modeling of both tract lengths and ancestry variances. |
2102.01883 | Sergiy Shelyag | Maia Angelova, Philip M. Holloway, Sergiy Shelyag, Sutharshan
Rajasegarar, and H.G. Laurie Rauch | Effect of stress on cardiorespiratory synchronization of Ironmen
athletes | Accepted for publication in Frontiers in Physiology -- Fractal and
Network Physiology | null | 10.3389/fphys.2021.612245 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to investigate the cardiorespiratory synchronization
in athletes subjected to extreme physical stress combined with a cognitive
stress task. ECG and respiration were measured in 14 athletes before and after
the Ironmen competition. Stroop test was applied between the measurements
before and after the Ironmen competition to induce cognitive stress.
Synchrogram and empirical mode decomposition analysis were used for the first
time to investigate the effects of physical stress, induced by the Ironmen
competition, on the phase synchronization of the cardiac and respiratory
systems of Ironmen athletes before and after the competition. A cognitive
stress task (Stroop test) was performed both pre- and post-Ironman event in
order to prevent the athletes from cognitively controlling their breathing
rates. Our analysis showed that cardiorespiratory synchronization increased
post-Ironman race compared to pre-Ironman. The results suggest that the amount
of stress the athletes are recovering from post-competition is greater than the
effects of the Stroop test. This indicates that the recovery phase after the
competition is more important for restoring and maintaining homeostasis, which
could be another reason for stronger synchronization.
| [
{
"created": "Wed, 3 Feb 2021 05:33:54 GMT",
"version": "v1"
}
] | 2021-03-03 | [
[
"Angelova",
"Maia",
""
],
[
"Holloway",
"Philip M.",
""
],
[
"Shelyag",
"Sergiy",
""
],
[
"Rajasegarar",
"Sutharshan",
""
],
[
"Rauch",
"H. G. Laurie",
""
]
] | The aim of this paper is to investigate the cardiorespiratory synchronization in athletes subjected to extreme physical stress combined with a cognitive stress task. ECG and respiration were measured in 14 athletes before and after the Ironmen competition. Stroop test was applied between the measurements before and after the Ironmen competition to induce cognitive stress. Synchrogram and empirical mode decomposition analysis were used for the first time to investigate the effects of physical stress, induced by the Ironmen competition, on the phase synchronization of the cardiac and respiratory systems of Ironmen athletes before and after the competition. A cognitive stress task (Stroop test) was performed both pre- and post-Ironman event in order to prevent the athletes from cognitively controlling their breathing rates. Our analysis showed that cardiorespiratory synchronization increased post-Ironman race compared to pre-Ironman. The results suggest that the amount of stress the athletes are recovering from post-competition is greater than the effects of the Stroop test. This indicates that the recovery phase after the competition is more important for restoring and maintaining homeostasis, which could be another reason for stronger synchronization. |
1508.01155 | Linglin Yu | Linglin Yu, Mingyang Lu, Tianwu Zang and Jianpeng Ma | OPUS-Beta: A Statistical Potential for Beta-Sheet Contact Pattern in
Proteins | 6 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing an accurate scoring function is essential for successfully
predicting protein structures. In this study, we developed a statistical
potential function, called OPUS-Beta, for energetically evaluating beta-sheet
contact pattern (the entire residue-residue beta-contacts of a protein)
independent of the atomic coordinate information. The OPUS-Beta potential
contains five terms, i.e., a self-packing term, a pairwise inter-strand packing
term, a pairwise intra-strand packing term, a lattice term and a
hydrogen-bonding term. The results show that, in recognizing the native
beta-contact pattern from decoys, OPUS-Beta potential outperforms the existing
methods in literature, especially in combination with a method using
2D-recursive neural networks (about 5% and 23% improvements in top-1 and top-5
selections). We expect OPUS-Beta potential to be useful in beta-sheet modeling
for proteins.
| [
{
"created": "Mon, 3 Aug 2015 20:34:51 GMT",
"version": "v1"
}
] | 2015-08-06 | [
[
"Yu",
"Linglin",
""
],
[
"Lu",
"Mingyang",
""
],
[
"Zang",
"Tianwu",
""
],
[
"Ma",
"Jianpeng",
""
]
] | Developing an accurate scoring function is essential for successfully predicting protein structures. In this study, we developed a statistical potential function, called OPUS-Beta, for energetically evaluating beta-sheet contact pattern (the entire residue-residue beta-contacts of a protein) independent of the atomic coordinate information. The OPUS-Beta potential contains five terms, i.e., a self-packing term, a pairwise inter-strand packing term, a pairwise intra-strand packing term, a lattice term and a hydrogen-bonding term. The results show that, in recognizing the native beta-contact pattern from decoys, OPUS-Beta potential outperforms the existing methods in literature, especially in combination with a method using 2D-recursive neural networks (about 5% and 23% improvements in top-1 and top-5 selections). We expect OPUS-Beta potential to be useful in beta-sheet modeling for proteins. |
1801.07649 | Derin Sevenler | Derin Sevenler, George Daaboul, Fulya Ekiz-Kanik and M. Selim Unlu | A digital microarray using interferometric detection of plasmonic
nanorod labels | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA and protein microarrays are a high-throughput technology that allows the
simultaneous quantification of tens of thousands of different biomolecular
species. The mediocre sensitivity and dynamic range of traditional fluorescence
microarrays compared to other techniques have been the technology's Achilles'
Heel, and prevented their adoption for many biomedical and clinical diagnostic
applications. Previous work to enhance the sensitivity of microarray readout to
the single-molecule ('digital') regime has either required signal amplifying
chemistry or sacrificed throughput, nixing the platform's primary advantages.
Here, we report the development of a digital microarray which extends both the
sensitivity and dynamic range of microarrays by about three orders of
magnitude. This technique uses functionalized gold nanorods as single-molecule
labels and an interferometric scanner which can rapidly enumerate individual
nanorods by imaging them with a 10x objective lens. This approach does not
require any chemical enhancement such as silver deposition, and scans arrays
with a throughput similar to commercial fluorescence devices. By combining
single-nanoparticle enumeration and ensemble measurements of spots when the
particles are very dense, this system achieves a dynamic range of about one
million directly from a single scan.
| [
{
"created": "Tue, 23 Jan 2018 16:43:56 GMT",
"version": "v1"
}
] | 2018-01-24 | [
[
"Sevenler",
"Derin",
""
],
[
"Daaboul",
"George",
""
],
[
"Ekiz-Kanik",
"Fulya",
""
],
[
"Unlu",
"M. Selim",
""
]
] | DNA and protein microarrays are a high-throughput technology that allows the simultaneous quantification of tens of thousands of different biomolecular species. The mediocre sensitivity and dynamic range of traditional fluorescence microarrays compared to other techniques have been the technology's Achilles' Heel, and prevented their adoption for many biomedical and clinical diagnostic applications. Previous work to enhance the sensitivity of microarray readout to the single-molecule ('digital') regime has either required signal amplifying chemistry or sacrificed throughput, nixing the platform's primary advantages. Here, we report the development of a digital microarray which extends both the sensitivity and dynamic range of microarrays by about three orders of magnitude. This technique uses functionalized gold nanorods as single-molecule labels and an interferometric scanner which can rapidly enumerate individual nanorods by imaging them with a 10x objective lens. This approach does not require any chemical enhancement such as silver deposition, and scans arrays with a throughput similar to commercial fluorescence devices. By combining single-nanoparticle enumeration and ensemble measurements of spots when the particles are very dense, this system achieves a dynamic range of about one million directly from a single scan. |
2005.07550 | Samuel W.K. Wong | Samuel W.K. Wong | Assessing the impacts of mutations to the structure of COVID-19 spike
protein via sequential Monte Carlo | 15 pages, 4 figures | Journal of Data Science, 2020, 18(3): 511-525 | 10.6339/JDS.202007_18(3).0017 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteins play a key role in facilitating the infectiousness of the 2019 novel
coronavirus. A specific spike protein enables this virus to bind to human
cells, and a thorough understanding of its 3-dimensional structure is therefore
critical for developing effective therapeutic interventions. However, its
structure may continue to evolve over time as a result of mutations. In this
paper, we use a data science perspective to study the potential structural
impacts due to ongoing mutations in its amino acid sequence. To do so, we
identify a key segment of the protein and apply a sequential Monte Carlo
sampling method to detect possible changes to the space of low-energy
conformations for different amino acid sequences. Such computational approaches
can further our understanding of this protein structure and complement
laboratory efforts.
| [
{
"created": "Fri, 1 May 2020 16:01:27 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jun 2020 15:48:00 GMT",
"version": "v2"
}
] | 2020-10-28 | [
[
"Wong",
"Samuel W. K.",
""
]
] | Proteins play a key role in facilitating the infectiousness of the 2019 novel coronavirus. A specific spike protein enables this virus to bind to human cells, and a thorough understanding of its 3-dimensional structure is therefore critical for developing effective therapeutic interventions. However, its structure may continue to evolve over time as a result of mutations. In this paper, we use a data science perspective to study the potential structural impacts due to ongoing mutations in its amino acid sequence. To do so, we identify a key segment of the protein and apply a sequential Monte Carlo sampling method to detect possible changes to the space of low-energy conformations for different amino acid sequences. Such computational approaches can further our understanding of this protein structure and complement laboratory efforts. |
1704.07259 | Ankit Gupta | Ankit Gupta, Jan Mikelson and Mustafa Khammash | A finite state projection algorithm for the stationary solution of the
chemical master equation | 8 figures | null | 10.1063/1.5006484 | null | q-bio.QM math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The chemical master equation (CME) is frequently used in systems biology to
quantify the effects of stochastic fluctuations that arise due to biomolecular
species with low copy numbers. The CME is a system of ordinary differential
equations that describes the evolution of probability density for each
population vector in the state-space of the stochastic reaction dynamics. For
many examples of interest, this state-space is infinite, making it difficult to
obtain exact solutions of the CME. To deal with this problem, the Finite State
Projection (FSP) algorithm was developed by Munsky and Khammash (Jour. Chem.
Phys. 2006), to provide approximate solutions to the CME by truncating the
state-space. The FSP works well for finite time-periods but it cannot be used
for estimating the stationary solutions of CMEs, which are often of interest in
systems biology. The aim of this paper is to develop a version of FSP which we
refer to as the stationary FSP (sFSP) that allows one to obtain accurate
approximations of the stationary solutions of a CME by solving a finite
linear-algebraic system that yields the stationary distribution of a
continuous-time Markov chain over the truncated state-space. We derive bounds
for the approximation error incurred by sFSP and we establish that under
certain stability conditions, these errors can be made arbitrarily small by
appropriately expanding the truncated state-space. We provide several examples
to illustrate our sFSP method and demonstrate its efficiency in estimating the
stationary distributions. In particular, we show that using a quantised tensor
train (QTT) implementation of our sFSP method, problems admitting more than 100
million states can be efficiently solved.
| [
{
"created": "Mon, 24 Apr 2017 14:41:38 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2017 14:56:25 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Sep 2017 15:52:32 GMT",
"version": "v3"
}
] | 2017-10-25 | [
[
"Gupta",
"Ankit",
""
],
[
"Mikelson",
"Jan",
""
],
[
"Khammash",
"Mustafa",
""
]
] | The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash (Jour. Chem. Phys. 2006), to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantised tensor train (QTT) implementation of our sFSP method, problems admitting more than 100 million states can be efficiently solved. |
0708.0171 | Jean-Philippe Vert | Pierre Mah\'e (XRCE), Jean-Philippe Vert (CB) | Virtual screening with support vector machines and structure kernels | null | null | null | null | q-bio.QM cs.LG | null | Support vector machines and kernel methods have recently gained considerable
attention in chemoinformatics. They offer generally good performance for
problems of supervised classification or regression, and provide a flexible and
computationally efficient framework to include relevant information and prior
knowledge about the data and problems to be handled. In particular, with kernel
methods molecules do not need to be represented and stored explicitly as
vectors or fingerprints, but only to be compared to each other through a
comparison function technically called a kernel. While classical kernels can be
used to compare vector or fingerprint representations of molecules, completely
new kernels were developed in the recent years to directly compare the 2D or 3D
structures of molecules, without the need for an explicit vectorization step
through the extraction of molecular descriptors. While still in their infancy,
these approaches have already demonstrated their relevance on several toxicity
prediction and structure-activity relationship problems.
| [
{
"created": "Wed, 1 Aug 2007 19:13:52 GMT",
"version": "v1"
}
] | 2007-08-02 | [
[
"Mahé",
"Pierre",
"",
"XRCE"
],
[
"Vert",
"Jean-Philippe",
"",
"CB"
]
] | Support vector machines and kernel methods have recently gained considerable attention in chemoinformatics. They offer generally good performance for problems of supervised classification or regression, and provide a flexible and computationally efficient framework to include relevant information and prior knowledge about the data and problems to be handled. In particular, with kernel methods molecules do not need to be represented and stored explicitly as vectors or fingerprints, but only to be compared to each other through a comparison function technically called a kernel. While classical kernels can be used to compare vector or fingerprint representations of molecules, completely new kernels were developed in the recent years to directly compare the 2D or 3D structures of molecules, without the need for an explicit vectorization step through the extraction of molecular descriptors. While still in their infancy, these approaches have already demonstrated their relevance on several toxicity prediction and structure-activity relationship problems. |
1211.6666 | Fabien Campagne | Kevin C. Dorff, Nyasha Chambwe, Zachary Zeno, Rita Shaknovich and
Fabien Campagne | GobyWeb: simplified management and analysis of gene expression and DNA
methylation sequencing data | main manuscript: 17 pages, 4 tables, 6 figures. Supplementary
material: 4 figures. Comment on this manuscript on Twitter or Google+ with
this handle: #GobyWebPaper | null | 10.1371/journal.pone.0069666 | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present GobyWeb, a web-based system to facilitate the management and
analysis of high-throughput sequencing (HTS) projects. The software provides
integrated support for a broad set of HTS analyses and offers a simple plugin
extension mechanism. Analyses currently supported include quantification of
gene expression for messenger and small RNA sequencing, estimation of DNA
methylation (i.e., reduced bisulfite sequencing and whole genome methyl-seq),
or the detection of pathogens in sequenced data. In contrast to many analysis
pipelines developed for analysis of HTS data, GobyWeb requires significantly
less storage space, runs analyses efficiently on a parallel grid, scales
gracefully to process tens or hundreds of multi-gigabyte samples, yet can be
used effectively by researchers who are comfortable using a web browser.
GobyWeb can be obtained at http://gobyweb.campagnelab.org and is freely
available for non-commercial use.
| [
{
"created": "Wed, 28 Nov 2012 17:12:44 GMT",
"version": "v1"
}
] | 2015-06-12 | [
[
"Dorff",
"Kevin C.",
""
],
[
"Chambwe",
"Nyasha",
""
],
[
"Zeno",
"Zachary",
""
],
[
"Shaknovich",
"Rita",
""
],
[
"Campagne",
"Fabien",
""
]
] | We present GobyWeb, a web-based system to facilitate the management and analysis of high-throughput sequencing (HTS) projects. The software provides integrated support for a broad set of HTS analyses and offers a simple plugin extension mechanism. Analyses currently supported include quantification of gene expression for messenger and small RNA sequencing, estimation of DNA methylation (i.e., reduced bisulfite sequencing and whole genome methyl-seq), or the detection of pathogens in sequenced data. In contrast to many analysis pipelines developed for analysis of HTS data, GobyWeb requires significantly less storage space, runs analyses efficiently on a parallel grid, scales gracefully to process tens or hundreds of multi-gigabyte samples, yet can be used effectively by researchers who are comfortable using a web browser. GobyWeb can be obtained at http://gobyweb.campagnelab.org and is freely available for non-commercial use. |
1405.0760 | Arindam RoyChoudhury | Arindam RoyChoudhury | Consistency of the Maximum Likelihood Estimator of Evolutionary Tree | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maximum likelihood estimation (MLE) methods are widely used for evolutionary
tree estimation. As the evolutionary tree is not a smooth parameter, the consistency of its
MLE has been a topic of debate. It has been noted without proof that the
classical proof of consistency by Wald holds for the MLE of an evolutionary tree.
Other proofs of consistency under various models were also proposed. Here we
will discuss some shortcomings in some of these proofs and comment on the
applicability of Wald's proof.
| [
{
"created": "Mon, 5 May 2014 00:56:15 GMT",
"version": "v1"
}
] | 2014-05-06 | [
[
"RoyChoudhury",
"Arindam",
""
]
] | Maximum likelihood estimation (MLE) methods are widely used for evolutionary tree estimation. As the evolutionary tree is not a smooth parameter, the consistency of its MLE has been a topic of debate. It has been noted without proof that the classical proof of consistency by Wald holds for the MLE of an evolutionary tree. Other proofs of consistency under various models were also proposed. Here we will discuss some shortcomings in some of these proofs and comment on the applicability of Wald's proof. |
2002.06563 | Liu Hong | Liangrong Peng, Wuyue Yang, Dongyan Zhang, Changjing Zhuge, Liu Hong | Epidemic analysis of COVID-19 in China by dynamical modeling | 11 pages, 6 figures, 1 table | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The outbreak of novel coronavirus-caused pneumonia (COVID-19) in Wuhan has
attracted worldwide attention. Here, we propose a generalized SEIR model to
analyze this epidemic. Based on the public data of National Health Commission
of China from Jan. 20th to Feb. 9th, 2020, we reliably estimate key epidemic
parameters and make predictions on the inflection point and possible ending
time for 5 different regions. According to optimistic estimation, the epidemics
in Beijing and Shanghai will end soon within two weeks, while for most part of
China, including the majority of cities in Hubei province, the success of
the anti-epidemic efforts will come no later than the middle of March. The situation in Wuhan
is still very severe, at least based on public data until Feb. 15th. We expect
it will end up at the beginning of April. Moreover, by inverse inference, we
find the outbreak of COVID-19 in Mainland, Hubei province and Wuhan all can be
dated back to the end of December 2019, and the doubling time is around two
days at the early stage.
| [
{
"created": "Sun, 16 Feb 2020 12:16:17 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jun 2020 12:26:16 GMT",
"version": "v2"
}
] | 2020-06-26 | [
[
"Peng",
"Liangrong",
""
],
[
"Yang",
"Wuyue",
""
],
[
"Zhang",
"Dongyan",
""
],
[
"Zhuge",
"Changjing",
""
],
[
"Hong",
"Liu",
""
]
] | The outbreak of novel coronavirus-caused pneumonia (COVID-19) in Wuhan has attracted worldwide attention. Here, we propose a generalized SEIR model to analyze this epidemic. Based on the public data of National Health Commission of China from Jan. 20th to Feb. 9th, 2020, we reliably estimate key epidemic parameters and make predictions on the inflection point and possible ending time for 5 different regions. According to optimistic estimation, the epidemics in Beijing and Shanghai will end soon within two weeks, while for most part of China, including the majority of cities in Hubei province, the success of the anti-epidemic efforts will come no later than the middle of March. The situation in Wuhan is still very severe, at least based on public data until Feb. 15th. We expect it will end up at the beginning of April. Moreover, by inverse inference, we find the outbreak of COVID-19 in Mainland, Hubei province and Wuhan all can be dated back to the end of December 2019, and the doubling time is around two days at the early stage. |
1910.14623 | Anja F\"ullgrabe | Anja F\"ullgrabe, Nancy George, Matthew Green, Parisa Nejad, Bruce
Aronow, Laura Clarke, Silvie Korena Fexova, Clay Fischer, Mallory Ann
Freeberg, Laura Huerta, Norman Morrison, Richard H. Scheuermann, Deanne
Taylor, Nicole Vasilevsky, Nils Gehlenborg, John Marioni, Sarah Teichmann,
Alvis Brazma, Irene Papatheodorou | Guidelines for reporting single-cell RNA-Seq experiments | null | null | 10.1038/s41587-020-00744-z | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Single-cell RNA-Sequencing (scRNA-Seq) has undergone major technological
advances in recent years, enabling the conception of various organism-level
cell atlassing projects. With increasing numbers of datasets being deposited in
public archives, there is a need to address the challenges of enabling the
reproducibility of such data sets. Here, we describe guidelines for a minimum
set of metadata to sufficiently describe scRNA-Seq experiments, ensuring
reproducibility of data analyses.
| [
{
"created": "Thu, 31 Oct 2019 17:09:40 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Füllgrabe",
"Anja",
""
],
[
"George",
"Nancy",
""
],
[
"Green",
"Matthew",
""
],
[
"Nejad",
"Parisa",
""
],
[
"Aronow",
"Bruce",
""
],
[
"Clarke",
"Laura",
""
],
[
"Fexova",
"Silvie Korena",
""
],
[
... | Single-cell RNA-Sequencing (scRNA-Seq) has undergone major technological advances in recent years, enabling the conception of various organism-level cell atlassing projects. With increasing numbers of datasets being deposited in public archives, there is a need to address the challenges of enabling the reproducibility of such data sets. Here, we describe guidelines for a minimum set of metadata to sufficiently describe scRNA-Seq experiments, ensuring reproducibility of data analyses. |
2210.01769 | Sikun Lin | Sikun Lin, Thomas Sprague, Ambuj K Singh | Mind Reader: Reconstructing complex images from brain activities | null | null | null | null | q-bio.NC cs.CV cs.HC cs.LG eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding how the brain encodes external stimuli and how these stimuli
can be decoded from the measured brain activities are long-standing and
challenging questions in neuroscience. In this paper, we focus on
reconstructing the complex image stimuli from fMRI (functional magnetic
resonance imaging) signals. Unlike previous works that reconstruct images with
single objects or simple shapes, our work aims to reconstruct image stimuli
that are rich in semantics, closer to everyday scenes, and can reveal more
perspectives. However, data scarcity of fMRI datasets is the main obstacle to
applying state-of-the-art deep learning models to this problem. We find that
incorporating an additional text modality is beneficial for the reconstruction
problem compared to directly translating brain signals to images. Therefore,
the modalities involved in our method are: (i) voxel-level fMRI signals, (ii)
observed images that trigger the brain signals, and (iii) textual description
of the images. To further address data scarcity, we leverage an aligned
vision-language latent space pre-trained on massive datasets. Instead of
training models from scratch to find a latent space shared by the three
modalities, we encode fMRI signals into this pre-aligned latent space. Then,
conditioned on embeddings in this space, we reconstruct images with a
generative model. The reconstructed images from our pipeline balance both
naturalness and fidelity: they are photo-realistic and capture the ground truth
image contents well.
| [
{
"created": "Fri, 30 Sep 2022 06:32:46 GMT",
"version": "v1"
}
] | 2022-10-05 | [
[
"Lin",
"Sikun",
""
],
[
"Sprague",
"Thomas",
""
],
[
"Singh",
"Ambuj K",
""
]
] | Understanding how the brain encodes external stimuli and how these stimuli can be decoded from the measured brain activities are long-standing and challenging questions in neuroscience. In this paper, we focus on reconstructing the complex image stimuli from fMRI (functional magnetic resonance imaging) signals. Unlike previous works that reconstruct images with single objects or simple shapes, our work aims to reconstruct image stimuli that are rich in semantics, closer to everyday scenes, and can reveal more perspectives. However, data scarcity of fMRI datasets is the main obstacle to applying state-of-the-art deep learning models to this problem. We find that incorporating an additional text modality is beneficial for the reconstruction problem compared to directly translating brain signals to images. Therefore, the modalities involved in our method are: (i) voxel-level fMRI signals, (ii) observed images that trigger the brain signals, and (iii) textual description of the images. To further address data scarcity, we leverage an aligned vision-language latent space pre-trained on massive datasets. Instead of training models from scratch to find a latent space shared by the three modalities, we encode fMRI signals into this pre-aligned latent space. Then, conditioned on embeddings in this space, we reconstruct images with a generative model. The reconstructed images from our pipeline balance both naturalness and fidelity: they are photo-realistic and capture the ground truth image contents well. |
0710.1889 | Arvind Rao | Arvind Rao, Alfred O. Hero, David J. States, James Douglas Engel | Understanding Transcriptional Regulation Using De-novo Sequence Motif
Discovery, Network Inference and Interactome Data | 25 pages, 9 figs | null | null | null | q-bio.GN | null | Gene regulation is a complex process involving the role of several genomic
elements which work in concert to drive spatio-temporal expression. The
experimental characterization of gene regulatory elements is a very complex and
resource-intensive process. One of the major goals in computational biology is
the \textit{in-silico} annotation of previously uncharacterized elements using
results from the subset of known, previously annotated, regulatory elements.
The recent results of the ENCODE project (\emph{http://encode.nih.gov})
presented in-depth analysis of such functional (regulatory) non-coding elements
for 1% of the human genome. It is hoped that the results obtained on this
subset can be scaled to the rest of the genome. This is an extremely important
effort which will enable faster dissection of other functional elements in key
biological processes such as disease progression and organ development
(\cite{Kleinjan2005},\cite{Lieb2006}). The computational annotation of these
hitherto uncharacterized regions would require an identification of features
that have good predictive value.
In this work, we study transcriptional regulation as a problem in
heterogeneous data integration, across sequence, expression and interactome
level attributes. Using the example of the \textit{Gata2} gene and its recently
discovered urogenital enhancers \cite{Khandekar2004} as a case study, we
examine the predictive value of various high throughput functional genomic
assays (from projects like ENCODE and SymAtlas) in characterizing these
enhancers and their regulatory role. Observing results from the application of
modern statistical learning methodologies for each of these data modalities, we
propose a set of features that are most discriminatory to find these enhancers.
| [
{
"created": "Tue, 9 Oct 2007 23:14:29 GMT",
"version": "v1"
}
] | 2007-10-11 | [
[
"Rao",
"Arvind",
""
],
[
"Hero",
"Alfred O.",
""
],
[
"States",
"David J.",
""
],
[
"Engel",
"James Douglas",
""
]
] | Gene regulation is a complex process involving the role of several genomic elements which work in concert to drive spatio-temporal expression. The experimental characterization of gene regulatory elements is a very complex and resource-intensive process. One of the major goals in computational biology is the \textit{in-silico} annotation of previously uncharacterized elements using results from the subset of known, previously annotated, regulatory elements. The recent results of the ENCODE project (\emph{http://encode.nih.gov}) presented in-depth analysis of such functional (regulatory) non-coding elements for 1% of the human genome. It is hoped that the results obtained on this subset can be scaled to the rest of the genome. This is an extremely important effort which will enable faster dissection of other functional elements in key biological processes such as disease progression and organ development (\cite{Kleinjan2005},\cite{Lieb2006}). The computational annotation of these hitherto uncharacterized regions would require an identification of features that have good predictive value. In this work, we study transcriptional regulation as a problem in heterogeneous data integration, across sequence, expression and interactome level attributes. Using the example of the \textit{Gata2} gene and its recently discovered urogenital enhancers \cite{Khandekar2004} as a case study, we examine the predictive value of various high throughput functional genomic assays (from projects like ENCODE and SymAtlas) in characterizing these enhancers and their regulatory role. Observing results from the application of modern statistical learning methodologies for each of these data modalities, we propose a set of features that are most discriminatory to find these enhancers. |
2103.06120 | Glenn Young | Glenn Young, Pengcheng Xiao, Kenneth Newcomb, Edwin Michael | Interplay between COVID-19 vaccines and social measures for ending the
SARS-CoV-2 pandemic | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | The development and authorization of COVID-19 vaccines has provided the
clearest path forward to eliminate community spread and hence end the ongoing
SARS-CoV-2 pandemic. However, the limited pace at which the vaccine can be
administered motivates the question: to what extent must we continue to adhere
to social intervention measures such as mask wearing and social distancing? To
address this question, we develop a mathematical model of COVID-19 spread
incorporating both vaccine dynamics and socio-epidemiological parameters. We
use this model to study two important measures of disease control and
eradication, the effective reproductive number $R_t$ and the peak intensive
care unit (ICU) caseload, over three key parameters: social measure adherence,
vaccination rate, and vaccination coverage. Our results suggest that, due to
the slow pace of vaccine administration, social measures must be maintained by
a large proportion of the population until a sufficient proportion of the
population becomes vaccinated for the pandemic to be eradicated. By contrast,
with reduced adherence to social measures, hospital ICU cases will greatly
exceed capacity, resulting in increased avoidable loss of life. These findings
highlight the complex interplays involved between vaccination and social
protective measures, and indicate the practical importance of continuing with
extant social measures while vaccines are scaled up to allow the development of
the herd immunity needed to end or control SARS-CoV-2 sustainably.
| [
{
"created": "Sat, 6 Mar 2021 19:43:52 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Young",
"Glenn",
""
],
[
"Xiao",
"Pengcheng",
""
],
[
"Newcomb",
"Kenneth",
""
],
[
"Michael",
"Edwin",
""
]
] | The development and authorization of COVID-19 vaccines has provided the clearest path forward to eliminate community spread and hence end the ongoing SARS-CoV-2 pandemic. However, the limited pace at which the vaccine can be administered motivates the question: to what extent must we continue to adhere to social intervention measures such as mask wearing and social distancing? To address this question, we develop a mathematical model of COVID-19 spread incorporating both vaccine dynamics and socio-epidemiological parameters. We use this model to study two important measures of disease control and eradication, the effective reproductive number $R_t$ and the peak intensive care unit (ICU) caseload, over three key parameters: social measure adherence, vaccination rate, and vaccination coverage. Our results suggest that, due to the slow pace of vaccine administration, social measures must be maintained by a large proportion of the population until a sufficient proportion of the population becomes vaccinated for the pandemic to be eradicated. By contrast, with reduced adherence to social measures, hospital ICU cases will greatly exceed capacity, resulting in increased avoidable loss of life. These findings highlight the complex interplays involved between vaccination and social protective measures, and indicate the practical importance of continuing with extant social measures while vaccines are scaled up to allow the development of the herd immunity needed to end or control SARS-CoV-2 sustainably. |
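The interplay described in the record above — the effective reproductive number falling as vaccination depletes susceptibles while social measures suppress contact — can be sketched with a toy SIR-with-vaccination model. All parameter values here are illustrative, not the paper's calibrated socio-epidemiological values:

```python
# Toy SIR model with vaccination and social-measure adherence.
# Effective reproductive number: R_t = (1 - adherence * measure_eff) * R0 * S(t).
# All parameters are hypothetical, not the paper's calibrated values.
def simulate_rt(r0=3.0, gamma=0.1, vacc_rate=0.005,
                adherence=0.5, measure_eff=0.6, days=365):
    beta = r0 * gamma
    s, i, r = 0.999, 0.001, 0.0
    rts = []
    for _ in range(days):                       # dt = 1 day
        contact = 1.0 - adherence * measure_eff  # contact reduction from measures
        rts.append(contact * beta * s / gamma)   # effective reproductive number
        new_inf = contact * beta * s * i
        new_rec = gamma * i
        new_vac = min(vacc_rate, max(s - new_inf, 0.0))  # vaccinate susceptibles
        s -= new_inf + new_vac
        i += new_inf - new_rec
        r += new_rec + new_vac
    return rts

rts = simulate_rt()
```

With these illustrative numbers, R_t starts above 1 and is driven below 1 only once vaccination has sufficiently depleted the susceptible pool — the qualitative point of the abstract, that social measures must be maintained until vaccination coverage is sufficient.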
2401.04873 | Hue Sun Chan | Yi-Hsuan Lin, Tae Hun Kim, Suman Das, Tanmoy Pal, Jonas Wess\'en, Atul
Kaushik Rangadurai, Lewis E. Kay, Julie D. Forman-Kay, Hue Sun Chan | Electrostatics of Salt-Dependent Reentrant Phase Behaviors Highlights
Diverse Roles of ATP in Biomolecular Condensates | 67 pages, 2 main-text tables, 8 main-text figures, 6 supporting
figures, 155 references. Submitted to eLife | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Liquid-liquid phase separation (LLPS) involving intrinsically disordered
protein regions (IDRs) is a major physical mechanism for biological
membraneless compartmentalization. The multifaceted electrostatic effects in
these biomolecular condensates are exemplified here by experimental and
theoretical investigations of the different salt- and ATP-dependent LLPSs of an
IDR of messenger RNA-regulating protein Caprin1 and its phosphorylated variant
pY-Caprin1, exhibiting, e.g., reentrant behaviors in some instances but not
others. Experimental data are rationalized by physical modeling using
analytical theory, molecular dynamics, and polymer field-theoretic simulations,
indicating in general that interchain salt bridges enhance LLPS of
polyelectrolytes such as Caprin1 and that the high valency of ATP-magnesium is
a significant factor for its colocalization with the condensed phases, as
similar trends are observed for several other IDRs. Our findings underscore the
role of biomolecular condensates in modulating ion concentrations and its
functional ramifications.
| [
{
"created": "Wed, 10 Jan 2024 01:57:05 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2024 17:54:42 GMT",
"version": "v2"
}
] | 2024-06-19 | [
[
"Lin",
"Yi-Hsuan",
""
],
[
"Kim",
"Tae Hun",
""
],
[
"Das",
"Suman",
""
],
[
"Pal",
"Tanmoy",
""
],
[
"Wessén",
"Jonas",
""
],
[
"Rangadurai",
"Atul Kaushik",
""
],
[
"Kay",
"Lewis E.",
""
],
[
"For... | Liquid-liquid phase separation (LLPS) involving intrinsically disordered protein regions (IDRs) is a major physical mechanism for biological membraneless compartmentalization. The multifaceted electrostatic effects in these biomolecular condensates are exemplified here by experimental and theoretical investigations of the different salt- and ATP-dependent LLPSs of an IDR of messenger RNA-regulating protein Caprin1 and its phosphorylated variant pY-Caprin1, exhibiting, e.g., reentrant behaviors in some instances but not others. Experimental data are rationalized by physical modeling using analytical theory, molecular dynamics, and polymer field-theoretic simulations, indicating in general that interchain salt bridges enhance LLPS of polyelectrolytes such as Caprin1 and that the high valency of ATP-magnesium is a significant factor for its colocalization with the condensed phases, as similar trends are observed for several other IDRs. Our findings underscore the role of biomolecular condensates in modulating ion concentrations and its functional ramifications. |
2205.08540 | Baibhab Chatterjee | Baibhab Chatterjee, Mayukh Nath, Gaurav Kumar K, Shulan Xiao, Krishna
Jayant, Shreyas Sen | Bi-Phasic Quasistatic Brain Communication for Fully Untethered Connected
Brain Implants | 22 pages | null | null | null | q-bio.NC cs.SY eess.SP eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wireless communication using electro-magnetic (EM) fields acts as the
backbone for information exchange among wearable devices around the human body.
However, for implanted devices, EM fields incur a high amount of absorption in
the tissue, while alternative modes of transmission, including ultrasound,
optical and magneto-electric methods, result in a large amount of transduction
losses due to conversion of one form of energy to another, thereby increasing
the overall end-to-end energy loss. To solve the challenge of powering and
communication in a brain implant with low end-to-end channel loss, we present
Bi-Phasic Quasistatic Brain Communication (BP-QBC), achieving < 60dB worst-case
end-to-end channel loss at a channel length of 55mm, by avoiding the
transduction losses during field-modality conversion. BP-QBC utilizes dipole
coupling based signal transmission within the brain tissue using differential
excitation in the transmitter and differential signal pick-up at the receiver,
and offers 41X lower power w.r.t. traditional Galvanic Human Body Communication
at a carrier frequency of 1MHz, by blocking any DC current paths through the
brain tissue. Since the electrical signal transfer through the human tissue is
electro-quasistatic up to several 10's of MHz range, BP-QBC allows a scalable
(bps-10Mbps) duty-cycled uplink from the implant to an external wearable. The
power consumption in the BP-QBC TX is only 0.52uW at 1Mbps (with 1% duty
cycling), which is within the range of harvested body-coupled power in the
downlink from an external wearable to the brain implant. Furthermore, BP-QBC
eliminates the need for sub-cranial repeaters, as it utilizes quasi-static
electrical signals, thereby avoiding any transduction losses. Such low
end-to-end channel loss with high data rates would find applications in
neuroscience, brain-machine interfaces, electroceuticals and connected
healthcare.
| [
{
"created": "Wed, 18 May 2022 06:11:13 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Oct 2022 08:29:05 GMT",
"version": "v2"
},
{
"created": "Wed, 31 May 2023 20:19:49 GMT",
"version": "v3"
},
{
"created": "Tue, 4 Jul 2023 15:54:59 GMT",
"version": "v4"
}
] | 2023-07-06 | [
[
"Chatterjee",
"Baibhab",
""
],
[
"Nath",
"Mayukh",
""
],
[
"K",
"Gaurav Kumar",
""
],
[
"Xiao",
"Shulan",
""
],
[
"Jayant",
"Krishna",
""
],
[
"Sen",
"Shreyas",
""
]
] | Wireless communication using electro-magnetic (EM) fields acts as the backbone for information exchange among wearable devices around the human body. However, for implanted devices, EM fields incur a high amount of absorption in the tissue, while alternative modes of transmission, including ultrasound, optical and magneto-electric methods, result in a large amount of transduction losses due to conversion of one form of energy to another, thereby increasing the overall end-to-end energy loss. To solve the challenge of powering and communication in a brain implant with low end-to-end channel loss, we present Bi-Phasic Quasistatic Brain Communication (BP-QBC), achieving < 60dB worst-case end-to-end channel loss at a channel length of 55mm, by avoiding the transduction losses during field-modality conversion. BP-QBC utilizes dipole coupling based signal transmission within the brain tissue using differential excitation in the transmitter and differential signal pick-up at the receiver, and offers 41X lower power w.r.t. traditional Galvanic Human Body Communication at a carrier frequency of 1MHz, by blocking any DC current paths through the brain tissue. Since the electrical signal transfer through the human tissue is electro-quasistatic up to several 10's of MHz range, BP-QBC allows a scalable (bps-10Mbps) duty-cycled uplink from the implant to an external wearable. The power consumption in the BP-QBC TX is only 0.52uW at 1Mbps (with 1% duty cycling), which is within the range of harvested body-coupled power in the downlink from an external wearable to the brain implant. Furthermore, BP-QBC eliminates the need for sub-cranial repeaters, as it utilizes quasi-static electrical signals, thereby avoiding any transduction losses. Such low end-to-end channel loss with high data rates would find applications in neuroscience, brain-machine interfaces, electroceuticals and connected healthcare. |
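As a back-of-envelope check on the figures quoted in the record above (< 60 dB worst-case channel loss, 0.52 uW transmit power), the dB arithmetic works out as follows:

```python
# dB link-budget arithmetic for the figures quoted in the abstract above.
def db_loss_to_power_ratio(loss_db):
    """A loss of L dB means received/transmitted power = 10**(-L/10)."""
    return 10.0 ** (-loss_db / 10.0)

# 60 dB end-to-end loss -> one-millionth of the power survives the channel.
ratio = db_loss_to_power_ratio(60.0)

# 0.52 uW transmitted through the worst-case 60 dB channel:
rx_power_w = 0.52e-6 * ratio   # about 0.52 pW at the receiver
```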
1912.08163 | Yuri Morokov | Yu. N. Morokov | Simple mathematical model of aging | 8 pages, 1 figure | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simple mathematical model of the aging process for long-lived organisms is
considered. The key point in this model is the assumption that the body does
not have internal clocks that count out the chronological time at scales of
decades. At these scales, we may limit ourselves to empirical consideration of
only the background (smoothed, averaged) processes. The body is dealing with
internal biological factors, which can be considered as the biological clocks
in suitable parameterization of corresponding variables. The dynamics of these
variables is described using a system of autonomous ODEs. A particular
representation of the right-hand side of equations in the form of quadratic
polynomials is considered. In the simplest case of one variable we deal with a
logistic equation, which has an analytical solution. Such a quadratic model is
justified if it is used to predict the dynamics of the aging process for relatively
small time intervals. However, since a well-defined biological interpretation
can be given for the quadratic right-hand side of the equations, we can expect
that the area of applicability of this simplified empirical model can extend to
relatively large time intervals. The considered model is, in our opinion, a
good and simple mathematical framework for organizing experimental data. The
model may be useful to quantify and arrange the chains of causal relationships
that may appear significant for the development of the aging process.
| [
{
"created": "Tue, 17 Dec 2019 18:03:35 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Dec 2019 15:16:49 GMT",
"version": "v2"
}
] | 2019-12-30 | [
[
"Morokov",
"Yu. N.",
""
]
] | A simple mathematical model of the aging process for long-lived organisms is considered. The key point in this model is the assumption that the body does not have internal clocks that count out the chronological time at scales of decades. At these scales, we may limit ourselves to empirical consideration of only the background (smoothed, averaged) processes. The body is dealing with internal biological factors, which can be considered as the biological clocks in suitable parameterization of corresponding variables. The dynamics of these variables is described using a system of autonomous ODEs. A particular representation of the right-hand side of equations in the form of quadratic polynomials is considered. In the simplest case of one variable we deal with a logistic equation, which has an analytical solution. Such a quadratic model is justified if it is used to predict the dynamics of the aging process for relatively small time intervals. However, since a well-defined biological interpretation can be given for the quadratic right-hand side of the equations, we can expect that the area of applicability of this simplified empirical model can extend to relatively large time intervals. The considered model is, in our opinion, a good and simple mathematical framework for organizing experimental data. The model may be useful to quantify and arrange the chains of causal relationships that may appear significant for the development of the aging process. |
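The one-variable case mentioned in the record above is the logistic equation dx/dt = r*x*(1 - x/K), which has the closed-form solution x(t) = K*x0*exp(r*t) / (K + x0*(exp(r*t) - 1)). A quick sketch comparing the closed form against a forward-Euler integration (r, K, x0 are illustrative values, not parameters from the model):

```python
import math

# Logistic equation dx/dt = r*x*(1 - x/K): closed form vs. forward Euler.
# r, K, x0 are illustrative values, not parameters from the model above.
def logistic_exact(t, x0=0.1, r=0.5, K=1.0):
    return K * x0 * math.exp(r * t) / (K + x0 * (math.exp(r * t) - 1.0))

def logistic_euler(t_end, x0=0.1, r=0.5, K=1.0, dt=1e-3):
    x = x0
    for _ in range(int(t_end / dt)):
        x += r * x * (1.0 - x / K) * dt
    return x

exact = logistic_exact(10.0)     # sigmoid approaching the carrying capacity K
numeric = logistic_euler(10.0)
```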
2001.00122 | A. Rebei | Adnan Rebei | Entropic Decision Making | 84 pages, 22 figures | null | null | null | q-bio.NC econ.GN q-fin.EC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using results from neurobiology on perceptual decision making and value-based
decision making, the problem of decision making between lotteries is
reformulated in an abstract space where uncertain prospects are mapped to
corresponding active neuronal representations. This mapping allows us to
maximize non-extensive entropy in the new space with some constraints instead
of a utility function. To achieve good agreement with behavioral data, the
constraints must include at least constraints on the weighted average of the
stimulus and on its variance. Both constraints are supported by the
adaptability of neuronal responses to an external stimulus. By analogy with
thermodynamic and information engines, we discuss the dynamics of choice
between two lotteries as they are being processed simultaneously in the brain
by rate equations that describe the transfer of attention between lotteries and
within the various prospects of each lottery. This model is able to give new
insights on risk aversion and on behavioral anomalies not accounted for by
Prospect Theory.
| [
{
"created": "Wed, 1 Jan 2020 01:37:20 GMT",
"version": "v1"
}
] | 2020-01-03 | [
[
"Rebei",
"Adnan",
""
]
] | Using results from neurobiology on perceptual decision making and value-based decision making, the problem of decision making between lotteries is reformulated in an abstract space where uncertain prospects are mapped to corresponding active neuronal representations. This mapping allows us to maximize non-extensive entropy in the new space with some constraints instead of a utility function. To achieve good agreement with behavioral data, the constraints must include at least constraints on the weighted average of the stimulus and on its variance. Both constraints are supported by the adaptability of neuronal responses to an external stimulus. By analogy with thermodynamic and information engines, we discuss the dynamics of choice between two lotteries as they are being processed simultaneously in the brain by rate equations that describe the transfer of attention between lotteries and within the various prospects of each lottery. This model is able to give new insights on risk aversion and on behavioral anomalies not accounted for by Prospect Theory. |
1912.06896 | Hamid Rahkooy | Alexandru Iosif and Hamid Rahkooy | Analysis of the Conradi-Kahle Algorithm for Detecting Binomiality on
Biological Models | null | null | null | null | q-bio.MN cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the Conradi-Kahle Algorithm for detecting binomiality. We present
experiments using two implementations of the algorithm in Macaulay2 and Maple
on biological models and assess the performance of the algorithm on these
models. We compare the two implementations with each other and with Gr\"obner
bases computations with respect to their performance on these biological models.
| [
{
"created": "Sat, 14 Dec 2019 18:11:26 GMT",
"version": "v1"
}
] | 2019-12-17 | [
[
"Iosif",
"Alexandru",
""
],
[
"Rahkooy",
"Hamid",
""
]
] | We analyze the Conradi-Kahle Algorithm for detecting binomiality. We present experiments using two implementations of the algorithm in Macaulay2 and Maple on biological models and assess the performance of the algorithm on these models. We compare the two implementations with each other and with Gr\"obner bases computations with respect to their performance on these biological models. |
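The Gröbner-basis baseline the record above compares against can be sketched in sympy: an ideal is binomial exactly when a reduced Gröbner basis (in any term order) consists of polynomials with at most two terms. This is the brute-force check, not the Conradi-Kahle algorithm itself, and the toy ideals below are illustrative:

```python
from sympy import groebner, symbols

# Brute-force binomiality check via a reduced Groebner basis: an ideal
# is binomial iff every element of a reduced Groebner basis has at most
# two terms. This is the baseline the paper compares against, not the
# Conradi-Kahle algorithm.
x, y, z = symbols("x y z")

def is_binomial_ideal(polys, gens):
    basis = groebner(polys, *gens, order="lex")
    return all(len(p.terms()) <= 2 for p in basis.polys)

binomial = is_binomial_ideal([x**2 - y*z, y**2 - x*z], (x, y, z))  # binomial ideal
not_binomial = is_binomial_ideal([x**2 + x*y + y**2], (x, y))      # three-term generator
```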
1701.01746 | Dongya Jia | Dongya Jia, Mohit Kumar Jolly, Satyendra C. Tripathi, Petra Den
Hollander, Bin Huang, Mingyang Lu, Muge Celiktas, Esmeralda Ramirez-Pe\~na,
Eshel Ben-Jacob, Jos\'e N. Onuchic, Samir M. Hanash, Sendurai A. Mani,
Herbert Levine | Distinguishing Mechanisms Underlying EMT Tristability | null | null | null | null | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The Epithelial-Mesenchymal Transition (EMT) endows
epithelial-looking cells with enhanced migratory ability during embryonic
development and tissue repair. EMT can also be co-opted by cancer cells to
acquire metastatic potential and drug-resistance. Recent research has argued
that epithelial (E) cells can undergo either a partial EMT to attain a hybrid
epithelial/mesenchymal (E/M) phenotype that typically displays collective
migration, or a complete EMT to adopt a mesenchymal (M) phenotype that shows
individual migration. The core EMT regulatory network -
miR-34/SNAIL/miR-200/ZEB1 - has been identified by various studies, but how
this network regulates the transitions among the E, E/M, and M phenotypes
remains controversial. Two major mathematical models - ternary chimera switch
(TCS) and cascading bistable switches (CBS) - that both focus on the
miR-34/SNAIL/miR-200/ZEB1 network, have been proposed to elucidate the EMT
dynamics, but a detailed analysis of how well either or both of these two
models can capture recent experimental observations about EMT dynamics remains
to be done. Results: Here, via an integrated experimental and theoretical
approach, we first show that both these two models can be used to understand
the two-step transition of EMT - E-E/M-M, the different responses of SNAIL and
ZEB1 to exogenous TGF-b and the irreversibility of complete EMT. Next, we
present new experimental results that tend to discriminate between these two
models. We show that ZEB1 is present at intermediate levels in the hybrid E/M
H1975 cells, and that in HMLE cells, overexpression of SNAIL is not sufficient
to initiate EMT in the absence of ZEB1 and FOXC2. Conclusions: These
experimental results argue in favor of the TCS model proposing that
miR-200/ZEB1 behaves as a three-way decision-making switch enabling transitions
among the E, hybrid E/M and M phenotypes.
| [
{
"created": "Fri, 6 Jan 2017 19:57:18 GMT",
"version": "v1"
}
] | 2017-01-10 | [
[
"Jia",
"Dongya",
""
],
[
"Jolly",
"Mohit Kumar",
""
],
[
"Tripathi",
"Satyendra C.",
""
],
[
"Hollander",
"Petra Den",
""
],
[
"Huang",
"Bin",
""
],
[
"Lu",
"Mingyang",
""
],
[
"Celiktas",
"Muge",
""
],
... | Background: The Epithelial-Mesenchymal Transition (EMT) endows epithelial-looking cells with enhanced migratory ability during embryonic development and tissue repair. EMT can also be co-opted by cancer cells to acquire metastatic potential and drug-resistance. Recent research has argued that epithelial (E) cells can undergo either a partial EMT to attain a hybrid epithelial/mesenchymal (E/M) phenotype that typically displays collective migration, or a complete EMT to adopt a mesenchymal (M) phenotype that shows individual migration. The core EMT regulatory network - miR-34/SNAIL/miR-200/ZEB1 - has been identified by various studies, but how this network regulates the transitions among the E, E/M, and M phenotypes remains controversial. Two major mathematical models - ternary chimera switch (TCS) and cascading bistable switches (CBS) - that both focus on the miR-34/SNAIL/miR-200/ZEB1 network, have been proposed to elucidate the EMT dynamics, but a detailed analysis of how well either or both of these two models can capture recent experimental observations about EMT dynamics remains to be done. Results: Here, via an integrated experimental and theoretical approach, we first show that both these two models can be used to understand the two-step transition of EMT - E-E/M-M, the different responses of SNAIL and ZEB1 to exogenous TGF-b and the irreversibility of complete EMT. Next, we present new experimental results that tend to discriminate between these two models. We show that ZEB1 is present at intermediate levels in the hybrid E/M H1975 cells, and that in HMLE cells, overexpression of SNAIL is not sufficient to initiate EMT in the absence of ZEB1 and FOXC2. Conclusions: These experimental results argue in favor of the TCS model proposing that miR-200/ZEB1 behaves as a three-way decision-making switch enabling transitions among the E, hybrid E/M and M phenotypes. |
1606.07269 | Martin Lopez-Garcia | Martin Lopez-Garcia, Maria Nowicka, Claus Bendtsen, Grant Lythe,
Sreenivasan Ponnambalam, Carmen Molina-Paris | Stochastic models of the binding kinetics of VEGF-A to VEGFR1 and VEGFR2
in endothelial cells | null | null | null | null | q-bio.BM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vascular endothelial growth factor receptors (VEGFRs) are receptor tyrosine
kinases (RTKs) that regulate proliferation, migration, angiogenesis and
vascular permeability of endothelial cells. VEGFR1 and VEGFR2 bind vascular
endothelial growth factors (VEGFs), inducing receptor dimerisation and
activation, characterised by phosphorylation of tyrosine residues in their
cytoplasmic domain. Although experimental evidence suggests that RTK signalling
occurs both on the plasma membrane and intra-cellularly, and reveals the role
of endocytosis in RTK signal transduction, we still lack knowledge of VEGFR
phosphorylation-site use and of the spatiotemporal regulation of VEGFR
signalling. In this paper, we introduce four stochastic mathematical models to
study the binding kinetics of vascular endothelial growth factor VEGF-A to
VEGFR1 and VEGFR2, and phosphorylation. The formation of phosphorylated dimers
on the cell surface is a two-step process: diffusive transport and binding. The
first two of our models only consider VEGFR2, which allows us to introduce new
stochastic descriptors making use of a matrix-analytic approach. The two
remaining models describe the competition of VEGFR1 and VEGFR2 for ligand
availability, and are analysed making use of Gillespie simulations and the van
Kampen approximation. Under the hypothesis that bound phosphorylated receptor
dimers are the signalling units, we study the time to reach a threshold number
of such complexes. Our results indicate that the presence of VEGFR1 does not
only affect the timescale to reach a given signalling threshold, but it also
affects the maximum attainable threshold. This result is consistent with the
conjectured role of VEGFR1 as a decoy receptor that prevents VEGF-A binding to
VEGFR2, and thus, VEGFR2 attaining suitable phosphorylation levels. We identify
an optimum range of ligand concentration for sustained dimer phosphorylation.
| [
{
"created": "Thu, 23 Jun 2016 11:14:12 GMT",
"version": "v1"
}
] | 2016-06-24 | [
[
"Lopez-Garcia",
"Martin",
""
],
[
"Nowicka",
"Maria",
""
],
[
"Bendtsen",
"Claus",
""
],
[
"Lythe",
"Grant",
""
],
[
"Ponnambalam",
"Sreenivasan",
""
],
[
"Molina-Paris",
"Carmen",
""
]
] | Vascular endothelial growth factor receptors (VEGFRs) are receptor tyrosine kinases (RTKs) that regulate proliferation, migration, angiogenesis and vascular permeability of endothelial cells. VEGFR1 and VEGFR2 bind vascular endothelial growth factors (VEGFs), inducing receptor dimerisation and activation, characterised by phosphorylation of tyrosine residues in their cytoplasmic domain. Although experimental evidence suggests that RTK signalling occurs both on the plasma membrane and intra-cellularly, and reveals the role of endocytosis in RTK signal transduction, we still lack knowledge of VEGFR phosphorylation-site use and of the spatiotemporal regulation of VEGFR signalling. In this paper, we introduce four stochastic mathematical models to study the binding kinetics of vascular endothelial growth factor VEGF-A to VEGFR1 and VEGFR2, and phosphorylation. The formation of phosphorylated dimers on the cell surface is a two-step process: diffusive transport and binding. The first two of our models only consider VEGFR2, which allows us to introduce new stochastic descriptors making use of a matrix-analytic approach. The two remaining models describe the competition of VEGFR1 and VEGFR2 for ligand availability, and are analysed making use of Gillespie simulations and the van Kampen approximation. Under the hypothesis that bound phosphorylated receptor dimers are the signalling units, we study the time to reach a threshold number of such complexes. Our results indicate that the presence of VEGFR1 not only affects the timescale to reach a given signalling threshold, but also affects the maximum attainable threshold. This result is consistent with the conjectured role of VEGFR1 as a decoy receptor that prevents VEGF-A binding to VEGFR2, and thus, VEGFR2 attaining suitable phosphorylation levels. We identify an optimum range of ligand concentration for sustained dimer phosphorylation.
1701.07861 | Iaroslav Ispolatov | Michael Doebeli and Iaroslav Ispolatov | Diversity and coevolutionary dynamics in high-dimensional phenotype
spaces | 49 pages, 6 figures, and 5 videos, please open pdf with Acrobat to
see the embedded movies | The American Naturalist 2017 189:2, 105-120 | 10.1086/689891 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study macroevolutionary dynamics by extending microevolutionary
competition models to long time scales. It has been shown that for a general
class of competition models, gradual evolutionary change in continuous
phenotypes (evolutionary dynamics) can be non-stationary and even chaotic when
the dimension of the phenotype space in which the evolutionary dynamics unfold
is high. It has also been shown that evolutionary diversification can occur
along non-equilibrium trajectories in phenotype space. We combine these lines
of thinking by studying long-term coevolutionary dynamics of emerging lineages
in multi-dimensional phenotype spaces. We use a statistical approach to
investigate the evolutionary dynamics of many different systems. We find: 1)
for a given dimension of phenotype space, the coevolutionary dynamics tends to
be fast and non-stationary for an intermediate number of coexisting lineages,
but tends to stabilize as the evolving communities reach a saturation level of
diversity; and 2) the amount of diversity at the saturation level increases
rapidly (exponentially) with the dimension of phenotype space. These results
have implications for theoretical perspectives on major macroevolutionary
patterns such as adaptive radiation, long-term temporal patterns of phenotypic
changes, and the evolution of diversity.
| [
{
"created": "Thu, 26 Jan 2017 20:08:18 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Feb 2017 03:18:51 GMT",
"version": "v2"
}
] | 2017-02-14 | [
[
"Doebeli",
"Michael",
""
],
[
"Ispolatov",
"Iaroslav",
""
]
] | We study macroevolutionary dynamics by extending microevolutionary competition models to long time scales. It has been shown that for a general class of competition models, gradual evolutionary change in continuous phenotypes (evolutionary dynamics) can be non-stationary and even chaotic when the dimension of the phenotype space in which the evolutionary dynamics unfold is high. It has also been shown that evolutionary diversification can occur along non-equilibrium trajectories in phenotype space. We combine these lines of thinking by studying long-term coevolutionary dynamics of emerging lineages in multi-dimensional phenotype spaces. We use a statistical approach to investigate the evolutionary dynamics of many different systems. We find: 1) for a given dimension of phenotype space, the coevolutionary dynamics tends to be fast and non-stationary for an intermediate number of coexisting lineages, but tends to stabilize as the evolving communities reach a saturation level of diversity; and 2) the amount of diversity at the saturation level increases rapidly (exponentially) with the dimension of phenotype space. These results have implications for theoretical perspectives on major macroevolutionary patterns such as adaptive radiation, long-term temporal patterns of phenotypic changes, and the evolution of diversity. |
q-bio/0604026 | Johannes Berg | Johannes Berg and Michael L\"assig | Cross-species analysis of biological networks by Bayesian alignment | Published version - new title and figure, some changes to the text.
10 pages, 5 figures. Supporting text is available from the authors | PNAS 103 (29), 10967-10972 (2006) | 10.1073/pnas.0602294103 | null | q-bio.MN cond-mat.dis-nn q-bio.GN | null | Complex interactions between genes or proteins contribute a substantial part
to phenotypic evolution. Here we develop an evolutionarily grounded method for
the cross-species analysis of interaction networks by {\em alignment}, which
maps bona fide functional relationships between genes in different organisms.
Network alignment is based on a scoring function measuring mutual similarities
between networks taking into account their interaction patterns as well as
sequence similarities between their nodes. High-scoring alignments and optimal
alignment parameters are inferred by a systematic Bayesian analysis. We apply
this method to analyze the evolution of co-expression networks between human
and mouse. We find evidence for significant conservation of gene expression
clusters and give network-based predictions of gene function. We discuss
examples where cross-species functional relationships between genes do not
concur with sequence similarity.
| [
{
"created": "Thu, 20 Apr 2006 12:18:25 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Aug 2006 08:31:16 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Berg",
"Johannes",
""
],
[
"Lässig",
"Michael",
""
]
] | Complex interactions between genes or proteins contribute a substantial part to phenotypic evolution. Here we develop an evolutionarily grounded method for the cross-species analysis of interaction networks by {\em alignment}, which maps bona fide functional relationships between genes in different organisms. Network alignment is based on a scoring function measuring mutual similarities between networks taking into account their interaction patterns as well as sequence similarities between their nodes. High-scoring alignments and optimal alignment parameters are inferred by a systematic Bayesian analysis. We apply this method to analyze the evolution of co-expression networks between human and mouse. We find evidence for significant conservation of gene expression clusters and give network-based predictions of gene function. We discuss examples where cross-species functional relationships between genes do not concur with sequence similarity. |
0709.2420 | Dietrich Stauffer | S. Cebrat and D. Stauffer | Gamete recognition and complementary haplotypes in sexual Penna ageing
model | 9 pages including many figures | null | 10.1142/S0129183108012066 | null | q-bio.PE | null | In simulations of sexual reproduction with diploid individuals, we introduce
that female haploid gametes recognize one specific allele of the genomes as a
marker of the male haploid gametes. They fuse to zygotes preferably with male
gametes having a different marker than their own. This gamete recognition
enhances the advantage of complementary bit-strings in the simulated diploid
individuals, at low recombination rates. Thus with rare recombinations the
bit-strings evolve to be complementary; with a recombination rate above about 0.1
instead they evolve under Darwinian purification selection, with few bits
mutated.
| [
{
"created": "Sat, 15 Sep 2007 11:27:54 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Cebrat",
"S.",
""
],
[
"Stauffer",
"D.",
""
]
] | In simulations of sexual reproduction with diploid individuals, we introduce that female haploid gametes recognize one specific allele of the genomes as a marker of the male haploid gametes. They fuse to zygotes preferably with male gametes having a different marker than their own. This gamete recognition enhances the advantage of complementary bit-strings in the simulated diploid individuals, at low recombination rates. Thus with rare recombinations the bit-strings evolve to be complementary; with a recombination rate above about 0.1 instead they evolve under Darwinian purification selection, with few bits mutated.
0908.1603 | Herculano Martinho | Luis Felipe das Chagas e Silva de Carvalho, Renata Andrade Bitar,
Emilia Angela Loschiavo Arisawa, Adriana Aigotti Haberbeck Brandao, Kathia
Maria Honorio, Luiz Antonio Guimaraes Cabral, Airton Abrahao Martin,
Herculano da Silva Martinho, Janete Dias Almeida | Spectral region optimization for Raman-based optical diagnosis of
inflammatory lesions | 21 pages, Submitted to Lasers in Surgery and Medicine | null | null | null | q-bio.TO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | FT-Raman Spectroscopy was applied to identify biochemical alterations
existing between inflammatory fibrous hyperplasia (IFH) and normal tissues of
buccal mucosa. One important implication of this study is related to the cancer
lesion border. In fact, the cancerous-normal border line is characterized by
the presence of inflammation and its correct discrimination would increase the
accuracy in delimiting the lesion frontier. Seventy spectra of IFH from 14
patients were compared to 30 spectra of normal tissue from 6 patients. The
statistical analysis was performed with Principal Components Analysis and Soft
Independent Modeling Class Analogy methodologies. After studying several
spectral ranges it was concluded that the best discrimination capability
(sensitivity of 95% and specificity of 100%) was found using the 530 to 580
cm$^{-1}$ wavenumbers. The bands in this region are related to vibrational
modes of Collagen amino acids Cystine, Cysteine, and Proline and their relevant
contribution to the classification probably relies on the extracellular matrix
degeneration process occurring in the inflammatory tissues.
| [
{
"created": "Wed, 12 Aug 2009 00:59:41 GMT",
"version": "v1"
}
] | 2009-08-13 | [
[
"de Carvalho",
"Luis Felipe das Chagas e Silva",
""
],
[
"Bitar",
"Renata Andrade",
""
],
[
"Arisawa",
"Emilia Angela Loschiavo",
""
],
[
"Brandao",
"Adriana Aigotti Haberbeck",
""
],
[
"Honorio",
"Kathia Maria",
""
],
[
"Cabral",... | FT-Raman Spectroscopy was applied to identify biochemical alterations existing between inflammatory fibrous hyperplasia (IFH) and normal tissues of buccal mucosa. One important implication of this study is related to the cancer lesion border. In fact, the cancerous normal border line is characterized by the presence of inflammation and its correct discrimination would increase the accuracy in delimiting the lesion frontier. Seventy spectra of IFH from 14 patients were compared to 30 spectra of normal tissue from 6 patients. The statistical analysis was performed with Principal Components Analysis and Soft Independent Modeling Class Analogy methodologies. After studying several spectral ranges it was concluded that the best discrimination capability (sensibility of 95% and specificity of 100%) was found using the 530 to 580 cm$^{-1}$ wavenumbers. The bands in this region are related to vibrational modes of Collagen aminoacids Cistine, Cysteine, and Proline and their relevant contribution to the classification probably relies on the extracellular matrix degeneration process occurring in the inflammatory tissues. |
2004.09314 | Mathilde Badoual | G. Nakamura, B. Grammaticos, M. Badoual | Confinement strategies in a simple SIR model | 15 pages, 12 figures | null | 10.1134/S1560354720060015 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple SIR model in order to investigate the impact of various
confinement strategies on a most virulent epidemic. Our approach is motivated
by the current COVID-19 pandemic. The main hypothesis is the existence of two
populations of susceptible persons, one which obeys confinement and for which
the infection rate does not exceed 1, and a population which, being non
confined for various imperatives, can be substantially more infective. The
model, initially formulated as a differential system, is discretised following
a specific procedure, the discrete system serving as an integrator for the
differential one. Our model is calibrated so as to correspond to what is
observed in the COVID-19 epidemic.
Several conclusions can be reached, despite the very simple structure of our
model. First, it is not possible to pinpoint the genesis of the epidemic by
just analysing data from when the epidemic is in full swing. It may well turn
out that the epidemic has reached a sizeable part of the world months before it
became noticeable. Concerning the confinement scenarios, a universal feature of
all our simulations is that relaxing the lockdown constraints leads to a
rekindling of the epidemic. Thus we sought the conditions for the second
epidemic peak to be lower than the first one. This is possible in all the
scenarios considered (abrupt, progressive or stepwise exit) but typically a
progressive exit can start earlier than an abrupt one. However, by the time the
progressive exit is complete, the overall confinement times are not too
different. From our results, the most promising strategy is that of a stepwise
exit. And in fact its implementation could be quite feasible, with the major
part of the population (minus the fragile groups) exiting simultaneously but
obeying rigorous distancing constraints.
| [
{
"created": "Mon, 20 Apr 2020 14:06:43 GMT",
"version": "v1"
}
] | 2020-12-30 | [
[
"Nakamura",
"G.",
""
],
[
"Grammaticos",
"B.",
""
],
[
"Badoual",
"M.",
""
]
] | We propose a simple SIR model in order to investigate the impact of various confinement strategies on a most virulent epidemic. Our approach is motivated by the current COVID-19 pandemic. The main hypothesis is the existence of two populations of susceptible persons, one which obeys confinement and for which the infection rate does not exceed 1, and a population which, being non confined for various imperatives, can be substantially more infective. The model, initially formulated as a differential system, is discretised following a specific procedure, the discrete system serving as an integrator for the differential one. Our model is calibrated so as to correspond to what is observed in the COVID-19 epidemic. Several conclusions can be reached, despite the very simple structure of our model. First, it is not possible to pinpoint the genesis of the epidemic by just analysing data from when the epidemic is in full swing. It may well turn out that the epidemic has reached a sizeable part of the world months before it became noticeable. Concerning the confinement scenarios, a universal feature of all our simulations is that relaxing the lockdown constraints leads to a rekindling of the epidemic. Thus we sought the conditions for the second epidemic peak to be lower than the first one. This is possible in all the scenarios considered (abrupt, progressive or stepwise exit) but typically a progressive exit can start earlier than an abrupt one. However, by the time the progressive exit is complete, the overall confinement times are not too different. From our results, the most promising strategy is that of a stepwise exit. And in fact its implementation could be quite feasible, with the major part of the population (minus the fragile groups) exiting simultaneously but obeying rigorous distancing constraints. |
1606.03884 | Arnab Barua | Arnab Barua | A Mathematical Model of Cell Reprogramming due to Intermediate
Differential Regulator's Regulations | null | null | null | null | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper I have given a mathematical model of Cell reprogramming from a
different context. Here I considered that there is a delay in differential
regulator rate equations due to intermediate regulator's regulations. At first
I gave some basic mathematical models of reprogramming by Ferell Jr.[2], and
after that I gave a mathematical model of cell reprogramming by Mithun Mitra[4].
In the last section I contributed a mathematical model of cell reprogramming
from intermediate-step regulations and tried to find the critical point of
the pluripotent cell.
| [
{
"created": "Mon, 13 Jun 2016 10:19:56 GMT",
"version": "v1"
}
] | 2016-06-14 | [
[
"Barua",
"Arnab",
""
]
] | In this paper I have given a mathematical model of Cell reprogramming from a different context. Here I considered that there is a delay in differential regulator rate equations due to intermediate regulator's regulations. At first I gave some basic mathematical models of reprogramming by Ferell Jr.[2], and after that I gave a mathematical model of cell reprogramming by Mithun Mitra[4]. In the last section I contributed a mathematical model of cell reprogramming from intermediate-step regulations and tried to find the critical point of the pluripotent cell.
2312.05963 | William Bialek | Lauren McGough, Helena Casademunt, Milo\v{s} Nikoli\'c, Mariela D.
Petkova, Thomas Gregor, and William Bialek | Finding the last bits of positional information | null | null | null | null | q-bio.MN physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | In a developing embryo, information about the position of cells is encoded in
the concentrations of "morphogen" molecules. In the fruit fly, the local
concentrations of just a handful of proteins encoded by the gap genes are
sufficient to specify position with a precision comparable to the spacing
between cells along the anterior--posterior axis. This matches the precision of
downstream events such as the striped patterns of expression in the pair-rule
genes, but is not quite sufficient to define unique identities for individual
cells. We demonstrate theoretically that this information gap can be bridged if
positional errors are spatially correlated, with relatively long correlation
lengths. We then show experimentally that these correlations are present, with
the required strength, in the fluctuating positions of the pair-rule stripes,
and this can be traced back to the gap genes. Taking account of these
correlations, the available information matches the information needed for
unique cellular specification, within error bars of ~2%. These observations
support a precisionist view of information flow through the underlying genetic
networks, in which accurate signals are available from the start and preserved
as they are transformed into the final spatial patterns.
| [
{
"created": "Sun, 10 Dec 2023 18:29:45 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"McGough",
"Lauren",
""
],
[
"Casademunt",
"Helena",
""
],
[
"Nikolić",
"Miloš",
""
],
[
"Petkova",
"Mariela D.",
""
],
[
"Gregor",
"Thomas",
""
],
[
"Bialek",
"William",
""
]
] | In a developing embryo, information about the position of cells is encoded in the concentrations of "morphogen" molecules. In the fruit fly, the local concentrations of just a handful of proteins encoded by the gap genes are sufficient to specify position with a precision comparable to the spacing between cells along the anterior--posterior axis. This matches the precision of downstream events such as the striped patterns of expression in the pair-rule genes, but is not quite sufficient to define unique identities for individual cells. We demonstrate theoretically that this information gap can be bridged if positional errors are spatially correlated, with relatively long correlation lengths. We then show experimentally that these correlations are present, with the required strength, in the fluctuating positions of the pair-rule stripes, and this can be traced back to the gap genes. Taking account of these correlations, the available information matches the information needed for unique cellular specification, within error bars of ~2%. These observations support a precisionist view of information flow through the underlying genetic networks, in which accurate signals are available from the start and preserved as they are transformed into the final spatial patterns.
1402.4579 | Venkatakrishnan Ramaswamy | Venkatakrishnan Ramaswamy, Arunava Banerjee | Connectomic Constraints on Computation in Feedforward Networks of
Spiking Neurons | Accepted at the Journal of Computational Neuroscience | null | 10.1007/s10827-014-0497-5 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several efforts are currently underway to decipher the connectome or parts
thereof in a variety of organisms. Ascertaining the detailed physiological
properties of all the neurons in these connectomes, however, is out of the
scope of such projects. It is therefore unclear to what extent knowledge of the
connectome alone will advance a mechanistic understanding of computation
occurring in these neural circuits, especially when the high-level function of
the said circuit is unknown. We consider, here, the question of how the wiring
diagram of neurons imposes constraints on what neural circuits can compute,
when we cannot assume detailed information on the physiological response
properties of the neurons. We call such constraints -- that arise by virtue of
the connectome -- connectomic constraints on computation. For feedforward
networks equipped with neurons that obey a deterministic spiking neuron model
which satisfies a small number of properties, we ask if just by knowing the
architecture of a network, we can rule out computations that it could be doing,
no matter what response properties each of its neurons may have. We show
results of this form, for certain classes of network architectures. On the
other hand, we also prove that with the limited set of properties assumed for
our model neurons, there are fundamental limits to the constraints imposed by
network structure. Thus, our theory suggests that while connectomic constraints
might restrict the computational ability of certain classes of network
architectures, we may require more elaborate information on the properties of
neurons in the network, before we can discern such results for other classes of
networks.
| [
{
"created": "Wed, 19 Feb 2014 08:07:40 GMT",
"version": "v1"
}
] | 2014-04-03 | [
[
"Ramaswamy",
"Venkatakrishnan",
""
],
[
"Banerjee",
"Arunava",
""
]
] | Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints -- that arise by virtue of the connectome -- connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model which satisfies a small number of properties, we ask if just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form, for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network, before we can discern such results for other classes of networks. |
1810.07272 | David Holcman | Matteo Dora, David Holcman | Active unidirectional network flow generates a packet molecular
transport in cells | 2 figs | null | null | null | q-bio.SC cond-mat.other physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet, social media, neuronal circuits and blood vessels are organized in complex
networks. These networks are characterized by several quantities such as the
underlying graph connectivity (topology), how they grow in time, scaling laws
or by the mean time a random search can cover a network and also by shortest
path between two nodes. We present here a novel type of network property based
on a unidirectional transport mechanism occurring in the Endoplasmic Reticulum
network found in the cell cytoplasm. This mechanism is an active-waiting
transportation, where molecules have to wait a random time before being
transported from one node to the next one. We find that the consequence of this
unusual network transportation is that molecules travel together by recurrent
packets, which quite a large deviation behavior compared to classical
propagation in graphs. To conclude, this form of transportation is associated
with an efficient and robust molecular redistribution inside cells.
| [
{
"created": "Tue, 16 Oct 2018 20:58:51 GMT",
"version": "v1"
}
] | 2018-10-18 | [
[
"Dora",
"Matteo",
""
],
[
"Holcman",
"David",
""
]
] | The Internet, social media, neuronal circuits and blood vessels are organized in complex networks. These networks are characterized by several quantities such as the underlying graph connectivity (topology), how they grow in time, scaling laws or by the mean time a random search can cover a network and also by shortest path between two nodes. We present here a novel type of network property based on a unidirectional transport mechanism occurring in the Endoplasmic Reticulum network found in the cell cytoplasm. This mechanism is an active-waiting transportation, where molecules have to wait a random time before being transported from one node to the next one. We find that the consequence of this unusual network transportation is that molecules travel together by recurrent packets, which is quite a large deviation in behavior compared to classical propagation in graphs. To conclude, this form of transportation is associated with an efficient and robust molecular redistribution inside cells.
2102.06629 | Karthik Raman | Sahana Gangadharan and Karthik Raman | The art of molecular computing: whence and whither | 18 pages, 1 figure, 1 table, Supplementary information: 3 pages, 2
supplementary figures, 1 supplementary table | null | null | null | q-bio.BM q-bio.MN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An astonishingly diverse biomolecular circuitry orchestrates the functioning
machinery underlying every living cell. These biomolecules and their circuits
have been engineered not only for various industrial applications but also to
perform other atypical functions that they were not evolved for - including
computation. Various kinds of computational challenges, such as solving
NP-complete problems with many variables, logical computation, neural network
operations, and cryptography, have all been attempted through this
unconventional computing paradigm. In this review, we highlight key experiments
across three different eras of molecular computation, beginning with molecular
solutions, transitioning to logic circuits and ultimately, more complex
molecular networks. We also discuss a variety of applications of molecular
computation, from solving NP-hard problems to self-assembled nanostructures for
delivering molecules, and provide a glimpse into the exciting potential that
molecular computing holds for the future.
| [
{
"created": "Fri, 12 Feb 2021 17:15:18 GMT",
"version": "v1"
}
] | 2021-02-15 | [
[
"Gangadharan",
"Sahana",
""
],
[
"Raman",
"Karthik",
""
]
] | An astonishingly diverse biomolecular circuitry orchestrates the functioning machinery underlying every living cell. These biomolecules and their circuits have been engineered not only for various industrial applications but also to perform other atypical functions that they were not evolved for - including computation. Various kinds of computational challenges, such as solving NP-complete problems with many variables, logical computation, neural network operations, and cryptography, have all been attempted through this unconventional computing paradigm. In this review, we highlight key experiments across three different eras of molecular computation, beginning with molecular solutions, transitioning to logic circuits and ultimately, more complex molecular networks. We also discuss a variety of applications of molecular computation, from solving NP-hard problems to self-assembled nanostructures for delivering molecules, and provide a glimpse into the exciting potential that molecular computing holds for the future. |
1011.4080 | Edward Baskerville | Edward B. Baskerville, Andy P. Dobson, Trevor Bedford, Stefano
Allesina, Mercedes Pascual | Spatial Guilds in the Serengeti Food Web Revealed by a Bayesian Group
Model | 28 pages, 6 figures (+ 3 supporting), 2 tables (+ 4 supporting) | null | 10.1371/journal.pcbi.1002321 | null | q-bio.PE | http://creativecommons.org/licenses/by/3.0/ | Food webs, networks of feeding relationships among organisms, provide
fundamental insights into mechanisms that determine ecosystem stability and
persistence. Despite long-standing interest in the compartmental structure of
food webs, past network analyses of food webs have been constrained by a
standard definition of compartments, or modules, that requires many links
within compartments and few links between them. Empirical analyses have been
further limited by low-resolution data for primary producers. In this paper, we
present a Bayesian computational method for identifying group structure in food
webs using a flexible definition of a group that can describe both functional
roles and standard compartments. The Serengeti ecosystem provides an
opportunity to examine structure in a newly compiled food web that includes
species-level resolution among plants, allowing us to address whether groups in
the food web correspond to tightly-connected compartments or functional groups,
and whether network structure reflects spatial or trophic organization, or a
combination of the two. We have compiled the major mammalian and plant
components of the Serengeti food web from published literature, and we infer
its group structure using our method. We find that network structure
corresponds to spatially distinct plant groups coupled at higher trophic levels
by groups of herbivores, which are in turn coupled by carnivore groups. Thus
the group structure of the Serengeti web represents a mixture of trophic guild
structure and spatial patterns, in contrast to the standard compartments
typically identified in ecological networks. From data consisting only of nodes
and links, the group structure that emerges supports recent ideas on spatial
coupling and energy channels in ecosystems that have been proposed as important
for persistence.
| [
{
"created": "Wed, 17 Nov 2010 21:23:29 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jan 2011 22:06:02 GMT",
"version": "v2"
}
] | 2013-06-18 | [
[
"Baskerville",
"Edward B.",
""
],
[
"Dobson",
"Andy P.",
""
],
[
"Bedford",
"Trevor",
""
],
[
"Allesina",
"Stefano",
""
],
[
"Pascual",
"Mercedes",
""
]
] | Food webs, networks of feeding relationships among organisms, provide fundamental insights into mechanisms that determine ecosystem stability and persistence. Despite long-standing interest in the compartmental structure of food webs, past network analyses of food webs have been constrained by a standard definition of compartments, or modules, that requires many links within compartments and few links between them. Empirical analyses have been further limited by low-resolution data for primary producers. In this paper, we present a Bayesian computational method for identifying group structure in food webs using a flexible definition of a group that can describe both functional roles and standard compartments. The Serengeti ecosystem provides an opportunity to examine structure in a newly compiled food web that includes species-level resolution among plants, allowing us to address whether groups in the food web correspond to tightly-connected compartments or functional groups, and whether network structure reflects spatial or trophic organization, or a combination of the two. We have compiled the major mammalian and plant components of the Serengeti food web from published literature, and we infer its group structure using our method. We find that network structure corresponds to spatially distinct plant groups coupled at higher trophic levels by groups of herbivores, which are in turn coupled by carnivore groups. Thus the group structure of the Serengeti web represents a mixture of trophic guild structure and spatial patterns, in contrast to the standard compartments typically identified in ecological networks. From data consisting only of nodes and links, the group structure that emerges supports recent ideas on spatial coupling and energy channels in ecosystems that have been proposed as important for persistence. |
1312.6879 | Min Yue | Min Yue, Huanchun Chen | Zoonoses Frontier: Veterinarian, Producer, Processor and Beyond | 16 pages with 1 figure. science 2014 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As many emerging and re-emerging infectious diseases are associated with food
animals, the relationship between available healthy food sources and population
health and social stability has become evident. A recent example of the
importance of this relationship was observed during the current flu pandemic.
This recent pandemic brought attention to novel target groups of susceptible
people at the interface of the animal and human populations. Veterinarians,
producers and processors are uniquely exposed to emerging zoonoses. Therefore
these individuals may serve as key sentinels and allow efficient evaluation of
the effectiveness of zoonoses prophylaxis and control, including evaluation of
the cost-effectiveness in the broader view. We also suggest some valuable
approaches for rapid diagnosis of emerging and re-emerging infectious diseases
and supportive systemic research which may address related ethical questions.
We also highly recommend more research investigations characterizing this
human/animal zoonosis interface, a potentially productive target for emerging
disease diagnosis and control.
| [
{
"created": "Tue, 24 Dec 2013 19:42:25 GMT",
"version": "v1"
}
] | 2013-12-25 | [
[
"Yue",
"Min",
""
],
[
"Chen",
"Huanchun",
""
]
] | As many emerging and re-emerging infectious diseases are associated with food animals, the relationship between available healthy food sources and population health and social stability has become evident. A recent example of the importance of this relationship was observed during the current flu pandemic. This recent pandemic brought attention to novel target groups of susceptible people at the interface of the animal and human populations. Veterinarians, producers and processors are uniquely exposed to emerging zoonoses. Therefore these individuals may serve as key sentinels and allow efficient evaluation of the effectiveness of zoonoses prophylaxis and control, including evaluation of the cost-effectiveness in the broader view. We also suggest some valuable approaches for rapid diagnosis of emerging and re-emerging infectious diseases and supportive systemic research which may address related ethical questions. We also highly recommend more research investigations characterizing this human/animal zoonosis interface, a potentially productive target for emerging disease diagnosis and control. |
2304.10267 | Genki Kanda | Takashi Inagaki, Akari Kato, Koichi Takahashi, Haruka Ozaki, Genki N.
Kanda | LLMs can generate robotic scripts from goal-oriented instructions in
biological laboratory automation | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of laboratory automation by all researchers may substantially
accelerate scientific activities by humans, including those in the life
sciences. However, computer programs to operate robots should be written to
implement laboratory automation, which requires technical knowledge and skills
that may not be part of a researcher's training or expertise. In the last few
years, there has been remarkable development in large language models (LLMs)
such as GPT-4, which can generate computer codes based on natural language
instructions. In this study, we used LLMs, including GPT-4, to generate scripts
for robot operations in biological experiments based on ambiguous instructions.
GPT-4 successfully generates scripts for OT-2, an automated liquid-handling
robot, from simple instructions in natural language without specifying the
robotic actions. Conventionally, translating the nuances of biological
experiments into low-level robot actions requires researchers to understand
both biology and robotics, imagine robot actions, and write robotic scripts.
Our results showed that GPT-4 can connect the context of biological experiments
with robot operation through simple prompts with expert-level contextual
understanding and inherent knowledge. Replacing robot script programming, which
is a tedious task for biological researchers, with natural-language LLM
instructions that do not consider robot behavior significantly increases the
number of researchers who can benefit from automating biological experiments.
| [
{
"created": "Tue, 18 Apr 2023 09:15:37 GMT",
"version": "v1"
}
] | 2023-04-21 | [
[
"Inagaki",
"Takashi",
""
],
[
"Kato",
"Akari",
""
],
[
"Takahashi",
"Koichi",
""
],
[
"Ozaki",
"Haruka",
""
],
[
"Kanda",
"Genki N.",
""
]
] | The use of laboratory automation by all researchers may substantially accelerate scientific activities by humans, including those in the life sciences. However, computer programs to operate robots should be written to implement laboratory automation, which requires technical knowledge and skills that may not be part of a researcher's training or expertise. In the last few years, there has been remarkable development in large language models (LLMs) such as GPT-4, which can generate computer codes based on natural language instructions. In this study, we used LLMs, including GPT-4, to generate scripts for robot operations in biological experiments based on ambiguous instructions. GPT-4 successfully generates scripts for OT-2, an automated liquid-handling robot, from simple instructions in natural language without specifying the robotic actions. Conventionally, translating the nuances of biological experiments into low-level robot actions requires researchers to understand both biology and robotics, imagine robot actions, and write robotic scripts. Our results showed that GPT-4 can connect the context of biological experiments with robot operation through simple prompts with expert-level contextual understanding and inherent knowledge. Replacing robot script programming, which is a tedious task for biological researchers, with natural-language LLM instructions that do not consider robot behavior significantly increases the number of researchers who can benefit from automating biological experiments. |
1809.04412 | Yasser A. Ahmed | Yasser A. Ahmed | Chondrocyte Heterogeneity; It Is the Time to Update the Understanding of
Cartilage Histology | 3 pages, 2 figures | SVU-International Journal of Veterinary Sciences (2018) | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Chondrocytes were described as a single cell population in most cartilage
literature. Two different chondrocyte populations, dark and light, were
described in the articular cartilage, and a third population, adipochondrocytes,
was described in the elastic cartilage. The current literature on cartilage
histology should be updated to highlight that three different populations of
chondrocytes exist in cartilage.
| [
{
"created": "Sat, 11 Aug 2018 18:13:59 GMT",
"version": "v1"
}
] | 2018-09-13 | [
[
"Ahmed",
"Yasser A.",
""
]
] | Chondrocytes were described as a single cell population in most cartilage literature. Two different chondrocyte populations, dark and light, were described in the articular cartilage, and a third population, adipochondrocytes, was described in the elastic cartilage. The current literature on cartilage histology should be updated to highlight that three different populations of chondrocytes exist in cartilage. |
1803.06132 | Jason Olejarz | Jason Olejarz, Kamran Kaveh, Carl Veller, Martin A. Nowak | Selection for synchronized cell division in simple multicellular
organisms | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of multicellularity was a major transition in the history of
life on earth. Conditions under which multicellularity is favored have been
studied theoretically and experimentally. But since the construction of a
multicellular organism requires multiple rounds of cell division, a natural
question is whether these cell divisions should be synchronous or not. We study
a simple population model in which there compete simple multicellular organisms
that grow either by synchronous or asynchronous cell divisions. We demonstrate
that natural selection can act differently on synchronous and asynchronous cell
division, and we offer intuition for why these phenotypes are generally not
neutral variants of each other.
| [
{
"created": "Fri, 16 Mar 2018 09:51:32 GMT",
"version": "v1"
}
] | 2018-03-19 | [
[
"Olejarz",
"Jason",
""
],
[
"Kaveh",
"Kamran",
""
],
[
"Veller",
"Carl",
""
],
[
"Nowak",
"Martin A.",
""
]
] | The evolution of multicellularity was a major transition in the history of life on earth. Conditions under which multicellularity is favored have been studied theoretically and experimentally. But since the construction of a multicellular organism requires multiple rounds of cell division, a natural question is whether these cell divisions should be synchronous or not. We study a simple population model in which there compete simple multicellular organisms that grow either by synchronous or asynchronous cell divisions. We demonstrate that natural selection can act differently on synchronous and asynchronous cell division, and we offer intuition for why these phenotypes are generally not neutral variants of each other. |
2304.09451 | Stuart Johnston | Stuart T. Johnston, Matthew J. Simpson | Exact solutions for diffusive transport on heterogeneous growing domains | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | From the smallest biological systems to the largest cosmological structures,
spatial domains undergo expansion and contraction. Within these growing
domains, diffusive transport is a common phenomenon. Mathematical models have
been widely employed to investigate diffusive processes on growing domains.
However, a standard assumption is that the domain growth is spatially uniform.
There are many relevant examples where this is not the case, such as the
colonisation of growing gut tissue by neural crest cells. As such, it is not
straightforward to disentangle the individual roles of heterogeneous growth and
diffusive transport. Here we present exact solutions to models of diffusive
transport on domains undergoing spatially non-uniform growth. The exact
solutions are obtained via a combination of transformation, convolution and
superposition techniques. We verify the accuracy of these solutions via
comparison with simulations of a corresponding lattice-based random walk. We
explore various domain growth functions, including linear growth, exponential
growth and contraction, and oscillatory growth. Provided the domain size
remains positive, we find that the derived solutions are valid. The exact
solutions reveal the relationship between model parameters, such as the
diffusivity and the type and rate of domain growth, and key statistics, such as
the survival and splitting probabilities.
| [
{
"created": "Wed, 19 Apr 2023 06:46:28 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jun 2023 06:44:24 GMT",
"version": "v2"
}
] | 2023-06-27 | [
[
"Johnston",
"Stuart T.",
""
],
[
"Simpson",
"Matthew J.",
""
]
] | From the smallest biological systems to the largest cosmological structures, spatial domains undergo expansion and contraction. Within these growing domains, diffusive transport is a common phenomenon. Mathematical models have been widely employed to investigate diffusive processes on growing domains. However, a standard assumption is that the domain growth is spatially uniform. There are many relevant examples where this is not the case, such as the colonisation of growing gut tissue by neural crest cells. As such, it is not straightforward to disentangle the individual roles of heterogeneous growth and diffusive transport. Here we present exact solutions to models of diffusive transport on domains undergoing spatially non-uniform growth. The exact solutions are obtained via a combination of transformation, convolution and superposition techniques. We verify the accuracy of these solutions via comparison with simulations of a corresponding lattice-based random walk. We explore various domain growth functions, including linear growth, exponential growth and contraction, and oscillatory growth. Provided the domain size remains positive, we find that the derived solutions are valid. The exact solutions reveal the relationship between model parameters, such as the diffusivity and the type and rate of domain growth, and key statistics, such as the survival and splitting probabilities. |
0707.4244 | David Rabson | D.C. Lovelady, T.C. Richmond, A.N. Maggi, C.-M. Lo, D.A. Rabson | Distinguishing cancerous from non-cancerous cells through analysis of
electrical noise | 8 pages, 4 figures; submitted to PRE | null | 10.1103/PhysRevE.76.041908 | null | q-bio.CB q-bio.QM | null | Since 1984, electric cell-substrate impedance sensing (ECIS) has been used to
monitor cell behavior in tissue culture and has proven sensitive to cell
morphological changes and cell motility. We have taken ECIS measurements on
several cultures of non-cancerous (HOSE) and cancerous (SKOV) human ovarian
surface epithelial cells. By analyzing the noise in real and imaginary
electrical impedance, we demonstrate that it is possible to distinguish the two
cell types purely from signatures of their electrical noise. Our measures
include power-spectral exponents, Hurst and detrended fluctuation analysis, and
estimates of correlation time; principal-component analysis combines all the
measures. The noise from both cancerous and non-cancerous cultures shows
correlations on many time scales, but these correlations are stronger for the
non-cancerous cells.
| [
{
"created": "Sat, 28 Jul 2007 15:32:19 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Lovelady",
"D. C.",
""
],
[
"Richmond",
"T. C.",
""
],
[
"Maggi",
"A. N.",
""
],
[
"Lo",
"C. -M.",
""
],
[
"Rabson",
"D. A.",
""
]
] | Since 1984, electric cell-substrate impedance sensing (ECIS) has been used to monitor cell behavior in tissue culture and has proven sensitive to cell morphological changes and cell motility. We have taken ECIS measurements on several cultures of non-cancerous (HOSE) and cancerous (SKOV) human ovarian surface epithelial cells. By analyzing the noise in real and imaginary electrical impedance, we demonstrate that it is possible to distinguish the two cell types purely from signatures of their electrical noise. Our measures include power-spectral exponents, Hurst and detrended fluctuation analysis, and estimates of correlation time; principal-component analysis combines all the measures. The noise from both cancerous and non-cancerous cultures shows correlations on many time scales, but these correlations are stronger for the non-cancerous cells. |
1411.6484 | Josep Sardanyes | Josep Sardany\'es | Viral RNA replication modes: evolutionary and dynamical implications | Trends in Mathematics 2: 1-4 In: Emergence, Spread and Control of
Infectious Diseases. (Research Perspectives CRM Barcelona - Springer) 2014 | null | null | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Viruses can amplify their genomes following different replication modes (RMs)
ranging from the stamping machine replication (SMR) model to the geometric
replication (GR) model. Different RMs are expected to produce different
evolutionary and dynamical outcomes in viral quasispecies due to differences in
the mutation accumulation rate. Theoretical and computational models revealed
that while SMR may provide RNA viruses with mutational robustness, GR may
confer a dynamical advantage against genome degradation. Here, recent advances
in the investigation of the RM in positive-sense single-stranded RNA viruses
are reviewed. Dynamical experimental quantification of Turnip mosaic virus RNA
strands, together with a nonlinear mathematical model, indicated the SMR model
for this pathogen. The same mathematical model for natural infections is here
further analyzed, and we prove that the interior equilibrium involving
coexistence of both positive and negative viral strands is globally
asymptotically stable.
| [
{
"created": "Mon, 24 Nov 2014 15:25:01 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Nov 2014 14:49:31 GMT",
"version": "v2"
}
] | 2014-11-27 | [
[
"Sardanyés",
"Josep",
""
]
] | Viruses can amplify their genomes following different replication modes (RMs) ranging from the stamping machine replication (SMR) model to the geometric replication (GR) model. Different RMs are expected to produce different evolutionary and dynamical outcomes in viral quasispecies due to differences in the mutation accumulation rate. Theoretical and computational models revealed that while SMR may provide RNA viruses with mutational robustness, GR may confer a dynamical advantage against genome degradation. Here, recent advances in the investigation of the RM in positive-sense single-stranded RNA viruses are reviewed. Dynamical experimental quantification of Turnip mosaic virus RNA strands, together with a nonlinear mathematical model, indicated the SMR model for this pathogen. The same mathematical model for natural infections is here further analyzed, and we prove that the interior equilibrium involving coexistence of both positive and negative viral strands is globally asymptotically stable. |
1608.06971 | Himadri Samanta | Himadri S. Samanta, Pavel I. Zhuravlev, Michael Hinczewski, Naoto
Hori, Shaon Chakrabarti, and D. Thirumalai | Protein Collapse is Encoded in the Folded State Architecture | 8 figures | null | null | null | q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural protein sequences that self-assemble to form globular structures are
compact with high packing densities in the folded states. It is known that
proteins unfold upon addition of denaturants, adopting random coil structures.
The dependence of the radii of gyration on protein size in the folded and
unfolded states obeys the same scaling laws as synthetic polymers. Thus, one
might surmise that the mechanism of collapse in proteins and polymers ought to
be similar. However, because the number of amino acids in single domain
proteins is not significantly greater than about two hundred, it has not been
resolved if the unfolded states of proteins are compact under conditions that
favor the folded states - a problem at the heart of how proteins fold. By
adopting a theory used to derive polymer-scaling laws, we find that the
propensity for the unfolded state of a protein to be compact is universal and
is encoded in the contact map of the folded state. Remarkably, analysis of over
2000 proteins shows that proteins rich in $\beta$-sheets have a greater tendency
to be compact than $\alpha$-helical proteins. The theory provides insights into
the reasons for the small size of single domain proteins and the physical basis
for the origin of multi-domain proteins. Application to non-coding RNA
molecules shows that they have evolved to collapse, sharing similarities with
$\beta$-sheet proteins. An implication of our theory is that the evolution of
natural foldable sequences is guided by the requirement that for efficient
folding they should populate minimum energy compact states under folding
conditions. This concept also supports the compaction selection hypothesis used
to rationalize the unusually condensed states of viral RNA molecules.
| [
{
"created": "Wed, 24 Aug 2016 21:23:02 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Dec 2016 20:26:13 GMT",
"version": "v2"
}
] | 2016-12-02 | [
[
"Samanta",
"Himadri S.",
""
],
[
"Zhuravlev",
"Pavel I.",
""
],
[
"Hinczewski",
"Michael",
""
],
[
"Hori",
"Naoto",
""
],
[
"Chakrabarti",
"Shaon",
""
],
[
"Thirumalai",
"D.",
""
]
] | Natural protein sequences that self-assemble to form globular structures are compact with high packing densities in the folded states. It is known that proteins unfold upon addition of denaturants, adopting random coil structures. The dependence of the radii of gyration on protein size in the folded and unfolded states obeys the same scaling laws as synthetic polymers. Thus, one might surmise that the mechanism of collapse in proteins and polymers ought to be similar. However, because the number of amino acids in single domain proteins is not significantly greater than about two hundred, it has not been resolved if the unfolded states of proteins are compact under conditions that favor the folded states - a problem at the heart of how proteins fold. By adopting a theory used to derive polymer-scaling laws, we find that the propensity for the unfolded state of a protein to be compact is universal and is encoded in the contact map of the folded state. Remarkably, analysis of over 2000 proteins shows that proteins rich in $\beta$-sheets have a greater tendency to be compact than $\alpha$-helical proteins. The theory provides insights into the reasons for the small size of single domain proteins and the physical basis for the origin of multi-domain proteins. Application to non-coding RNA molecules shows that they have evolved to collapse, sharing similarities with $\beta$-sheet proteins. An implication of our theory is that the evolution of natural foldable sequences is guided by the requirement that for efficient folding they should populate minimum energy compact states under folding conditions. This concept also supports the compaction selection hypothesis used to rationalize the unusually condensed states of viral RNA molecules. |
1912.05890 | Matteo Smerlak | Matteo Smerlak | Effective potential reveals evolutionary trajectories in complex fitness
landscapes | Cross-posted on bioRxiv | null | 10.1101/869883 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Growing efforts to measure fitness landscapes in molecular and microbial
systems are premised on a tight relationship between landscape topography and
evolutionary trajectories. This relationship, however, is far from being
straightforward: depending on their mutation rate, Darwinian populations can
climb the closest fitness peak (survival of the fittest), settle in lower
regions with higher mutational robustness (survival of the flattest), or fail
to adapt altogether (error catastrophes). These bifurcations highlight that
evolution does not necessarily drive populations "from lower peak to higher
peak", as Wright imagined. The problem therefore remains: how exactly does a
complex landscape topography constrain evolution, and can we predict where it
will go next? Here I introduce a generalization of quasispecies theory which
identifies metastable evolutionary states as minima of an effective potential.
From this representation I derive a coarse-grained, Markov state model of
evolution, which in turn forms a basis for evolutionary predictions across a
wide range of mutation rates. Because the effective potential is related to the
ground state of a quantum Hamiltonian, my approach could stimulate fruitful
interactions between evolutionary dynamics and quantum many-body theory.
| [
{
"created": "Thu, 12 Dec 2019 11:46:00 GMT",
"version": "v1"
}
] | 2019-12-13 | [
[
"Smerlak",
"Matteo",
""
]
] | Growing efforts to measure fitness landscapes in molecular and microbial systems are premised on a tight relationship between landscape topography and evolutionary trajectories. This relationship, however, is far from being straightforward: depending on their mutation rate, Darwinian populations can climb the closest fitness peak (survival of the fittest), settle in lower regions with higher mutational robustness (survival of the flattest), or fail to adapt altogether (error catastrophes). These bifurcations highlight that evolution does not necessarily drive populations "from lower peak to higher peak", as Wright imagined. The problem therefore remains: how exactly does a complex landscape topography constrain evolution, and can we predict where it will go next? Here I introduce a generalization of quasispecies theory which identifies metastable evolutionary states as minima of an effective potential. From this representation I derive a coarse-grained, Markov state model of evolution, which in turn forms a basis for evolutionary predictions across a wide range of mutation rates. Because the effective potential is related to the ground state of a quantum Hamiltonian, my approach could stimulate fruitful interactions between evolutionary dynamics and quantum many-body theory. |
0712.4332 | Peter Waddell | Peter J Waddell | Comparing a Menagerie of Models for Estimating Molecular Divergence
Times | null | null | null | null | q-bio.GN q-bio.PE | null | Estimation of molecular evolutionary divergence times requires models of rate
change. These vary with regard to the assumption of what quantity is penalized.
The possibilities considered are the rate of evolution, the log of the rate of
evolution and the inverse of the rate of evolution. These models also vary with
regard to how time affects the expected variance of rate change. Here the
alternatives are not at all, linearly with time and as the product of rate and
time. This results in a set of nine models, both random walks and Brownian
motion. A priori any of these models could be correct, yet different
researchers may well prefer, or simply use, one rather than the others. Another
variable is whether to use a scaling factor to take account of the variance of
the process of rate change being unknown and therefore avoid minimizing the
penalty function with unrealistically large times. Here the difference these
models and assumptions make on a tree of mammals, with the root fixed and with
a single internal node fixed, is measured. The similarity of models is measured
as the correlation of their time estimates and visualized with a least squares
tree. The fit of model to data is measured and Q-Q plots are shown. Comparing
model estimates with each other, the age of clades within Laurasiatheria are
seen to vary far more across models than those within Supraprimates (informally
called Euarchontoglires). Especially problematic are the often-used fossil
calibrated nodes of horse/rhino and whale/hippo clashing with times within
Supraprimates and in particular no fossil rodent teeth older than ~60 mybp. A
scaling factor in addition to penalizing rate change is seen to yield
consistent relative time estimates irrespective of exactly where the
calibration point is placed.
| [
{
"created": "Fri, 28 Dec 2007 20:47:55 GMT",
"version": "v1"
}
] | 2007-12-31 | [
[
"Waddell",
"Peter J",
""
]
] | Estimation of molecular evolutionary divergence times requires models of rate change. These vary with regard to the assumption of what quantity is penalized. The possibilities considered are the rate of evolution, the log of the rate of evolution and the inverse of the rate of evolution. These models also vary with regard to how time affects the expected variance of rate change. Here the alternatives are not at all, linearly with time and as the product of rate and time. This results in a set of nine models, both random walks and Brownian motion. A priori any of these models could be correct, yet different researchers may well prefer, or simply use, one rather than the others. Another variable is whether to use a scaling factor to take account of the variance of the process of rate change being unknown and therefore avoid minimizing the penalty function with unrealistically large times. Here the difference these models and assumptions make on a tree of mammals, with the root fixed and with a single internal node fixed, is measured. The similarity of models is measured as the correlation of their time estimates and visualized with a least squares tree. The fit of model to data is measured and Q-Q plots are shown. Comparing model estimates with each other, the age of clades within Laurasiatheria are seen to vary far more across models than those within Supraprimates (informally called Euarchontoglires). Especially problematic are the often-used fossil calibrated nodes of horse/rhino and whale/hippo clashing with times within Supraprimates and in particular no fossil rodent teeth older than ~60 mybp. A scaling factor in addition to penalizing rate change is seen to yield consistent relative time estimates irrespective of exactly where the calibration point is placed. |
1107.0270 | Thorsten Pr\"ustel | Thorsten Pr\"ustel and Martin Meier-Schellersheim | Coarse-Grained Stochastic Particle-based Reaction-Diffusion Simulation
Algorithm | null | null | null | null | q-bio.QM cond-mat.stat-mech physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, several particle-based stochastic simulation algorithms
(PSSA) have been developed to study the spatially resolved dynamics of
biochemical networks at a molecular scale. A challenge all these approaches
have to address is to allow for simulations at cell-biologically relevant
timescales without neglecting important spatial and biochemical
properties of the simulated system or introducing ad-hoc assumptions not based
on physical principles. Here we describe a PSSA that permits large time steps
while still retaining a high degree of accuracy. The approach addresses the
typical disadvantage of Brownian dynamics, namely the need to use small time
steps to resolve bimolecular encounters accurately, by estimating the number of
otherwise unnoticed encounters with the help of the Green's functions of the
diffusion equation incorporating molecular interactions. This method has
previously been proposed for purely absorbing boundary conditions and
irreversible bimolecular reactions. Building on those ideas, we developed a
general-purpose PSSA that is applicable to a broad class of reaction-diffusion
problems by incorporating reflective and radiation boundary conditions and
reversible reactions. We furthermore discuss how reaction-diffusion systems on
2D membranes can be described and derive small time expansions of the Green's
functions that substantially speed up key calculations, particularly in the
problematic case of molecules in close proximity. Finally, we point out the
formal relationship between our algorithm and exact algorithms. The proposed algorithm
may serve as an easily implementable and flexible, computationally efficient,
coarse-grained description of reaction-diffusion systems in 2D and 3D that
nevertheless provides a stochastic, detailed representation at the level of
individual particle trajectories in space and time.
| [
{
"created": "Fri, 1 Jul 2011 16:30:44 GMT",
"version": "v1"
}
] | 2011-07-04 | [
[
"Prüstel",
"Thorsten",
""
],
[
"Meier-Schellersheim",
"Martin",
""
]
] | In recent years, several particle-based stochastic simulation algorithms (PSSA) have been developed to study the spatially resolved dynamics of biochemical networks at a molecular scale. A challenge all these approaches have to address is to allow for simulations at cell-biologically relevant timescales without neglecting important spatial and biochemical properties of the simulated system or introducing ad-hoc assumptions not based on physical principles. Here we describe a PSSA that permits large time steps while still retaining a high degree of accuracy. The approach addresses the typical disadvantage of Brownian dynamics, namely the need to use small time steps to resolve bimolecular encounters accurately, by estimating the number of otherwise unnoticed encounters with the help of the Green's functions of the diffusion equation incorporating molecular interactions. This method has previously been proposed for purely absorbing boundary conditions and irreversible bimolecular reactions. Building on those ideas, we developed a general-purpose PSSA that is applicable to a broad class of reaction-diffusion problems by incorporating reflective and radiation boundary conditions and reversible reactions. We furthermore discuss how reaction-diffusion systems on 2D membranes can be described and derive small time expansions of the Green's functions that substantially speed up key calculations, particularly in the problematic case of molecules in close proximity. Finally, we point out the formal relationship between our algorithm and exact algorithms. The proposed algorithm may serve as an easily implementable and flexible, computationally efficient, coarse-grained description of reaction-diffusion systems in 2D and 3D that nevertheless provides a stochastic, detailed representation at the level of individual particle trajectories in space and time. |
2104.08256 | Ian Leifer | Ian Leifer, Mishael S\'anchez-P\'erez, Cecilia Ishida, Hern\'an A.
Makse | Predicting synchronized gene coexpression patterns from fibration
symmetries in gene regulatory networks in bacteria | null | BMC Bioinformatics 22, 363 (2021) | 10.1186/s12859-021-04213-5 | null | q-bio.MN physics.bio-ph physics.data-an q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Gene regulatory networks coordinate the expression of genes
across physiological states and ensure a synchronized expression of genes in
cellular subsystems, critical for the coherent functioning of cells. Here we
address the question of whether it is possible to predict gene synchronization
from network structure alone. We have recently shown that synchronized gene
expression may be predicted from symmetries in the gene regulatory networks
(GRN) and described by the concept of symmetry fibrations. We showed that
symmetry fibrations partition the genes into groups called fibers based on the
symmetries of their 'input trees', the set of paths in the network through
which signals can reach a gene. In idealized dynamic gene expression models,
all genes in a fiber are perfectly synchronized, while less idealized models -
with gene input functions differing between genes - predict symmetry
breaking and desynchronization.
Results: To study the functional role of gene fibers and to test whether some
of the fiber-induced coexpression remains in reality, we analyze gene
fibrations for the gene regulatory networks of E. coli and B. subtilis and
confront them with expression data. We find approximate gene coexpression
patterns consistent with symmetry fibrations under idealized gene expression
dynamics. This shows that network structure alone provides useful information
about gene synchronization, and suggests that gene input functions within fibers
may be further streamlined by evolutionary pressures to realize a coexpression
of genes.
Conclusions: Thus, gene fibrations provide a sound conceptual tool to
describe tunable coexpression induced by network topology and shaped by
mechanistic details of gene expression.
| [
{
"created": "Fri, 16 Apr 2021 17:38:12 GMT",
"version": "v1"
}
] | 2021-07-28 | [
[
"Leifer",
"Ian",
""
],
[
"Sánchez-Pérez",
"Mishael",
""
],
[
"Ishida",
"Cecilia",
""
],
[
"Makse",
"Hernán A.",
""
]
] | Background: Gene regulatory networks coordinate the expression of genes across physiological states and ensure a synchronized expression of genes in cellular subsystems, critical for the coherent functioning of cells. Here we address the question of whether it is possible to predict gene synchronization from network structure alone. We have recently shown that synchronized gene expression may be predicted from symmetries in the gene regulatory networks (GRN) and described by the concept of symmetry fibrations. We showed that symmetry fibrations partition the genes into groups called fibers based on the symmetries of their 'input trees', the set of paths in the network through which signals can reach a gene. In idealized dynamic gene expression models, all genes in a fiber are perfectly synchronized, while less idealized models - with gene input functions differing between genes - predict symmetry breaking and desynchronization. Results: To study the functional role of gene fibers and to test whether some of the fiber-induced coexpression remains in reality, we analyze gene fibrations for the gene regulatory networks of E. coli and B. subtilis and confront them with expression data. We find approximate gene coexpression patterns consistent with symmetry fibrations under idealized gene expression dynamics. This shows that network structure alone provides useful information about gene synchronization, and suggests that gene input functions within fibers may be further streamlined by evolutionary pressures to realize a coexpression of genes. Conclusions: Thus, gene fibrations provide a sound conceptual tool to describe tunable coexpression induced by network topology and shaped by mechanistic details of gene expression. |
2309.15132 | Yaochen Xie | Yaochen Xie, Ziqian Xie, Sheikh Muhammad Saiful Islam, Degui Zhi,
Shuiwang Ji | Genetic InfoMax: Exploring Mutual Information Maximization in
High-Dimensional Imaging Genetics Studies | 17 pages, 7 figures | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genome-wide association studies (GWAS) are used to identify relationships
between genetic variations and specific traits. When applied to
high-dimensional medical imaging data, a key step is to extract
lower-dimensional, yet informative representations of the data as traits.
Representation learning for imaging genetics is largely under-explored due to
the unique challenges posed by GWAS in comparison to typical visual
representation learning. In this study, we tackle this problem from the mutual
information (MI) perspective by identifying key limitations of existing
methods. We introduce a trans-modal learning framework Genetic InfoMax (GIM),
including a regularized MI estimator and a novel genetics-informed transformer
to address the specific challenges of GWAS. We evaluate GIM on human brain 3D
MRI data and establish standardized evaluation protocols to compare it to
existing approaches. Our results demonstrate the effectiveness of GIM and a
significantly improved performance on GWAS.
| [
{
"created": "Tue, 26 Sep 2023 03:59:21 GMT",
"version": "v1"
}
] | 2023-09-28 | [
[
"Xie",
"Yaochen",
""
],
[
"Xie",
"Ziqian",
""
],
[
"Islam",
"Sheikh Muhammad Saiful",
""
],
[
"Zhi",
"Degui",
""
],
[
"Ji",
"Shuiwang",
""
]
] | Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits. When applied to high-dimensional medical imaging data, a key step is to extract lower-dimensional, yet informative representations of the data as traits. Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS in comparison to typical visual representation learning. In this study, we tackle this problem from the mutual information (MI) perspective by identifying key limitations of existing methods. We introduce a trans-modal learning framework Genetic InfoMax (GIM), including a regularized MI estimator and a novel genetics-informed transformer to address the specific challenges of GWAS. We evaluate GIM on human brain 3D MRI data and establish standardized evaluation protocols to compare it to existing approaches. Our results demonstrate the effectiveness of GIM and a significantly improved performance on GWAS. |
1409.4700 | Liane Gabora | Liane Gabora and Apara Ranjan | Can Sol's Explanation for the Evolution of Animal Innovation Account for
Human Innovation? | 7 pages | In A. Kaufman & J. Kaufman (Eds.), Animal creativity and
innovation: Research and theory (pp. 183-188). Philadelphia PA: Elsevier.
(2015) | null | null | q-bio.NC q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sol argues that innovation propensity is not a specialized adaptation
resulting from targeted selection but an instance of exaptation because
selection cannot act on situations that are only encountered once. In
exaptation, a trait that originally evolved to solve one problem is co-opted to
solve a new problem; thus the trait or traits in question must be necessary and
sufficient to solve the new problem. Sol claims that traits such as persistence
and neophilia are necessary and sufficient for animal innovation, which is a
matter of trial and error. We suggest that this explanation does not extend to
human innovation, which involves strategy, logic, intuition, and insight, and
requires traits that evolved, not as a byproduct of some other function, but
for the purpose of coming up with adaptive responses to environmental
variability itself. We point to an agent-based model that indicates the
feasibility of two such proposed traits: (1) chaining, the ability to construct
complex thoughts from simple ones, and (2) contextual focus, the ability to
shift between convergent and divergent modes of thought. We agree that there is
a sense in which innovation is exaptation--it occurs when an existing object or
behaviour is adapted to new needs or tastes--and refer to a mathematical model
of biological and cultural exaptation. We conclude that much is gained by
comparing and contrasting animal and human innovation.
| [
{
"created": "Tue, 16 Sep 2014 16:56:04 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2019 21:32:09 GMT",
"version": "v2"
}
] | 2019-07-17 | [
[
"Gabora",
"Liane",
""
],
[
"Ranjan",
"Apara",
""
]
] | Sol argues that innovation propensity is not a specialized adaptation resulting from targeted selection but an instance of exaptation because selection cannot act on situations that are only encountered once. In exaptation, a trait that originally evolved to solve one problem is co-opted to solve a new problem; thus the trait or traits in question must be necessary and sufficient to solve the new problem. Sol claims that traits such as persistence and neophilia are necessary and sufficient for animal innovation, which is a matter of trial and error. We suggest that this explanation does not extend to human innovation, which involves strategy, logic, intuition, and insight, and requires traits that evolved, not as a byproduct of some other function, but for the purpose of coming up with adaptive responses to environmental variability itself. We point to an agent-based model that indicates the feasibility of two such proposed traits: (1) chaining, the ability to construct complex thoughts from simple ones, and (2) contextual focus, the ability to shift between convergent and divergent modes of thought. We agree that there is a sense in which innovation is exaptation--it occurs when an existing object or behaviour is adapted to new needs or tastes--and refer to a mathematical model of biological and cultural exaptation. We conclude that much is gained by comparing and contrasting animal and human innovation. |
2401.06182 | Baiyang Dai | Baiyang Dai, Jiamin Yang, Hari Shroff, Patrick La Riviere | Prediction of Cellular Identities from Trajectory and Cell Fate
Information | null | null | null | null | q-bio.QM cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Determining cell identities in imaging sequences is an important yet
challenging task. The conventional method for cell identification is via cell
tracking, which is complex and can be time-consuming. In this study, we propose
an innovative approach to cell identification during early $\textit{C.
elegans}$ embryogenesis using machine learning. Cell identification during
$\textit{C. elegans}$ embryogenesis would provide insights into neural
development with implications for higher organisms including humans. We
employed random forest, MLP, and LSTM models, and tested cell classification
accuracy on 3D time-lapse confocal datasets spanning the first 4 hours of
embryogenesis. By leveraging a small number of spatial-temporal features of
individual cells, including cell trajectory and cell fate information, our
models achieve an accuracy of over 91%, even with limited data. We also
determine the most important feature contributions and can interpret these
features in the context of biological knowledge. Our research demonstrates the
success of predicting cell identities in time-lapse imaging sequences directly
from simple spatio-temporal features.
| [
{
"created": "Thu, 11 Jan 2024 03:28:13 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Mar 2024 17:59:41 GMT",
"version": "v2"
}
] | 2024-03-05 | [
[
"Dai",
"Baiyang",
""
],
[
"Yang",
"Jiamin",
""
],
[
"Shroff",
"Hari",
""
],
[
"La Riviere",
"Patrick",
""
]
] | Determining cell identities in imaging sequences is an important yet challenging task. The conventional method for cell identification is via cell tracking, which is complex and can be time-consuming. In this study, we propose an innovative approach to cell identification during early $\textit{C. elegans}$ embryogenesis using machine learning. Cell identification during $\textit{C. elegans}$ embryogenesis would provide insights into neural development with implications for higher organisms including humans. We employed random forest, MLP, and LSTM models, and tested cell classification accuracy on 3D time-lapse confocal datasets spanning the first 4 hours of embryogenesis. By leveraging a small number of spatial-temporal features of individual cells, including cell trajectory and cell fate information, our models achieve an accuracy of over 91%, even with limited data. We also determine the most important feature contributions and can interpret these features in the context of biological knowledge. Our research demonstrates the success of predicting cell identities in time-lapse imaging sequences directly from simple spatio-temporal features. |
1903.06624 | Linbo Liu | Si Chen, Xinyu Liu, Nanshuo Wang, Qianshan Ding, Xianghong Wang, Xin
Ge, En Bo, Xiaojun Yu, Honggang Yu, Chenjie Xu, and Linbo Liu | Contrast of nuclei in stratified squamous epithelium in optical
coherence tomography images at 800 nm | 24 pages, 7 figures, 1 table | Journal of Biophotonics 2019 | 10.1002/jbio.201900073 | null | q-bio.TO physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imaging nuclei of keratinocytes in the stratified squamous epithelium has
been a subject of intense research since nucleus-associated cellular atypia is
the key criterion for the screening and diagnosis of epithelial cancers and
their precursors. However, keratinocyte nuclei have been reported to be either
low scattering or high scattering, so that these inconsistent reports might
have led to misinterpretations of optical images, and more importantly,
hindered the establishment of optical diagnostic criteria. We disclose that
they are generally low scattering in the core using Micro-optical coherence
tomography (micro-OCT) of 1.28 um axial resolution in vivo; those previously
reported high scattering or bright signals from nuclei are likely from the
nucleocytoplasmic boundary, and the low-scattering nuclear cores were missed
possibly due to insufficient axial resolutions (about 4 um). It is further
demonstrated that the high scattering signals may be associated with flattening
of nuclei and cytoplasmic glycogen accumulation, which are valuable cytologic
hallmarks of cell maturation.
| [
{
"created": "Sun, 10 Mar 2019 05:17:13 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2019 12:53:06 GMT",
"version": "v2"
}
] | 2019-06-04 | [
[
"Chen",
"Si",
""
],
[
"Liu",
"Xinyu",
""
],
[
"Wang",
"Nanshuo",
""
],
[
"Ding",
"Qianshan",
""
],
[
"Wang",
"Xianghong",
""
],
[
"Ge",
"Xin",
""
],
[
"Bo",
"En",
""
],
[
"Yu",
"Xiaojun",
""... | Imaging nuclei of keratinocytes in the stratified squamous epithelium has been a subject of intense research since nucleus associated cellular atypia is the key criteria for the screening and diagnosis of epithelial cancers and their precursors. However, keratinocyte nuclei have been reported to be either low scattering or high scattering, so that these inconsistent reports might have led to misinterpretations of optical images, and more importantly, hindered the establishment of optical diagnostic criteria. We disclose that they are generally low scattering in the core using Micro-optical coherence tomography (micro-OCT) of 1.28 um axial resolution in vivo; those previously reported high scattering or bright signals from nuclei are likely from the nucleocytoplasmic boundary, and the low-scattering nuclear cores were missed possibly due to insufficient axial resolutions (about 4 um). It is further demonstrated that the high scattering signals may be associated with flattening of nuclei and cytoplasmic glycogen accumulation, which are valuable cytologic hallmarks of cell maturation. |
1306.5877 | Jacopo Grilli | Jacopo Grilli, Samir Suweis, Amos Maritan | Growth or Reproduction: Emergence of an Evolutionary Optimal Strategy | 10 pages, 5 figures | null | 10.1088/1742-5468/2013/10/P10020 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern ecology has re-emphasized the need for a quantitative understanding of
the original 'survival of the fittest theme' based on analysis of the intricate
trade-offs between competing evolutionary strategies that characterize the
evolution of life. This is key to the understanding of species coexistence and
ecosystem diversity under the omnipresent constraint of limited resources. In
this work we propose an agent-based model replicating a community of
interacting individuals, e.g. plants in a forest, where all are competing for
the same finite amount of resources and each competitor is characterized by a
specific growth-reproduction strategy. We show that such evolutionary dynamics
drive the system towards a stationary state characterized by an emergent
optimal strategy, which in turn depends on the amount of available resources
the ecosystem can rely on. We find that the share of resources used by
individuals is power-law distributed with an exponent directly related to the
optimal strategy. The model can be further generalized to devise optimal
strategies in the dynamics of interacting social and economic systems.
| [
{
"created": "Tue, 25 Jun 2013 08:44:47 GMT",
"version": "v1"
}
] | 2015-06-16 | [
[
"Grilli",
"Jacopo",
""
],
[
"Suweis",
"Samir",
""
],
[
"Maritan",
"Amos",
""
]
] | Modern ecology has re-emphasized the need for a quantitative understanding of the original 'survival of the fittest theme' based on analysis of the intricate trade-offs between competing evolutionary strategies that characterize the evolution of life. This is key to the understanding of species coexistence and ecosystem diversity under the omnipresent constraint of limited resources. In this work we propose an agent-based model replicating a community of interacting individuals, e.g. plants in a forest, where all are competing for the same finite amount of resources and each competitor is characterized by a specific growth-reproduction strategy. We show that such evolutionary dynamics drive the system towards a stationary state characterized by an emergent optimal strategy, which in turn depends on the amount of available resources the ecosystem can rely on. We find that the share of resources used by individuals is power-law distributed with an exponent directly related to the optimal strategy. The model can be further generalized to devise optimal strategies in the dynamics of interacting social and economic systems. |
2403.15413 | Paolo Burelli | Paolo Burelli and Laurits Dixen | Playing With Neuroscience: Past, Present and Future of Neuroimaging and
Games | null | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Videogames have been a catalyst for advances in many research fields, such as
artificial intelligence, human-computer interaction or virtual reality. Over
the years, research in fields such as artificial intelligence has enabled the
design of new types of games, while games have often served as a powerful tool
for testing and simulation. Can this also happen with neuroscience? What is the
current relationship between neuroscience and games research? What can we
expect from the future? In this article, we'll try to answer these questions,
analysing the current state-of-the-art at the crossroads between neuroscience
and games and envisioning future directions.
| [
{
"created": "Wed, 6 Mar 2024 12:38:18 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Burelli",
"Paolo",
""
],
[
"Dixen",
"Laurits",
""
]
] | Videogames have been a catalyst for advances in many research fields, such as artificial intelligence, human-computer interaction or virtual reality. Over the years, research in fields such as artificial intelligence has enabled the design of new types of games, while games have often served as a powerful tool for testing and simulation. Can this also happen with neuroscience? What is the current relationship between neuroscience and games research? What can we expect from the future? In this article, we'll try to answer these questions, analysing the current state-of-the-art at the crossroads between neuroscience and games and envisioning future directions. |
2006.16630 | Torbj{\o}rn V Ness | Torbj{\o}rn V. Ness and Geir Halnes and Solveig N{\ae}ss and Klas H.
Pettersen and Gaute T. Einevoll | Computing extracellular electric potentials from neuronal simulations | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measurements of electric potentials from neural activity have played a key
role in neuroscience for almost a century, and simulations of neural activity
are an important tool for understanding such measurements. Volume conductor (VC)
theory is used to compute extracellular electric potentials such as
extracellular spikes, MUA, LFP, ECoG and EEG surrounding neurons, and also
inversely, to reconstruct neuronal current source distributions from recorded
potentials through current source density methods. In this book chapter, we
show how VC theory can be derived from a detailed electrodiffusive theory for
ion concentration dynamics in the extracellular medium, and show which
assumptions must be introduced to cast the VC theory in the simplified form
that is commonly used by neuroscientists. Furthermore, we provide examples of
how the theory is applied to compute spikes, LFP signals and EEG signals
generated by neurons and neuronal populations.
| [
{
"created": "Tue, 30 Jun 2020 09:46:57 GMT",
"version": "v1"
},
{
"created": "Sun, 2 May 2021 07:11:46 GMT",
"version": "v2"
}
] | 2021-05-04 | [
[
"Ness",
"Torbjørn V.",
""
],
[
"Halnes",
"Geir",
""
],
[
"Næss",
"Solveig",
""
],
[
"Pettersen",
"Klas H.",
""
],
[
"Einevoll",
"Gaute T.",
""
]
] | Measurements of electric potentials from neural activity have played a key role in neuroscience for almost a century, and simulations of neural activity are an important tool for understanding such measurements. Volume conductor (VC) theory is used to compute extracellular electric potentials such as extracellular spikes, MUA, LFP, ECoG and EEG surrounding neurons, and also inversely, to reconstruct neuronal current source distributions from recorded potentials through current source density methods. In this book chapter, we show how VC theory can be derived from a detailed electrodiffusive theory for ion concentration dynamics in the extracellular medium, and show which assumptions must be introduced to cast the VC theory in the simplified form that is commonly used by neuroscientists. Furthermore, we provide examples of how the theory is applied to compute spikes, LFP signals and EEG signals generated by neurons and neuronal populations. |
1706.08151 | Juan B Gutierrez | Derek Onken and Eric Marty and Roberto Palomares and Rui Xie and Leyao
Zhang and Jonathan Arnold and Juan B. Gutierrez | The lunar cycle's influence on sex determination at conception in humans | 12 pages, 12 figures | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The lunar cycle has long been suspected to influence biological phenomena.
Folklore alludes to such a relationship, but previous scientific analyses have
failed to find significant associations. It has been shown that lunar cycles
indeed have effects on animals; significant associations between human
circadian rhythms and lunar cycles have also been reported. We set out to
determine whether a significant statistical correlation exists between the
lunar phase and sex determination during conception. We found that significant
associations (\textit{p}-value $< 5 \times 10^{-5}$) exist between the average
sex ratio (male:female) and the lunar month. The likelihood of conception of a
male is at its highest point five days after the full moon, whereas the highest
likelihood of female conception occurs nineteen days after the full moon.
Furthermore, we found that the strength of this influence is correlated with
the amount of solar radiation (which is proportional to moonlight). Our results
suggest that sex determination may be influenced by the moon cycle, which
suggests the possibility of lunar influence on other biological phenomena. We
suggest for future research the exploration of similar effects in other
phenomena involving humans and other species.
| [
{
"created": "Sun, 25 Jun 2017 18:19:23 GMT",
"version": "v1"
}
] | 2017-06-27 | [
[
"Onken",
"Derek",
""
],
[
"Marty",
"Eric",
""
],
[
"Palomares",
"Roberto",
""
],
[
"Xie",
"Rui",
""
],
[
"Zhang",
"Leyao",
""
],
[
"Arnold",
"Jonathan",
""
],
[
"Gutierrez",
"Juan B.",
""
]
] | The lunar cycle has long been suspected to influence biological phenomena. Folklore alludes to such a relationship, but previous scientific analyses have failed to find significant associations. It has been shown that lunar cycles indeed have effects on animals; significant associations between human circadian rhythms and lunar cycles have also been reported. We set out to determine whether a significant statistical correlation exists between the lunar phase and sex determination during conception. We found that significant associations (\textit{p}-value $< 5 \times 10^{-5}$) exist between the average sex ratio (male:female) and the lunar month. The likelihood of conception of a male is at its highest point five days after the full moon, whereas the highest likelihood of female conception occurs nineteen days after the full moon. Furthermore, we found that the strength of this influence is correlated with the amount of solar radiation (which is proportional to moonlight). Our results suggest that sex determination may be influenced by the moon cycle, which suggests the possibility of lunar influence on other biological phenomena. We suggest for future research the exploration of similar effects in other phenomena involving humans and other species. |
1701.03702 | Rosalind J Allen | Philip Greulich, Jakub Dolezal, Matthew Scott, Martin R. Evans,
Rosalind J. Allen | Predicting the dynamics of bacterial growth inhibition by
ribosome-targeting antibiotics | null | null | 10.1088/1478-3975/aa8001 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how antibiotics inhibit bacteria can help to reduce antibiotic
use and hence avoid antimicrobial resistance - yet few theoretical models exist
for bacterial growth inhibition by a clinically relevant antibiotic treatment
regimen. In particular, in the clinic, antibiotic treatment is time dependent.
Here, we use a recently-developed model to obtain predictions for the dynamical
response of a bacterial cell to a time-dependent dose of ribosome-targeting
antibiotic. Our results depend strongly on whether the antibiotic shows
reversible transport and/or low-affinity ribosome binding ("low-affinity
antibiotic") or, in contrast, irreversible transport and/or high affinity
ribosome binding ("high-affinity antibiotic"). For low-affinity antibiotics,
our model predicts that growth inhibition depends on the duration of the
antibiotic pulse, with a transient period of very fast growth following removal
of the antibiotic. For high-affinity antibiotics, growth inhibition depends on
peak dosage rather than dose duration, and the model predicts a pronounced
post-antibiotic effect, due to hysteresis, in which growth can be suppressed
for long times after the antibiotic dose has ended. These predictions are
experimentally testable and may be of clinical significance.
| [
{
"created": "Fri, 13 Jan 2017 15:18:36 GMT",
"version": "v1"
}
] | 2017-12-06 | [
[
"Greulich",
"Philip",
""
],
[
"Dolezal",
"Jakub",
""
],
[
"Scott",
"Matthew",
""
],
[
"Evans",
"Martin R.",
""
],
[
"Allen",
"Rosalind J.",
""
]
] | Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance - yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time dependent. Here, we use a recently-developed model to obtain predictions for the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding ("low-affinity antibiotic") or, in contrast, irreversible transport and/or high affinity ribosome binding ("high-affinity antibiotic"). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, with a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance. |
q-bio/0512001 | Marko Djordjevic | Marko Djordjevic (1 and 2), Anirvan M. Sengupta (3) ((1) Columbia
University Physics Department, (2) Mathematical Biosciences Institute, The
Ohio State University, (3) Department of Physics and BioMaPS Institute,
Rutgers University) | Quantitative modeling and data analysis of SELEX experiments | 29 pages, 8 figures, to appear in Physical Biology | null | 10.1088/1478-3975/3/1/002 | null | q-bio.GN | null | SELEX (Systematic Evolution of Ligands by Exponential Enrichment) is an
experimental procedure that allows extracting, from an initially random pool of
DNA, those oligomers with high affinity for a given DNA-binding protein. We
address what is a suitable experimental and computational procedure to infer
parameters of transcription factor-DNA interaction from SELEX experiments. To
answer this, we use a biophysical model of transcription factor-DNA
interactions to quantitatively model SELEX. We show that a standard procedure
is unsuitable for obtaining accurate interaction parameters. However, we
theoretically show that a modified experiment in which chemical potential is
fixed through different rounds of the experiment allows robust generation of an
appropriate data set. Based on our quantitative model, we propose a novel
bioinformatic method of data analysis for such modified experiment and apply it
to extract the interaction parameters for a mammalian transcription factor
CTF/NFI. From a practical point of view, our method results in a significantly
improved false positive/false negative trade-off, as compared to both the
standard information theory based method and a widely used empirically
formulated procedure.
| [
{
"created": "Wed, 30 Nov 2005 21:08:56 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Djordjevic",
"Marko",
"",
"1 and 2"
],
[
"Sengupta",
"Anirvan M.",
""
]
] | SELEX (Systematic Evolution of Ligands by Exponential Enrichment) is an experimental procedure that allows extracting, from an initially random pool of DNA, those oligomers with high affinity for a given DNA-binding protein. We address what is a suitable experimental and computational procedure to infer parameters of transcription factor-DNA interaction from SELEX experiments. To answer this, we use a biophysical model of transcription factor-DNA interactions to quantitatively model SELEX. We show that a standard procedure is unsuitable for obtaining accurate interaction parameters. However, we theoretically show that a modified experiment in which chemical potential is fixed through different rounds of the experiment allows robust generation of an appropriate data set. Based on our quantitative model, we propose a novel bioinformatic method of data analysis for such modified experiment and apply it to extract the interaction parameters for a mammalian transcription factor CTF/NFI. From a practical point of view, our method results in a significantly improved false positive/false negative trade-off, as compared to both the standard information theory based method and a widely used empirically formulated procedure. |
2105.07407 | Sergei Grudinin | Elodie Laine, Stephan Eismann, Arne Elofsson, and Sergei Grudinin | Protein sequence-to-structure learning: Is this the end(-to-end
revolution)? | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The potential of deep learning has been recognized in the protein structure
prediction community for some time, and became indisputable after CASP13. In
CASP14, deep learning has boosted the field to unanticipated levels reaching
near-experimental accuracy. This success comes from advances transferred from
other machine learning areas, as well as methods specifically designed to deal
with protein sequences and structures, and their abstractions. Novel emerging
approaches include (i) geometric learning, i.e. learning on representations
such as graphs, 3D Voronoi tessellations, and point clouds; (ii) pre-trained
protein language models leveraging attention; (iii) equivariant architectures
preserving the symmetry of 3D space; (iv) use of large meta-genome databases;
(v) combinations of protein representations; (vi) and finally truly end-to-end
architectures, i.e. differentiable models starting from a sequence and
returning a 3D structure. Here, we provide an overview and our opinion of the
novel deep learning approaches developed in the last two years and widely used
in CASP14.
| [
{
"created": "Sun, 16 May 2021 10:46:44 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Sep 2021 08:54:39 GMT",
"version": "v2"
}
] | 2021-09-14 | [
[
"Laine",
"Elodie",
""
],
[
"Eismann",
"Stephan",
""
],
[
"Elofsson",
"Arne",
""
],
[
"Grudinin",
"Sergei",
""
]
] | The potential of deep learning has been recognized in the protein structure prediction community for some time, and became indisputable after CASP13. In CASP14, deep learning has boosted the field to unanticipated levels reaching near-experimental accuracy. This success comes from advances transferred from other machine learning areas, as well as methods specifically designed to deal with protein sequences and structures, and their abstractions. Novel emerging approaches include (i) geometric learning, i.e. learning on representations such as graphs, 3D Voronoi tessellations, and point clouds; (ii) pre-trained protein language models leveraging attention; (iii) equivariant architectures preserving the symmetry of 3D space; (iv) use of large meta-genome databases; (v) combinations of protein representations; (vi) and finally truly end-to-end architectures, i.e. differentiable models starting from a sequence and returning a 3D structure. Here, we provide an overview and our opinion of the novel deep learning approaches developed in the last two years and widely used in CASP14. |
1512.00034 | Kirill Korolev S | Kirill S. Korolev | Evolution arrests invasions of cooperative populations | null | Physical Review Letters 115.20 (2015): 208104 | 10.1103/PhysRevLett.115.208104 | null | q-bio.PE cond-mat.stat-mech nlin.AO nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Population expansions trigger many biomedical and ecological transitions,
from tumor growth to invasions of non-native species. Although population
spreading often selects for more invasive phenotypes, we show that this outcome
is far from inevitable. In cooperative populations, mutations reducing
dispersal have a competitive advantage. Such mutations then steadily accumulate
at the expansion front bringing invasion to a halt. Our findings are a rare
example of evolution driving the population into an unfavorable state and could
lead to new strategies to combat unwelcome invaders. In addition, we obtain an
exact analytical expression for the fitness advantage of mutants with different
dispersal rates.
| [
{
"created": "Mon, 30 Nov 2015 21:08:48 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Dec 2015 14:47:14 GMT",
"version": "v2"
}
] | 2015-12-14 | [
[
"Korolev",
"Kirill S.",
""
]
] | Population expansions trigger many biomedical and ecological transitions, from tumor growth to invasions of non-native species. Although population spreading often selects for more invasive phenotypes, we show that this outcome is far from inevitable. In cooperative populations, mutations reducing dispersal have a competitive advantage. Such mutations then steadily accumulate at the expansion front bringing invasion to a halt. Our findings are a rare example of evolution driving the population into an unfavorable state and could lead to new strategies to combat unwelcome invaders. In addition, we obtain an exact analytical expression for the fitness advantage of mutants with different dispersal rates. |
0706.2353 | Supratim Sengupta | Supratim Sengupta, Andrew D. Rutenberg | Modeling partitioning of Min proteins between daughter cells after
septation in Escherichia coli | 17 pages, including 6 figures. Typo in captions of fig.2,5 corrected.
Version which appears in Physical Biology | Phys. Biol. 4 (2007) 145-153. | 10.1088/1478-3975/4/3/001 | null | q-bio.SC | null | Ongoing sub-cellular oscillation of Min proteins is required to block
minicelling in E. coli. Experimentally, Min oscillations are seen in newly
divided cells and no minicells are produced. In model Min systems many daughter
cells do not oscillate following septation because of unequal partitioning of
Min proteins between the daughter cells. Using the 3D model of Huang et al., we
investigate the septation process in detail to determine the cause of the
asymmetric partitioning of Min proteins between daughter cells. We find that
this partitioning problem arises at certain phases of the MinD and MinE
oscillations with respect to septal closure and it persists independently of
parameter variation. At most 85% of the daughter cells exhibit Min oscillation
following septation. Enhanced MinD binding at the static polar and dynamic
septal regions, consistent with cardiolipin domains, does not substantially
increase this fraction of oscillating daughters. We believe that this problem
will be shared among all existing Min models and discuss possible biological
mechanisms that may minimize partitioning errors of Min proteins following
septation.
| [
{
"created": "Fri, 15 Jun 2007 18:59:37 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Jul 2007 19:15:16 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Sengupta",
"Supratim",
""
],
[
"Rutenberg",
"Andrew D.",
""
]
] | Ongoing sub-cellular oscillation of Min proteins is required to block minicelling in E. coli. Experimentally, Min oscillations are seen in newly divided cells and no minicells are produced. In model Min systems many daughter cells do not oscillate following septation because of unequal partitioning of Min proteins between the daughter cells. Using the 3D model of Huang et al., we investigate the septation process in detail to determine the cause of the asymmetric partitioning of Min proteins between daughter cells. We find that this partitioning problem arises at certain phases of the MinD and MinE oscillations with respect to septal closure and it persists independently of parameter variation. At most 85% of the daughter cells exhibit Min oscillation following septation. Enhanced MinD binding at the static polar and dynamic septal regions, consistent with cardiolipin domains, does not substantially increase this fraction of oscillating daughters. We believe that this problem will be shared among all existing Min models and discuss possible biological mechanisms that may minimize partitioning errors of Min proteins following septation. |
1612.01413 | Chenzhe Qian | Chenzhe Qian | Multi-stage Clustering of Breast Cancer for Precision Medicine | null | null | null | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cancer has become one of the most widespread diseases in the world.
Specifically, breast cancer is diagnosed more often than any other type of
cancer. However, breast cancer patients and their individual tumors are often
unique. Identifying the underlying genetic phenotype can lead to precision
(personalized) medicine. Tailoring medical treatment strategies to best fit the
needs of individual patients can dramatically improve their health. Such an
approach requires sufficient knowledge of the patients and the diseases, which
is currently unavailable to practitioners. This study focuses on breast cancer
and proposes a novel two-stage clustering method to partition patients into
hierarchical groups. The first stage is broad grouping, which is based on
phenotypes such as demographic information and clinical features. The second
stage is fine grouping based on genomic characteristics, such as copy number
variation and somatic mutation, of patients in a subgroup resulting from the
first stage. Generally, this framework offers a mechanism to mix multiple forms
of data, both phenotypic and genomic, to most effectively define individual
patients for personalized predictions. This method provides the ability to
detect correlation among all factors.
| [
{
"created": "Fri, 2 Dec 2016 17:42:36 GMT",
"version": "v1"
}
] | 2016-12-06 | [
[
"Qian",
"Chenzhe",
""
]
] | Cancer has become one of the most widespread diseases in the world. Specifically, breast cancer is diagnosed more often than any other type of cancer. However, breast cancer patients and their individual tumors are often unique. Identifying the underlying genetic phenotype can lead to precision (personalized) medicine. Tailoring medical treatment strategies to best fit the needs of individual patients can dramatically improve their health. Such an approach requires sufficient knowledge of the patients and the diseases, which is currently unavailable to practitioners. This study focuses on breast cancer and proposes a novel two-stage clustering method to partition patients into hierarchical groups. The first stage is broad grouping, which is based on phenotypes such as demographic information and clinical features. The second stage is fine grouping based on genomic characteristics, such as copy number variation and somatic mutation, of patients in a subgroup resulting from the first stage. Generally, this framework offers a mechanism to mix multiple forms of data, both phenotypic and genomic, to most effectively define individual patients for personalized predictions. This method provides the ability to detect correlation among all factors. |
2212.08352 | Olivier Rivoire | Olivier Rivoire | How flexibility can enhance catalysis | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Conformational changes are observed in many enzymes, but their role in
catalysis is highly controversial. Here we present a theoretical model that
illustrates how rigid catalysts can be fundamentally limited and how a
conformational change induced by substrate binding can overcome this
limitation, ultimately enabling barrier-free catalysis. The model is
deliberately minimal, but the principle it illustrates is general and
consistent with unique features of proteins as well as with previous informal
proposals to explain the superiority of enzymes over other classes of
catalysts. Implementing the discriminative switch suggested by the model could
help overcome limitations currently encountered in the design of artificial
catalysts.
| [
{
"created": "Fri, 16 Dec 2022 08:59:42 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Aug 2023 09:44:37 GMT",
"version": "v2"
}
] | 2023-08-10 | [
[
"Rivoire",
"Olivier",
""
]
] | Conformational changes are observed in many enzymes, but their role in catalysis is highly controversial. Here we present a theoretical model that illustrates how rigid catalysts can be fundamentally limited and how a conformational change induced by substrate binding can overcome this limitation, ultimately enabling barrier-free catalysis. The model is deliberately minimal, but the principle it illustrates is general and consistent with unique features of proteins as well as with previous informal proposals to explain the superiority of enzymes over other classes of catalysts. Implementing the discriminative switch suggested by the model could help overcome limitations currently encountered in the design of artificial catalysts. |
2408.06521 | Kevin Lin | Katherine E. Prater, Kevin Z. Lin | All the single cells: single-cell transcriptomics/epigenomics
experimental design and analysis considerations for glial biologists | 66 pages, 1 table, 5 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single-cell transcriptomics, epigenomics, and other 'omics applied at
single-cell resolution can significantly advance hypotheses and understanding
of glial biology. Omics technologies are revealing a large and growing number
of new glial cell subtypes, defined by their gene expression profile. These
subtypes have significant implications for understanding glial cell function,
cell-cell communications, and glia-specific changes between homeostasis and
conditions such as neurological disease. For many, the training in how to
analyze, interpret, and understand these large datasets has been through
reading and understanding literature from other fields like biostatistics.
Here, we provide a primer for glial biologists on experimental design and
analysis of single-cell RNA-seq datasets. Our goal is to further the
understanding of why decisions might be made about datasets and to enhance
biologists' ability to interpret and critique their work and the work of
others. We review the steps involved in single-cell analysis with a focus on
decision points and particular notes for glia. The goal of this primer is to
ensure that single-cell 'omics experiments continue to advance glial biology in
a rigorous and replicable way.
| [
{
"created": "Mon, 12 Aug 2024 22:41:05 GMT",
"version": "v1"
}
] | 2024-08-14 | [
[
"Prater",
"Katherine E.",
""
],
[
"Lin",
"Kevin Z.",
""
]
] | Single-cell transcriptomics, epigenomics, and other 'omics applied at single-cell resolution can significantly advance hypotheses and understanding of glial biology. Omics technologies are revealing a large and growing number of new glial cell subtypes, defined by their gene expression profile. These subtypes have significant implications for understanding glial cell function, cell-cell communications, and glia-specific changes between homeostasis and conditions such as neurological disease. For many, the training in how to analyze, interpret, and understand these large datasets has been through reading and understanding literature from other fields like biostatistics. Here, we provide a primer for glial biologists on experimental design and analysis of single-cell RNA-seq datasets. Our goal is to further the understanding of why decisions might be made about datasets and to enhance biologists' ability to interpret and critique their work and the work of others. We review the steps involved in single-cell analysis with a focus on decision points and particular notes for glia. The goal of this primer is to ensure that single-cell 'omics experiments continue to advance glial biology in a rigorous and replicable way. |
1305.3754 | Nachol Chaiyaratana PhD | Damrongrit Setsirichok, Theera Piroonratana, Anunchai Assawamakin,
Touchpong Usavanarong, Chanin Limwongse, Waranyu Wongseree, Chatchawit
Aporntewan, Nachol Chaiyaratana | Small ancestry informative marker panels for complete classification
between the original four HapMap populations | 24 pages, 4 figures | Setsirichok, D. et al. (2012). International Journal of Data
Mining and Bioinformatics, 6(6), 651-674 | 10.1504/IJDMB.2012.050249 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A protocol for the identification of ancestry informative markers (AIMs) from
genome-wide single nucleotide polymorphism (SNP) data is proposed. The protocol
consists of three main steps: (a) identification of potential positive
selection regions via Fst extremity measurement, (b) SNP screening via
two-stage attribute selection and (c) classification model construction using a
naive Bayes classifier. The two-stage attribute selection is composed of a
newly developed round robin symmetrical uncertainty ranking technique and a
wrapper embedded with a naive Bayes classifier. The protocol has been applied
to the HapMap Phase II data. Two AIM panels, which consist of 10 and 16 SNPs
that lead to complete classification between CEU, CHB, JPT and YRI populations,
are identified. Moreover, the panels are at least four times smaller than those
reported in previous studies. The results suggest that the protocol could be
useful in a scenario involving a larger number of populations.
| [
{
"created": "Thu, 16 May 2013 10:42:48 GMT",
"version": "v1"
}
] | 2013-05-17 | [
[
"Setsirichok",
"Damrongrit",
""
],
[
"Piroonratana",
"Theera",
""
],
[
"Assawamakin",
"Anunchai",
""
],
[
"Usavanarong",
"Touchpong",
""
],
[
"Limwongse",
"Chanin",
""
],
[
"Wongseree",
"Waranyu",
""
],
[
"Aporntew... | A protocol for the identification of ancestry informative markers (AIMs) from genome-wide single nucleotide polymorphism (SNP) data is proposed. The protocol consists of three main steps: (a) identification of potential positive selection regions via Fst extremity measurement, (b) SNP screening via two-stage attribute selection and (c) classification model construction using a naive Bayes classifier. The two-stage attribute selection is composed of a newly developed round robin symmetrical uncertainty ranking technique and a wrapper embedded with a naive Bayes classifier. The protocol has been applied to the HapMap Phase II data. Two AIM panels, which consist of 10 and 16 SNPs that lead to complete classification between CEU, CHB, JPT and YRI populations, are identified. Moreover, the panels are at least four times smaller than those reported in previous studies. The results suggest that the protocol could be useful in a scenario involving a larger number of populations. |
2207.09598 | Eric Roberts | Niraj Gupta, Eric J. Roberts, Song Pang, C. Shan Xu, Harald F. Hess,
Fan Wu, Abby Dernburg, Danielle Jorgens, Petrus H. Zwart, Vignesh Kasinath | Deep learning-based identification of sub-nuclear structures in FIB-SEM
images | 15 pages, 10 figures, for eventual peer-reviewed journal publication | null | null | null | q-bio.QM q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | Three-dimensional volumetric imaging of cells allows for in situ
visualization, thus preserving contextual insights into cellular processes.
Despite recent advances in machine learning methods, morphological analysis of
sub-nuclear structures has proven challenging due to both the shallow contrast
profile and the technical limitation in feature detection. Here, we present a
convolutional neural network, supervised deep learning-based approach which can
identify sub-nuclear structures with 90% accuracy. We develop and apply this
model to C. elegans gonads imaged using focused ion beam milling combined with
scanning electron microscopy resulting in the accurate identification and
segmentation of all sub-nuclear structures including entire chromosomes. We
discuss in depth the architecture, parameterization, and optimization of the
deep learning model, as well as provide evaluation metrics to assess the
quality of the network prediction. Lastly, we highlight specific aspects of the
model that can be optimized for its broad application to other volumetric
imaging data as well as in situ cryo-electron tomography.
| [
{
"created": "Tue, 19 Jul 2022 23:58:20 GMT",
"version": "v1"
}
] | 2022-07-21 | [
[
"Gupta",
"Niraj",
""
],
[
"Roberts",
"Eric J.",
""
],
[
"Pang",
"Song",
""
],
[
"Xu",
"C. Shan",
""
],
[
"Hess",
"Harald F.",
""
],
[
"Wu",
"Fan",
""
],
[
"Dernburg",
"Abby",
""
],
[
"Jorgens",
    ... | Three-dimensional volumetric imaging of cells allows for in situ visualization, thus preserving contextual insights into cellular processes. Despite recent advances in machine learning methods, morphological analysis of sub-nuclear structures has proven challenging due to both the shallow contrast profile and the technical limitation in feature detection. Here, we present a convolutional neural network, supervised deep learning-based approach which can identify sub-nuclear structures with 90% accuracy. We develop and apply this model to C. elegans gonads imaged using focused ion beam milling combined with scanning electron microscopy resulting in the accurate identification and segmentation of all sub-nuclear structures including entire chromosomes. We discuss in depth the architecture, parameterization, and optimization of the deep learning model, as well as provide evaluation metrics to assess the quality of the network prediction. Lastly, we highlight specific aspects of the model that can be optimized for its broad application to other volumetric imaging data as well as in situ cryo-electron tomography.
0709.4209 | Gasper Tkacik | Gasper Tkacik, Curtis G Callan Jr, William Bialek | Information capacity of genetic regulatory elements | 17 pages, 9 figures | Phys. Rev. E 78, 011910 (2008) | 10.1103/PhysRevE.78.011910 | null | q-bio.MN | null | Changes in a cell's external or internal conditions are usually reflected in
the concentrations of the relevant transcription factors. These proteins in
turn modulate the expression levels of the genes under their control and
sometimes need to perform non-trivial computations that integrate several
inputs and affect multiple genes. At the same time, the activities of the
regulated genes would fluctuate even if the inputs were held fixed, as a
consequence of the intrinsic noise in the system, and such noise must
fundamentally limit the reliability of any genetic computation. Here we use
information theory to formalize the notion of information transmission in
simple genetic regulatory elements in the presence of physically realistic
noise sources. The dependence of this "channel capacity" on noise parameters,
cooperativity and cost of making signaling molecules is explored
systematically. We find that, at least in principle, capacities higher than one
bit should be achievable and that consequently genetic regulation is not
limited to the use of binary, or "on-off", components.
| [
{
"created": "Wed, 26 Sep 2007 16:31:30 GMT",
"version": "v1"
}
] | 2013-08-01 | [
[
"Tkacik",
"Gasper",
""
],
[
"Callan",
"Curtis G",
"Jr"
],
[
"Bialek",
"William",
""
]
] | Changes in a cell's external or internal conditions are usually reflected in the concentrations of the relevant transcription factors. These proteins in turn modulate the expression levels of the genes under their control and sometimes need to perform non-trivial computations that integrate several inputs and affect multiple genes. At the same time, the activities of the regulated genes would fluctuate even if the inputs were held fixed, as a consequence of the intrinsic noise in the system, and such noise must fundamentally limit the reliability of any genetic computation. Here we use information theory to formalize the notion of information transmission in simple genetic regulatory elements in the presence of physically realistic noise sources. The dependence of this "channel capacity" on noise parameters, cooperativity and cost of making signaling molecules is explored systematically. We find that, at least in principle, capacities higher than one bit should be achievable and that consequently genetic regulation is not limited to the use of binary, or "on-off", components.
1503.03815 | Silvia Grigolon | Silvia Grigolon, Silvio Franz, Matteo Marsili | Identifying relevant positions in proteins by Critical Variable
Selection | 16 pages (Main Text), 4 pages (Supplementary Material), 13 figures.
Major changes with respect to the previous version | Mol. BioSyst., 2016 | 10.1039/C6MB00047A | null | q-bio.QM cond-mat.stat-mech q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolution in its course found a variety of solutions to the same optimisation
problem. The advent of high-throughput genomic sequencing has made available
extensive data from which, in principle, one can infer the underlying structure
on which biological functions rely. In this paper, we present a new method
aimed at extracting sites encoding structural and functional properties from
a set of protein primary sequences, namely a Multiple Sequence Alignment. The
method, called Critical Variable Selection, is based on the idea that subsets
of relevant sites correspond to subsequences that occur with a particularly
broad frequency distribution in the dataset. By applying this algorithm to in
silico sequences, to the Response Regulator Receiver and to the Voltage Sensor
Domain of Ion Channels, we show that this procedure recovers not only
information encoded in single site statistics and pairwise correlations but it
also captures dependencies going beyond pairwise correlations. The method
proposed here is complementary to Statistical Coupling Analysis, in that the
most relevant sites predicted by the two methods markedly differ. We find
robust and consistent results for datasets as small as a few hundred sequences,
that reveal a hidden hierarchy of sites that is consistent with present
knowledge on biologically relevant sites and evolutionary dynamics. This
suggests that Critical Variable Selection is able to identify in a Multiple
Sequence Alignment a core of sites encoding functional and structural
information.
| [
{
"created": "Thu, 12 Mar 2015 17:07:19 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jan 2016 21:26:33 GMT",
"version": "v2"
}
] | 2016-04-12 | [
[
"Grigolon",
"Silvia",
""
],
[
"Franz",
"Silvio",
""
],
[
"Marsili",
"Matteo",
""
]
] | Evolution in its course found a variety of solutions to the same optimisation problem. The advent of high-throughput genomic sequencing has made available extensive data from which, in principle, one can infer the underlying structure on which biological functions rely. In this paper, we present a new method aimed at extracting sites encoding structural and functional properties from a set of protein primary sequences, namely a Multiple Sequence Alignment. The method, called Critical Variable Selection, is based on the idea that subsets of relevant sites correspond to subsequences that occur with a particularly broad frequency distribution in the dataset. By applying this algorithm to in silico sequences, to the Response Regulator Receiver and to the Voltage Sensor Domain of Ion Channels, we show that this procedure recovers not only information encoded in single site statistics and pairwise correlations but it also captures dependencies going beyond pairwise correlations. The method proposed here is complementary to Statistical Coupling Analysis, in that the most relevant sites predicted by the two methods markedly differ. We find robust and consistent results for datasets as small as a few hundred sequences, that reveal a hidden hierarchy of sites that is consistent with present knowledge on biologically relevant sites and evolutionary dynamics. This suggests that Critical Variable Selection is able to identify in a Multiple Sequence Alignment a core of sites encoding functional and structural information.
1403.3659 | Gerard Govaert | Julien Caudeville (INERIS), C\'eline Boudet (INERIS), G\'erard Govaert
(HEUDIASYC), Roseline Bonnard (INERIS), Andr\'e Cicollela (INERIS) | Construction d'une plate-forme int\'egr\'ee pour la cartographie de
l'exposition des populations aux substances chimiques de l'environnement | null | Env Risque Sante 10 (2011) 239-242 | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | L'analyse du lien entre l'environnement et la sant\'e est devenue une
pr\'eoccupation majeure de sant\'e publique comme en t\'emoigne l'\'emergence
des deux Plans nationaux sant\'e environnement. Pour ce faire, les d\'ecideurs
sont confront\'es au besoin de d\'eveloppement d'outils n\'ecessaires \`a
l'identification des zones g\'eographiques dans lesquelles une surexposition
potentielle \`a des substances toxiques est observ\'ee. L'objectif du projet
Syst\`eme d'information g\'eographique (SIG), facteurs de risques
environnementaux et d\'ec\`es par cancer (SIGFRIED 1) est de construire une
plate-forme de mod\'elisation permettant d'\'evaluer, par une approche
spatiale, l'exposition de la population fran\c{c}aise aux substances chimiques
et d'en identifier ses d\'eterminants. L'\'evaluation des expositions est
r\'ealis\'ee par le biais d'une mod\'elisation multim\'edia probabiliste. Les
probl\`emes \'epist\'emologiques li\'es \`a l'absence de donn\'ees sont
palli\'es par la mise en {\oe}uvre d'outils utilisant les techniques d'analyse
spatiale. Un exemple est fourni sur la r\'egion Nord-Pas-de-Calais et Picardie,
pour le cadmium, le nickel et le plomb. Le calcul de l'exposition est
r\'ealis\'e sur une dur\'ee de 70 ans sur la base des donn\'ees disponibles
autour de l'ann\'ee 2004 sur une maille de 1 km de c\^ot\'e. Par exemple pour
le Nord-Pas-de-Calais, les indicateurs permettent de d\'efinir deux zones pour
le cadmium et trois zones pour le plomb. Celles-ci sont li\'ees \`a
l'historique industriel de la r\'egion : le bassin minier, les activit\'es
m\'etallurgiques et l'agglom\'eration lilloise. La contribution des
diff\'erentes voies d'exposition varie sensiblement d'un polluant \`a l'autre.
Les cartes d'exposition ainsi obtenues permettent d'identifier les zones
g\'eographiques dans lesquelles conduire en priorit\'e des \'etudes
environnementales de terrains. Le SIG construit constitue la base d'une
plate-forme o\`u les donn\'ees d'\'emission \`a la source, de mesures
environnementales, d'exposition, puis sanitaires et socio-\'economiques
pourront \^etre associ\'ees.
--
Analysis of the association between the environment and health has become a
major public health concern, as shown by the development of two national
environmental health plans. For such an analysis, policy-makers need tools to
identify the geographic areas where overexposure to toxic agents may be
observed. The objective of the SIGFRIED 1 project is to build a work station
for spatial modeling of the exposure of the French population to chemical
substances and for identifying the determinants of this exposure. Probabilistic
multimedia modeling is used to assess exposure. The epistemological problems
associated with the absence of data are overcome by the implementation of tools
that apply spatial analysis techniques. An example is furnished for the region
of Nord-Pas-de-Calais and Picardie, for cadmium, nickel and lead exposure. The
calculation of exposure is performed for a duration of 70 years on the basis of
data collected around 2004 for a grid of squares 1 km on a side. For example,
for Nord-Pas-de-Calais, the indicators allow us to define two areas for cadmium
and three for lead. They are linked to the region's industrial history: mining
basin, metallurgy activities, and the Lille metropolitan area. The contribution
of various exposure pathways varied substantially from one pollutant to
another. The exposure maps thus obtained allow us to identify the geographic
area where environmental studies must be conducted in priority. The GIS thus
constructed is the foundation of a workstation where source emission data,
environmental exposure measurements, and finally health and socioeconomic
measurements can be combined.
| [
{
"created": "Wed, 12 Feb 2014 20:27:20 GMT",
"version": "v1"
}
] | 2014-03-17 | [
[
"Caudeville",
"Julien",
"",
"INERIS"
],
[
"Boudet",
"Céline",
"",
"INERIS"
],
[
"Govaert",
"Gérard",
"",
"HEUDIASYC"
],
[
"Bonnard",
"Roseline",
"",
"INERIS"
],
[
"Cicollela",
"André",
"",
"INERIS"
]
] | L'analyse du lien entre l'environnement et la sant\'e est devenue une pr\'eoccupation majeure de sant\'e publique comme en t\'emoigne l'\'emergence des deux Plans nationaux sant\'e environnement. Pour ce faire, les d\'ecideurs sont confront\'es au besoin de d\'eveloppement d'outils n\'ecessaires \`a l'identification des zones g\'eographiques dans lesquelles une surexposition potentielle \`a des substances toxiques est observ\'ee. L'objectif du projet Syst\`eme d'information g\'eographique (SIG), facteurs de risques environnementaux et d\'ec\`es par cancer (SIGFRIED 1) est de construire une plate-forme de mod\'elisation permettant d'\'evaluer, par une approche spatiale, l'exposition de la population fran\c{c}aise aux substances chimiques et d'en identifier ses d\'eterminants. L'\'evaluation des expositions est r\'ealis\'ee par le biais d'une mod\'elisation multim\'edia probabiliste. Les probl\`emes \'epist\'emologiques li\'es \`a l'absence de donn\'ees sont palli\'es par la mise en {\oe}uvre d'outils utilisant les techniques d'analyse spatiale. Un exemple est fourni sur la r\'egion Nord-Pas-de-Calais et Picardie, pour le cadmium, le nickel et le plomb. Le calcul de l'exposition est r\'ealis\'e sur une dur\'ee de 70 ans sur la base des donn\'ees disponibles autour de l'ann\'ee 2004 sur une maille de 1 km de c\^ot\'e. Par exemple pour le Nord-Pas-de-Calais, les indicateurs permettent de d\'efinir deux zones pour le cadmium et trois zones pour le plomb. Celles-ci sont li\'ees \`a l'historique industriel de la r\'egion : le bassin minier, les activit\'es m\'etallurgiques et l'agglom\'eration lilloise. La contribution des diff\'erentes voies d'exposition varie sensiblement d'un polluant \`a l'autre. Les cartes d'exposition ainsi obtenues permettent d'identifier les zones g\'eographiques dans lesquelles conduire en priorit\'e des \'etudes environnementales de terrains. 
Le SIG construit constitue la base d'une plate-forme o\`u les donn\'ees d'\'emission \`a la source, de mesures environnementales, d'exposition, puis sanitaires et socio-\'economiques pourront \^etre associ\'ees. -- Analysis of the association between the environment and health has become a major public health concern, as shown by the development of two national environmental health plans. For such an analysis, policy-makers need tools to identify the geographic areas where overexposure to toxic agents may be observed. The objective of the SIGFRIED 1 project is to build a work station for spatial modeling of the exposure of the French population to chemical substances and for identifying the determinants of this exposure. Probabilistic multimedia modeling is used to assess exposure. The epistemological problems associated with the absence of data are overcome by the implementation of tools that apply spatial analysis techniques. An example is furnished for the region of Nord-Pas-de-Calais and Picardie, for cadmium, nickel and lead exposure. The calculation of exposure is performed for a duration of 70 years on the basis of data collected around 2004 for a grid of squares 1 km on a side. For example, for Nord-Pas-de-Calais, the indicators allow us to define two areas for cadmium and three for lead. They are linked to the region's industrial history: mining basin, metallurgy activities, and the Lille metropolitan area. The contribution of various exposure pathways varied substantially from one pollutant to another. The exposure maps thus obtained allow us to identify the geographic area where environmental studies must be conducted in priority. The GIS thus constructed is the foundation of a workstation where source emission data, environmental exposure measurements, and finally health and socioeconomic measurements can be combined. |
2305.09369 | Eleni Nisioti | Eleni Nisioti and Cl\'ement Moulin-Frier | Dynamics of niche construction in adaptable populations evolving in
diverse environments | null | null | null | null | q-bio.PE cs.NE | http://creativecommons.org/licenses/by/4.0/ | In both natural and artificial studies, evolution is often seen as synonymous
with natural selection. Individuals evolve under pressures set by environments
that are either reset or do not carry over significant changes from previous
generations. Thus, niche construction (NC), the reciprocal process to natural
selection where individuals incur inheritable changes to their environment, is
ignored. Arguably due to this lack of study, the dynamics of NC are today
little understood, especially in real-world settings. In this work, we study NC
in simulation environments that consist of multiple, diverse niches and
populations that evolve their plasticity, evolvability and niche-constructing
behaviors. Our empirical analysis reveals many interesting dynamics, with
populations experiencing mass extinctions, arms races and oscillations. To
understand these behaviors, we analyze the interaction between NC and
adaptability and the effect of NC on the population's genomic diversity and
dispersal, observing that NC diversifies niches. Our study suggests that
complexifying the simulation environments studying NC, by considering multiple
and diverse niches, is necessary for understanding its dynamics and can lend
testable hypotheses to future studies of both natural and artificial systems.
| [
{
"created": "Tue, 16 May 2023 11:52:14 GMT",
"version": "v1"
}
] | 2023-05-17 | [
[
"Nisioti",
"Eleni",
""
],
[
"Moulin-Frier",
"Clément",
""
]
] | In both natural and artificial studies, evolution is often seen as synonymous to natural selection. Individuals evolve under pressures set by environments that are either reset or do not carry over significant changes from previous generations. Thus, niche construction (NC), the reciprocal process to natural selection where individuals incur inheritable changes to their environment, is ignored. Arguably due to this lack of study, the dynamics of NC are today little understood, especially in real-world settings. In this work, we study NC in simulation environments that consist of multiple, diverse niches and populations that evolve their plasticity, evolvability and niche-constructing behaviors. Our empirical analysis reveals many interesting dynamics, with populations experiencing mass extinctions, arms races and oscillations. To understand these behaviors, we analyze the interaction between NC and adaptability and the effect of NC on the population's genomic diversity and dispersal, observing that NC diversifies niches. Our study suggests that complexifying the simulation environments studying NC, by considering multiple and diverse niches, is necessary for understanding its dynamics and can lend testable hypotheses to future studies of both natural and artificial systems. |
0805.0223 | Vadim N. Biktashev | S. W. Morgan, G. Plank, I. V. Biktasheva, V. N. Biktashev | Low energy defibrillation in human cardiac tissue: a simulation study | 17 pages, 7 figures, as accepted to Biophysical Journal 2008/11/25 | null | 10.1016/j.bpj.2008.11.031 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We aim to assess the effectiveness of feedback controlled resonant drift
pacing as a method for low energy defibrillation. Antitachycardia pacing is the
only low energy defibrillation approach to have gained clinical significance,
but it is still suboptimal. Low energy defibrillation would avoid adverse side
effects associated with high voltage shocks and allow the application of ICD
therapy where it is not tolerated today. We present results of computer
simulations of a bidomain model of cardiac tissue with human atrial ionic
kinetics. Re-entry was initiated and low energy shocks were applied with the
same period as the re-entry, using feedback to maintain resonance. We
demonstrate that such stimulation can move the core of re-entrant patterns, in
the direction depending on location of electrodes and a time delay in the
feedback. Termination of re-entry is achieved with shock strength one order of
magnitude weaker than in conventional single-shock defibrillation. We conclude
that resonant drift pacing can terminate re-entry at a fraction of the shock
strength currently used for defibrillation and can potentially work where
antitachycardia pacing fails, due to the feedback mechanisms. Success depends
on a number of details which these numerical simulations have uncovered.
\emph{Keywords} Re-entry; Bidomain model; Resonant drift; ICD; Defibrillation;
Antitachycardia pacing; Feedback.
| [
{
"created": "Fri, 2 May 2008 12:48:45 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Nov 2008 15:35:40 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Morgan",
"S. W.",
""
],
[
"Plank",
"G.",
""
],
[
"Biktasheva",
"I. V.",
""
],
[
"Biktashev",
"V. N.",
""
]
] | We aim to assess the effectiveness of feedback controlled resonant drift pacing as a method for low energy defibrillation. Antitachycardia pacing is the only low energy defibrillation approach to have gained clinical significance, but it is still suboptimal. Low energy defibrillation would avoid adverse side effects associated with high voltage shocks and allow the application of ICD therapy where it is not tolerated today. We present results of computer simulations of a bidomain model of cardiac tissue with human atrial ionic kinetics. Re-entry was initiated and low energy shocks were applied with the same period as the re-entry, using feedback to maintain resonance. We demonstrate that such stimulation can move the core of re-entrant patterns, in the direction depending on location of electrodes and a time delay in the feedback. Termination of re-entry is achieved with shock strength one order of magnitude weaker than in conventional single-shock defibrillation. We conclude that resonant drift pacing can terminate re-entry at a fraction of the shock strength currently used for defibrillation and can potentially work where antitachycardia pacing fails, due to the feedback mechanisms. Success depends on a number of details which these numerical simulations have uncovered. \emph{Keywords} Re-entry; Bidomain model; Resonant drift; ICD; Defibrillation; Antitachycardia pacing; Feedback. |
2012.00566 | A. K. M. Azad | AKM Azad, Salem Alyami | Discovering novel cancer bio-markers in acquired lapatinib resistance
using Bayesian methods | null | 15 April 2021, Briefings in Bioinformatics | 10.1093/bib/bbab137 | bbab137 | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Genes/Proteins do not work alone within our body, rather as a group they
perform certain activities indicated as pathways. Signalling transduction
pathways (STPs) are some of the important pathways that transmit biological
signals from protein-to-protein controlling several cellular activities.
However, many diseases such as cancer target some of these signalling pathways
for their growth and malignancy, but demystifying their underlying mechanisms
is a very complicated task. In this study, we use a fully Bayesian approach
to develop methodologies for discovering novel driver bio-markers in aberrant
STPs given two-conditional high-throughput gene expression data. This project,
namely PathTurbEr (Pathway Perturbation Driver), is applied on a global gene
expression dataset derived from the lapatinib (an EGFR/HER dual inhibitor)
sensitive and resistant samples from breast cancer cell lines (SKBR3).
Differential expression analysis revealed 512 differentially expressed genes
(DEGs) and their signalling pathway enrichment analysis revealed 22 signalling
pathways as aberrant, including PI3K-AKT, Hippo, Chemokine, and the TGF-beta
signalling pathway as highly dysregulated in lapatinib resistance. Next, we
model the aberrant activities in TGF-beta STP as a causal Bayesian network (BN)
from given observational datasets using three Markov Chain Monte Carlo (MCMC)
sampling methods, i.e. Neighbourhood sampler (NS) and Hit-and-Run (HAR)
sampler, which has already proven to have more robust inference with lower
chances of getting stuck at local optima and faster convergence compared to
other state-of-the-art methods. Next, we examined the structural features of the
optimal BN as a statistical process that generates the global structure using
the $p_1$-model, a special class of Exponential Random Graph Models (ERGMs), and
MCMC methods for their hyper-parameter sampling....
| [
{
"created": "Mon, 30 Nov 2020 14:58:04 GMT",
"version": "v1"
}
] | 2021-08-02 | [
[
"Azad",
"AKM",
""
],
[
"Alyami",
"Salem",
""
]
] | Genes/Proteins do not work alone within our body, rather as a group they perform certain activities indicated as pathways. Signalling transduction pathways (STPs) are some of the important pathways that transmit biological signals from protein-to-protein controlling several cellular activities. However, many diseases such as cancer target some of these signalling pathways for their growth and malignancy, but demystifying their underlying mechanisms is a very complicated task. In this study, we use a fully Bayesian approach to develop methodologies for discovering novel driver bio-markers in aberrant STPs given two-conditional high-throughput gene expression data. This project, namely PathTurbEr (Pathway Perturbation Driver), is applied on a global gene expression dataset derived from the lapatinib (an EGFR/HER dual inhibitor) sensitive and resistant samples from breast cancer cell lines (SKBR3). Differential expression analysis revealed 512 differentially expressed genes (DEGs) and their signalling pathway enrichment analysis revealed 22 signalling pathways as aberrant, including PI3K-AKT, Hippo, Chemokine, and the TGF-beta signalling pathway as highly dysregulated in lapatinib resistance. Next, we model the aberrant activities in TGF-beta STP as a causal Bayesian network (BN) from given observational datasets using three Markov Chain Monte Carlo (MCMC) sampling methods, i.e. Neighbourhood sampler (NS) and Hit-and-Run (HAR) sampler, which has already proven to have more robust inference with lower chances of getting stuck at local optima and faster convergence compared to other state-of-the-art methods. Next, we examined the structural features of the optimal BN as a statistical process that generates the global structure using the $p_1$-model, a special class of Exponential Random Graph Models (ERGMs), and MCMC methods for their hyper-parameter sampling.... |
1004.5332 | Steven Kelk | Leo van Iersel and Steven Kelk | When two trees go to war | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rooted phylogenetic networks are often constructed by combining trees,
clusters, triplets or characters into a single network that in some
well-defined sense simultaneously represents them all. We review these four
models and investigate how they are related. In general, the model chosen
influences the minimum number of reticulation events required. However, when
one obtains the input data from two binary trees, we show that the minimum
number of reticulations is independent of the model. The number of
reticulations necessary to represent the trees, triplets, clusters (in the
softwired sense) and characters (with unrestricted multiple crossover
recombination) are all equal. Furthermore, we show that these results also hold
when not the number of reticulations but the level of the constructed network
is minimised. We use these unification results to settle several complexity
questions that have been open in the field for some time. We also give explicit
examples to show that already for data obtained from three binary trees the
models begin to diverge.
| [
{
"created": "Thu, 29 Apr 2010 16:30:16 GMT",
"version": "v1"
}
] | 2010-04-30 | [
[
"van Iersel",
"Leo",
""
],
[
"Kelk",
"Steven",
""
]
] | Rooted phylogenetic networks are often constructed by combining trees, clusters, triplets or characters into a single network that in some well-defined sense simultaneously represents them all. We review these four models and investigate how they are related. In general, the model chosen influences the minimum number of reticulation events required. However, when one obtains the input data from two binary trees, we show that the minimum number of reticulations is independent of the model. The number of reticulations necessary to represent the trees, triplets, clusters (in the softwired sense) and characters (with unrestricted multiple crossover recombination) are all equal. Furthermore, we show that these results also hold when not the number of reticulations but the level of the constructed network is minimised. We use these unification results to settle several complexity questions that have been open in the field for some time. We also give explicit examples to show that already for data obtained from three binary trees the models begin to diverge. |
0908.0634 | Peter O. Fedichev | P. O. Fedichev, G. N. Getmantsev, L. I. Menshikov | Fast Surface Based Electrostatics for biomolecules modeling | multiple changes, improved text, material and refs added | null | null | null | q-bio.QM physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze deficiencies of commonly used Coulomb approximations in
Generalized Born solvation energy calculation models and report the development
of a new fast surface-based method (FSBE) for numerical calculations of the
solvation energy of biomolecules with charged groups. The procedure is in error
by only a few percent for molecular configurations of arbitrary size, provides
explicit values for the reaction field potential at any point of the molecular
interior, water polarization at the surface of the molecule, both the solvation
energy value and its derivatives suitable for Molecular Dynamics (MD)
simulations. The method works well both for large and small molecules and thus
gives stable energy differences for quantities such as solvation energy
contributions to molecular complex formation.
| [
{
"created": "Wed, 5 Aug 2009 10:05:12 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2009 05:39:14 GMT",
"version": "v2"
}
] | 2009-10-14 | [
[
"Fedichev",
"P. O.",
""
],
[
"Getmantsev",
"G. N.",
""
],
[
"Menshikov",
"L. I.",
""
]
] | We analyze deficiencies of commonly used Coulomb approximations in Generalized Born solvation energy calculation models and report the development of a new fast surface-based method (FSBE) for numerical calculations of the solvation energy of biomolecules with charged groups. The procedure is in error by only a few percent for molecular configurations of arbitrary size, provides explicit values for the reaction field potential at any point of the molecular interior, water polarization at the surface of the molecule, both the solvation energy value and its derivatives suitable for Molecular Dynamics (MD) simulations. The method works well both for large and small molecules and thus gives stable energy differences for quantities such as solvation energy contributions to molecular complex formation. |
1003.1415 | Subhadip Raychaudhuri | Philippos K. Tsourkas, Subhadip Raychaudhuri | Affinity Discrimination in B cells in Response to Membrane Antigen
Requires Kinetic Proofreading | 29 pages, 11 figures | null | null | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | B cell signaling in response to antigen is proportional to antigen affinity,
a process known as affinity discrimination. Recent research suggests that B
cells can acquire antigen in membrane-bound form on the surface of
antigen-presenting cells (APCs), with signaling being initiated within a few
seconds of B cell/APC contact. During the earliest stages of B cell/APC
contact, B cell receptors (BCRs) on protrusions of the B cell surface bind to
antigen on the APC surface and form micro-clusters of 10-100 BCR/Antigen
complexes. In this study, we use computational modeling to show that B cell
affinity discrimination at the level of BCR-antigen micro-clusters requires a
threshold antigen binding time, in a manner similar to kinetic proofreading. We
find that if BCR molecules become signaling-capable immediately upon binding
antigen, there is a loss in serial engagement due to the increase in bond
lifetime as koff decreases. This results in decreasing signaling strength as
affinity increases. A threshold time for antigen to stay bound to BCR before
the latter becomes signaling-capable favors high affinity BCR-antigen bonds, as
these long-lived bonds can better fulfill the threshold time requirement than
low-affinity bonds. A threshold antigen binding time of ~10 seconds results in
monotonically increasing signaling with affinity, replicating the affinity
discrimination pattern observed in B cell activation experiments. This time
matches well (within order of magnitude) with the experimentally observed time
(~ 20 seconds) required for the BCR signaling domains to undergo antigen and
lipid raft-mediated conformational changes that lead to association with Syk.
| [
{
"created": "Sat, 6 Mar 2010 18:42:23 GMT",
"version": "v1"
}
] | 2010-03-09 | [
[
"Tsourkas",
"Philippos K.",
""
],
[
"Raychaudhuri",
"Subhadip",
""
]
] | B cells signaling in response to antigen is proportional to antigen affinity, a process known as affinity discrimination. Recent research suggests that B cells can acquire antigen in membrane-bound form on the surface of antigen-presenting cells (APCs), with signaling being initiated within a few seconds of B cell/APC contact. During the earliest stages of B cell/APC contact, B cell receptors (BCRs) on protrusions of the B cell surface bind to antigen on the APC surface and form micro-clusters of 10-100 BCR/Antigen complexes. In this study, we use computational modeling to show that B cell affinity discrimination at the level of BCR-antigen micro-clusters requires a threshold antigen binding time, in a manner similar to kinetic proofreading. We find that if BCR molecules become signaling-capable immediately upon binding antigen, there is a loss in serial engagement due to the increase in bond lifetime as koff decreases. This results in decreasing signaling strength as affinity increases. A threshold time for antigen to stay bound to BCR before the latter becomes signaling-capable favors high affinity BCR-antigen bonds, as these long-lived bonds can better fulfill the threshold time requirement than low-affinity bonds. A threshold antigen binding time of ~10 seconds results in monotonically increasing signaling with affinity, replicating the affinity discrimination pattern observed in B cell activation experiments. This time matches well (within order of magnitude) with the experimentally observed time (~ 20 seconds) required for the BCR signaling domains to undergo antigen and lipid raft-mediated conformational changes that lead to association with Syk. |
2012.11240 | Tiziana Cattai | Tiziana Cattai, Gaetano Scarano, Marie-Constance Corsi, Danielle S.
Bassett, Fabrizio De Vico Fallani, Stefania Colonnese | Improving J-divergence of brain connectivity states by graph Laplacian
denoising | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | IEEE transactions on Signal and Information Processing over
Networks 7 (2021): 493-508 | 10.1109/TSIPN.2021.3100302 | null | q-bio.NC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional connectivity (FC) can be represented as a network, and is
frequently used to better understand the neural underpinnings of complex tasks
such as motor imagery (MI) detection in brain-computer interfaces (BCIs).
However, errors in the estimation of connectivity can affect the detection
performance. In this work, we address the problem of denoising common
connectivity estimates to improve the detectability of different connectivity
states. Specifically, we propose a denoising algorithm that acts on the network
graph Laplacian, which leverages recent graph signal processing results.
Further, we derive a novel formulation of the Jensen divergence for the
denoised Laplacian under different states. Numerical simulations on synthetic
data show that the denoising method improves the Jensen divergence of
connectivity patterns corresponding to different task conditions. Furthermore,
we apply the Laplacian denoising technique to brain networks estimated from
real EEG data recorded during MI-BCI experiments. Using our novel formulation
of the J-divergence, we are able to quantify the distance between the FC
networks in the motor imagery and resting states, as well as to understand the
contribution of each Laplacian variable to the total J-divergence between two
states. Experimental results on real MI-BCI EEG data demonstrate that the
Laplacian denoising improves the separation of motor imagery and resting mental
states, and shortens the time interval required for connectivity estimation. We
conclude that the approach shows promise for the robust detection of
connectivity states while being appealing for implementation in real-time BCI
applications.
| [
{
"created": "Mon, 21 Dec 2020 10:43:50 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Dec 2020 11:23:07 GMT",
"version": "v2"
}
] | 2024-02-02 | [
[
"Cattai",
"Tiziana",
""
],
[
"Scarano",
"Gaetano",
""
],
[
"Corsi",
"Marie-Constance",
""
],
[
"Bassett",
"Danielle S.",
""
],
[
"Fallani",
"Fabrizio De Vico",
""
],
[
"Colonnese",
"Stefania",
""
]
] | Functional connectivity (FC) can be represented as a network, and is frequently used to better understand the neural underpinnings of complex tasks such as motor imagery (MI) detection in brain-computer interfaces (BCIs). However, errors in the estimation of connectivity can affect the detection performances. In this work, we address the problem of denoising common connectivity estimates to improve the detectability of different connectivity states. Specifically, we propose a denoising algorithm that acts on the network graph Laplacian, which leverages recent graph signal processing results. Further, we derive a novel formulation of the Jensen divergence for the denoised Laplacian under different states. Numerical simulations on synthetic data show that the denoising method improves the Jensen divergence of connectivity patterns corresponding to different task conditions. Furthermore, we apply the Laplacian denoising technique to brain networks estimated from real EEG data recorded during MI-BCI experiments. Using our novel formulation of the J-divergence, we are able to quantify the distance between the FC networks in the motor imagery and resting states, as well as to understand the contribution of each Laplacian variable to the total J-divergence between two states. Experimental results on real MI-BCI EEG data demonstrate that the Laplacian denoising improves the separation of motor imagery and resting mental states, and shortens the time interval required for connectivity estimation. We conclude that the approach shows promise for the robust detection of connectivity states while being appealing for implementation in real-time BCI applications. |
1911.11326 | Doo Seok Jeong | Vladimir Kornijcuk, Dohun Kim, Guhyun Kim, Doo Seok Jeong | Simplified calcium signaling cascade for synaptic plasticity | 42 pages, 7 figures, Accepted by Neural Networks | null | null | null | q-bio.NC cs.NE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a model for synaptic plasticity based on a calcium signaling
cascade. The model simplifies the full signaling pathways from a calcium influx
to the phosphorylation (potentiation) and dephosphorylation (depression) of
glutamate receptors that are gated by fictive C1 and C2 catalysts,
respectively. This model is based on tangible chemical reactions, including
fictive catalysts, for long-term plasticity rather than the conceptual theories
commonplace in various models, such as preset thresholds of calcium
concentration. Our simplified model successfully reproduced the experimental
synaptic plasticity induced by different protocols such as (i) a synchronous
pairing protocol and (ii) correlated presynaptic and postsynaptic action
potentials (APs). Further, the ocular dominance plasticity (or the experimental
verification of the celebrated Bienenstock--Cooper--Munro theory) was
reproduced by two model synapses that compete by means of back-propagating APs
(bAPs). The key to this competition is synapse-specific bAPs with reference to
bAP-boosting on physiological grounds.
| [
{
"created": "Tue, 26 Nov 2019 04:02:34 GMT",
"version": "v1"
}
] | 2019-11-27 | [
[
"Kornijcuk",
"Vladimir",
""
],
[
"Kim",
"Dohun",
""
],
[
"Kim",
"Guhyun",
""
],
[
"Jeong",
"Doo Seok",
""
]
] | We propose a model for synaptic plasticity based on a calcium signaling cascade. The model simplifies the full signaling pathways from a calcium influx to the phosphorylation (potentiation) and dephosphorylation (depression) of glutamate receptors that are gated by fictive C1 and C2 catalysts, respectively. This model is based on tangible chemical reactions, including fictive catalysts, for long-term plasticity rather than the conceptual theories commonplace in various models, such as preset thresholds of calcium concentration. Our simplified model successfully reproduced the experimental synaptic plasticity induced by different protocols such as (i) a synchronous pairing protocol and (ii) correlated presynaptic and postsynaptic action potentials (APs). Further, the ocular dominance plasticity (or the experimental verification of the celebrated Bienenstock--Cooper--Munro theory) was reproduced by two model synapses that compete by means of back-propagating APs (bAPs). The key to this competition is synapse-specific bAPs with reference to bAP-boosting on the physiological grounds. |
2008.03558 | John E. McCarthy | John E. McCarthy and Myles T. McCarthy and Bob A. Dumas | Long range versus short range aerial transmission of SARS-CoV-2 | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We compare the quantitative risk of infection from short range airborne
transmission of SARS-CoV-2 to the long-range risk from aerosol transmission in
an enclosed space.
| [
{
"created": "Sat, 8 Aug 2020 17:11:46 GMT",
"version": "v1"
}
] | 2020-08-11 | [
[
"McCarthy",
"John E.",
""
],
[
"McCarthy",
"Myles T.",
""
],
[
"Dumas",
"Bob A.",
""
]
] | We compare the quantitative risk of infection from short range airborne transmission of SARS-CoV-2 to the long-range risk from aerosol transmission in an enclosed space. |
0803.2443 | Jens Christian Claussen | Jens Christian Claussen | Discrete stochastic processes, replicator and Fokker-Planck equations of
coevolutionary dynamics in finite and infinite populations | Banach Center publications, in press | Banach Center Publications 80, 17-31 (2008) | 10.4064/bc80-0-1 | null | q-bio.PE cond-mat.stat-mech cs.SI math.PR math.ST physics.bio-ph physics.soc-ph stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finite-size fluctuations in coevolutionary dynamics arise in models of
biological as well as of social and economic systems. This brief tutorial
review surveys a systematic approach starting from a stochastic process
discrete both in time and state. The limit $N\to \infty$ of an infinite
population can be considered explicitly, generally leading to a replicator-type
equation in zero order, and to a Fokker-Planck-type equation in first order in
$1/\sqrt{N}$. Consequences and relations to some previous approaches are
outlined.
| [
{
"created": "Mon, 17 Mar 2008 13:33:00 GMT",
"version": "v1"
}
] | 2019-07-15 | [
[
"Claussen",
"Jens Christian",
""
]
] | Finite-size fluctuations in coevolutionary dynamics arise in models of biological as well as of social and economic systems. This brief tutorial review surveys a systematic approach starting from a stochastic process discrete both in time and state. The limit $N\to \infty$ of an infinite population can be considered explicitly, generally leading to a replicator-type equation in zero order, and to a Fokker-Planck-type equation in first order in $1/\sqrt{N}$. Consequences and relations to some previous approaches are outlined. |
1306.5110 | Adam Siepel | Matthew D. Rasmussen, Melissa J. Hubisz, Ilan Gronau, Adam Siepel | Genome-wide inference of ancestral recombination graphs | 88 pages, 7 main figures, 22 supplementary figures. This version
contains a substantially expanded genomic data analysis | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The complex correlation structure of a collection of orthologous DNA
sequences is uniquely captured by the "ancestral recombination graph" (ARG), a
complete record of coalescence and recombination events in the history of the
sample. However, existing methods for ARG inference are computationally
intensive, highly approximate, or limited to small numbers of sequences, and,
as a consequence, explicit ARG inference is rarely used in applied population
genomics. Here, we introduce a new algorithm for ARG inference that is
efficient enough to apply to dozens of complete mammalian genomes. The key idea
of our approach is to sample an ARG of n chromosomes conditional on an ARG of
n-1 chromosomes, an operation we call "threading." Using techniques based on
hidden Markov models, we can perform this threading operation exactly, up to
the assumptions of the sequentially Markov coalescent and a discretization of
time. An extension allows for threading of subtrees instead of individual
sequences. Repeated application of these threading operations results in highly
efficient Markov chain Monte Carlo samplers for ARGs. We have implemented these
methods in a computer program called ARGweaver. Experiments with simulated data
indicate that ARGweaver converges rapidly to the true posterior distribution
and is effective in recovering various features of the ARG for dozens of
sequences generated under realistic parameters for human populations. In
applications of ARGweaver to 54 human genome sequences from Complete Genomics,
we find clear signatures of natural selection, including regions of unusually
ancient ancestry associated with balancing selection and reductions in allele
age in sites under directional selection. Preliminary results also indicate
that our methods can be used to gain insight into complex features of human
population structure, even with a noninformative prior distribution.
| [
{
"created": "Fri, 21 Jun 2013 11:55:48 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Jul 2013 20:40:41 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Dec 2013 18:34:04 GMT",
"version": "v3"
}
] | 2013-12-04 | [
[
"Rasmussen",
"Matthew D.",
""
],
[
"Hubisz",
"Melissa J.",
""
],
[
"Gronau",
"Ilan",
""
],
[
"Siepel",
"Adam",
""
]
] | The complex correlation structure of a collection of orthologous DNA sequences is uniquely captured by the "ancestral recombination graph" (ARG), a complete record of coalescence and recombination events in the history of the sample. However, existing methods for ARG inference are computationally intensive, highly approximate, or limited to small numbers of sequences, and, as a consequence, explicit ARG inference is rarely used in applied population genomics. Here, we introduce a new algorithm for ARG inference that is efficient enough to apply to dozens of complete mammalian genomes. The key idea of our approach is to sample an ARG of n chromosomes conditional on an ARG of n-1 chromosomes, an operation we call "threading." Using techniques based on hidden Markov models, we can perform this threading operation exactly, up to the assumptions of the sequentially Markov coalescent and a discretization of time. An extension allows for threading of subtrees instead of individual sequences. Repeated application of these threading operations results in highly efficient Markov chain Monte Carlo samplers for ARGs. We have implemented these methods in a computer program called ARGweaver. Experiments with simulated data indicate that ARGweaver converges rapidly to the true posterior distribution and is effective in recovering various features of the ARG for dozens of sequences generated under realistic parameters for human populations. In applications of ARGweaver to 54 human genome sequences from Complete Genomics, we find clear signatures of natural selection, including regions of unusually ancient ancestry associated with balancing selection and reductions in allele age in sites under directional selection. Preliminary results also indicate that our methods can be used to gain insight into complex features of human population structure, even with a noninformative prior distribution. |
1710.02199 | Joaquin Goni | Enrico Amico, Joaqu\'in Go\~ni | Mapping hybrid functional-structural connectivity traits in the human
connectome | article: 34 pages, 4 figures; supplementary material: 5 pages, 5
figures | Network Neuroscience; 2018 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the crucial questions in neuroscience is how a rich functional
repertoire of brain states relates to its underlying structural organization.
How to study the associations between these structural and functional layers is
an open problem that involves novel conceptual ways of tackling this question.
We here propose an extension of the Connectivity Independent Component Analysis
(connICA) framework, to identify joint structural-functional connectivity
traits. Here, we extend connICA to integrate structural and functional
connectomes by merging them into common hybrid connectivity patterns that
represent the connectivity fingerprint of a subject. We test this extended
approach on the 100 unrelated subjects from the Human Connectome Project. The
method is able to extract main independent structural-functional connectivity
patterns from the entire cohort that are sensitive to the realization of
different tasks. The hybrid connICA extracted two main task-sensitive hybrid
traits. The first, encompassing the within and between connections of dorsal
attentional and visual areas, as well as fronto-parietal circuits. The second,
mainly encompassing the connectivity between visual, attentional, DMN and
subcortical networks. Overall, these findings confirm the potential of the
hybrid connICA for the compression of structural/functional connectomes into
integrated patterns from a set of individual brain networks.
| [
{
"created": "Thu, 5 Oct 2017 20:13:05 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jan 2018 15:06:04 GMT",
"version": "v2"
},
{
"created": "Wed, 21 Feb 2018 15:30:39 GMT",
"version": "v3"
}
] | 2018-02-22 | [
[
"Amico",
"Enrico",
""
],
[
"Goñi",
"Joaquín",
""
]
] | One of the crucial questions in neuroscience is how a rich functional repertoire of brain states relates to its underlying structural organization. How to study the associations between these structural and functional layers is an open problem that involves novel conceptual ways of tackling this question. We here propose an extension of the Connectivity Independent Component Analysis (connICA) framework, to identify joint structural-functional connectivity traits. Here, we extend connICA to integrate structural and functional connectomes by merging them into common hybrid connectivity patterns that represent the connectivity fingerprint of a subject. We test this extended approach on the 100 unrelated subjects from the Human Connectome Project. The method is able to extract main independent structural-functional connectivity patterns from the entire cohort that are sensitive to the realization of different tasks. The hybrid connICA extracted two main task-sensitive hybrid traits. The first, encompassing the within and between connections of dorsal attentional and visual areas, as well as fronto-parietal circuits. The second, mainly encompassing the connectivity between visual, attentional, DMN and subcortical networks. Overall, these findings confirm the potential of the hybrid connICA for the compression of structural/functional connectomes into integrated patterns from a set of individual brain networks. |