id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2106.08820 | Aram Ansary Ogholbake | Aram Ansary Ogholbake, Hana Khamfroush | A Modified SEIR Model for the Spread of COVID-19 Considering Different
Vaccine Types | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The COVID-19 pandemic has influenced the lives of people globally. In the
past year, many researchers have proposed different models and approaches to
explore how the spread of the disease could be mitigated. One of the most
widely used models is the Susceptible-Exposed-Infectious-Recovered (SEIR)
model. Some researchers have
modified the traditional SEIR model, and proposed new versions of it. However,
to the best of our knowledge, state-of-the-art papers have not considered the
effect of different vaccine types, namely single-shot and double-shot
vaccines, in their SEIR models. In this paper, we propose a modified version of
the SEIR model which takes into account the effect of different vaccine types.
We compare how different policies for the administration of the vaccine can
influence the rate at which people are exposed to the disease, get infected,
recover, and pass away. Our results suggest that taking a double-shot vaccine
such as Pfizer-BioNTech or Moderna does a better job of mitigating the spread
and fatality rate of the disease than a single-shot vaccine, due to its higher
efficacy.
| [
{
"created": "Sun, 6 Jun 2021 16:35:20 GMT",
"version": "v1"
}
] | 2021-06-17 | [
[
"Ogholbake",
"Aram Ansary",
""
],
[
"Khamfroush",
"Hana",
""
]
] | The COVID-19 pandemic has influenced the lives of people globally. In the past year, many researchers have proposed different models and approaches to explore how the spread of the disease could be mitigated. One of the most widely used models is the Susceptible-Exposed-Infectious-Recovered (SEIR) model. Some researchers have modified the traditional SEIR model, and proposed new versions of it. However, to the best of our knowledge, state-of-the-art papers have not considered the effect of different vaccine types, namely single-shot and double-shot vaccines, in their SEIR models. In this paper, we propose a modified version of the SEIR model which takes into account the effect of different vaccine types. We compare how different policies for the administration of the vaccine can influence the rate at which people are exposed to the disease, get infected, recover, and pass away. Our results suggest that taking a double-shot vaccine such as Pfizer-BioNTech or Moderna does a better job of mitigating the spread and fatality rate of the disease than a single-shot vaccine, due to its higher efficacy. |
1902.00067 | Yue Cao | Yue Cao and Yang Shen | Bayesian active learning for optimization and uncertainty quantification
in protein docking | null | null | null | null | q-bio.BM cs.LG math.OC stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Motivation: Ab initio protein docking represents a major challenge for
optimizing a noisy and costly "black box"-like function in a high-dimensional
space. Despite progress in this field, there is no docking method available for
rigorous uncertainty quantification (UQ) of its solution quality (e.g.
interface RMSD or iRMSD).
Results: We introduce a novel algorithm, Bayesian Active Learning (BAL), for
optimization and UQ of such black-box functions and flexible protein docking.
BAL directly models the posterior distribution of the global optimum (or native
structures for protein docking) with active sampling and posterior estimation
iteratively feeding each other. Furthermore, we use complex normal modes to
represent a homogeneous Euclidean conformation space suitable for
high-dimension optimization and construct funnel-like energy models for
encounter complexes. Over a protein docking benchmark set and a CAPRI set
including homology docking, we establish that BAL significantly improves on
both the starting points from rigid docking and the refinements by particle
swarm optimization, providing a top-3 near-native prediction for one third of
the targets.
BAL also generates tight confidence intervals, with a half range around 25% of
iRMSD and a confidence level of 85%. Its estimated probability of a prediction
being native achieves a binary classification AUROC of 0.93 and an AUPRC over
0.60 (compared to 0.14 by chance), and is also found to help rank predictions.
To the best of our knowledge, this study represents the first uncertainty
quantification solution for protein docking, with theoretical rigor and
comprehensive assessment.
Source codes are available at https://github.com/Shen-Lab/BAL.
| [
{
"created": "Thu, 31 Jan 2019 20:52:06 GMT",
"version": "v1"
}
] | 2019-02-04 | [
[
"Cao",
"Yue",
""
],
[
"Shen",
"Yang",
""
]
] | Motivation: Ab initio protein docking represents a major challenge for optimizing a noisy and costly "black box"-like function in a high-dimensional space. Despite progress in this field, there is no docking method available for rigorous uncertainty quantification (UQ) of its solution quality (e.g. interface RMSD or iRMSD). Results: We introduce a novel algorithm, Bayesian Active Learning (BAL), for optimization and UQ of such black-box functions and flexible protein docking. BAL directly models the posterior distribution of the global optimum (or native structures for protein docking) with active sampling and posterior estimation iteratively feeding each other. Furthermore, we use complex normal modes to represent a homogeneous Euclidean conformation space suitable for high-dimension optimization and construct funnel-like energy models for encounter complexes. Over a protein docking benchmark set and a CAPRI set including homology docking, we establish that BAL significantly improves on both the starting points from rigid docking and the refinements by particle swarm optimization, providing a top-3 near-native prediction for one third of the targets. BAL also generates tight confidence intervals, with a half range around 25% of iRMSD and a confidence level of 85%. Its estimated probability of a prediction being native achieves a binary classification AUROC of 0.93 and an AUPRC over 0.60 (compared to 0.14 by chance), and is also found to help rank predictions. To the best of our knowledge, this study represents the first uncertainty quantification solution for protein docking, with theoretical rigor and comprehensive assessment. Source codes are available at https://github.com/Shen-Lab/BAL. |
1201.0689 | L.T. Handoko | A. Sulaiman, F. P. Zen, H. Alatas, L. T. Handoko | Dynamics of DNA Bubble in Viscous Medium | 4 pages. arXiv admin note: substantial text overlap with
arXiv:1112.4715 | AIP Conference Proceeding 1454 (2012) pp. 298-301 | 10.1063/1.4730745 | FISIKALIPI-11047 | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The damping effect on the DNA bubble is investigated within the
Peyrard-Bishop model. In the continuum limit, the dynamics of the DNA bubble
is described by the damped nonlinear Schrödinger equation and studied by means
of a variational method. It is shown that the propagation of the solitary wave
pattern does not vanish in a non-viscous system. Conversely, the solitary wave
vanishes as soon as the viscous force is introduced.
| [
{
"created": "Tue, 3 Jan 2012 16:38:53 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jun 2012 14:38:09 GMT",
"version": "v2"
}
] | 2012-06-25 | [
[
"Sulaiman",
"A.",
""
],
[
"Zen",
"F. P.",
""
],
[
"Alatas",
"H.",
""
],
[
"Handoko",
"L. T.",
""
]
] | The damping effect on the DNA bubble is investigated within the Peyrard-Bishop model. In the continuum limit, the dynamics of the DNA bubble is described by the damped nonlinear Schrödinger equation and studied by means of a variational method. It is shown that the propagation of the solitary wave pattern does not vanish in a non-viscous system. Conversely, the solitary wave vanishes as soon as the viscous force is introduced. |
1211.6493 | Changbong Hyeon | Jong-Chin Lin, Changbong Hyeon, D. Thirumalai | RNA under Tension: Folding Landscapes, Kinetic Partitioning Mechanism,
and Molecular Tensegrity | 24 pages, 6 figures | J. Phys. Chem. Lett., 2012, vol 3, 3616 | 10.1021/jz301537t | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-coding RNA sequences play a great role in controlling a number of
cellular functions, thus raising the need to understand their complex
conformational dynamics in quantitative detail. In this perspective, we first
show that single-molecule pulling experiments, when combined with theory and
simulations, can be used to quantitatively explore the folding landscape of
nucleic acid hairpins and riboswitches with tertiary interactions.
Applications to riboswitches, which are non-coding RNA elements that control
gene expression by undergoing dynamical conformational changes in response to
binding of metabolites, lead to an organizing principle: the assembly of RNA
is determined by the stability of isolated helices. We also point out the
limitations of single-molecule pulling experiments, with molecular extension as
the only accessible parameter, in extracting key parameters of the folding
landscapes of RNA molecules.
| [
{
"created": "Wed, 28 Nov 2012 01:20:04 GMT",
"version": "v1"
}
] | 2012-11-29 | [
[
"Lin",
"Jong-Chin",
""
],
[
"Hyeon",
"Changbong",
""
],
[
"Thirumalai",
"D.",
""
]
] | Non-coding RNA sequences play a great role in controlling a number of cellular functions, thus raising the need to understand their complex conformational dynamics in quantitative detail. In this perspective, we first show that single-molecule pulling experiments, when combined with theory and simulations, can be used to quantitatively explore the folding landscape of nucleic acid hairpins and riboswitches with tertiary interactions. Applications to riboswitches, which are non-coding RNA elements that control gene expression by undergoing dynamical conformational changes in response to binding of metabolites, lead to an organizing principle: the assembly of RNA is determined by the stability of isolated helices. We also point out the limitations of single-molecule pulling experiments, with molecular extension as the only accessible parameter, in extracting key parameters of the folding landscapes of RNA molecules. |
2011.05755 | Szu-Chi Chung | Szu-Chi Chung, Cheng-Yu Hung, Huei-Lun Siao, Hung-Yi Wu, Wei-Hau Chang
and I-Ping Tu | Cryo-RALib -- a modular library for accelerating alignment in cryo-EM | null | null | null | null | q-bio.QM cs.DC eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Thanks to automated cryo-EM and GPU-accelerated processing, single-particle
cryo-EM has become a rapid structure determination method that permits capture
of dynamical structures of molecules in solution, as recently demonstrated by
the determination of the COVID-19 spike protein structure in March, shortly
after the outbreak in late January 2020. This rapidity is critical for vaccine
development in response to an emerging pandemic. This explains why a 2D
classification approach based on multi-reference alignment (MRA) is not as
popular as the Bayesian-based approach, despite the fact that the former has an
advantage in differentiating structural variations under low signal-to-noise
ratios. This is perhaps because MRA is a time-consuming process and a modular
GPU-acceleration library for MRA is lacking. Here, we introduce a library
called Cryo-RALib that expands the functionality of the CUDA library used by
GPU ISAC. It contains a GPU-accelerated MRA routine for accelerating MRA-based
classification algorithms. In addition, we connect cryo-EM image analysis with
the Python data science stack to make it easier for users to perform
data analysis and visualization. Benchmarking on the TaiWan Computing Cloud
(TWCC) container shows that our implementation can accelerate the computation
by one order of magnitude. The library is available at
https://github.com/phonchi/Cryo-RAlib.
| [
{
"created": "Wed, 11 Nov 2020 13:15:22 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Nov 2020 05:32:16 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Jan 2021 02:28:45 GMT",
"version": "v3"
},
{
"created": "Thu, 25 Feb 2021 08:48:26 GMT",
"version": "v4"
}
] | 2021-02-26 | [
[
"Chung",
"Szu-Chi",
""
],
[
"Hung",
"Cheng-Yu",
""
],
[
"Siao",
"Huei-Lun",
""
],
[
"Wu",
"Hung-Yi",
""
],
[
"Chang",
"Wei-Hau",
""
],
[
"Tu",
"I-Ping",
""
]
] | Thanks to automated cryo-EM and GPU-accelerated processing, single-particle cryo-EM has become a rapid structure determination method that permits capture of dynamical structures of molecules in solution, as recently demonstrated by the determination of the COVID-19 spike protein structure in March, shortly after the outbreak in late January 2020. This rapidity is critical for vaccine development in response to an emerging pandemic. This explains why a 2D classification approach based on multi-reference alignment (MRA) is not as popular as the Bayesian-based approach, despite the fact that the former has an advantage in differentiating structural variations under low signal-to-noise ratios. This is perhaps because MRA is a time-consuming process and a modular GPU-acceleration library for MRA is lacking. Here, we introduce a library called Cryo-RALib that expands the functionality of the CUDA library used by GPU ISAC. It contains a GPU-accelerated MRA routine for accelerating MRA-based classification algorithms. In addition, we connect cryo-EM image analysis with the Python data science stack to make it easier for users to perform data analysis and visualization. Benchmarking on the TaiWan Computing Cloud (TWCC) container shows that our implementation can accelerate the computation by one order of magnitude. The library is available at https://github.com/phonchi/Cryo-RAlib. |
2107.10332 | Abicumaran Uthamacumaran | Abicumaran Uthamacumaran, Samir Elouatik, Mohamed Abdouh, Michael
Berteau-Rainville, Zu-hua Gao, and Goffredo Arena | Machine Learning Characterization of Cancer Patients-Derived
Extracellular Vesicles using Vibrational Spectroscopies | 41 pages | Applied Intelligence (2022) | 10.1007/s10489-022-03203-1 | null | q-bio.OT cs.AI cs.LG physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The early detection of cancer is a challenging problem in medicine. The blood
sera of cancer patients are enriched with heterogeneous, secretory,
lipid-bound extracellular vesicles (EVs), which present a complex repertoire
of information and biomarkers representing their cell of origin and are
currently being studied in the field of liquid biopsy and cancer screening.
Vibrational
spectroscopies provide non-invasive approaches for the assessment of structural
and biophysical properties in complex biological samples. In this pilot study,
multiple Raman spectroscopy measurements were performed on the EVs extracted
from the blood sera of 9 patients with four different cancer subtypes
(colorectal cancer, hepatocellular carcinoma, breast cancer, and pancreatic
cancer) and five healthy patients (controls). FTIR (Fourier Transform Infrared)
spectroscopy measurements were performed as a complementary approach to Raman
analysis on two of the four cancer subtypes. The AdaBoost Random Forest
Classifier, Decision Trees, and Support Vector Machines (SVM) distinguished the
baseline-corrected Raman spectra of cancer EVs from those of healthy controls
(18 spectra) with a classification accuracy above 90 percent when reduced to
a spectral frequency range of 1800 to 1940 inverse cm and subjected to a 50:50
training:testing split. FTIR classification on 14 spectra achieved
80 percent accuracy. Our findings demonstrate that basic machine
learning algorithms are powerful applied intelligence tools to distinguish the
complex vibrational spectra of cancer patient EVs from those of healthy
patients. These experimental methods hold promise as a valid and efficient
liquid biopsy for artificial intelligence-assisted early cancer screening.
| [
{
"created": "Wed, 21 Jul 2021 19:56:33 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Aug 2021 15:41:05 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Aug 2021 15:01:41 GMT",
"version": "v3"
},
{
"created": "Tue, 16 Nov 2021 16:14:36 GMT",
"version": "v4"
},
{
"created": "Wed, 17 Nov 2021 19:03:29 GMT",
"version": "v5"
},
{
"created": "Wed, 22 Dec 2021 21:11:38 GMT",
"version": "v6"
},
{
"created": "Wed, 26 Jan 2022 15:45:58 GMT",
"version": "v7"
},
{
"created": "Fri, 28 Jan 2022 18:54:27 GMT",
"version": "v8"
},
{
"created": "Mon, 14 Feb 2022 04:18:47 GMT",
"version": "v9"
}
] | 2022-02-15 | [
[
"Uthamacumaran",
"Abicumaran",
""
],
[
"Elouatik",
"Samir",
""
],
[
"Abdouh",
"Mohamed",
""
],
[
"Berteau-Rainville",
"Michael",
""
],
[
"Gao",
"Zu-hua",
""
],
[
"Arena",
"Goffredo",
""
]
] | The early detection of cancer is a challenging problem in medicine. The blood sera of cancer patients are enriched with heterogeneous, secretory, lipid-bound extracellular vesicles (EVs), which present a complex repertoire of information and biomarkers representing their cell of origin and are currently being studied in the field of liquid biopsy and cancer screening. Vibrational spectroscopies provide non-invasive approaches for the assessment of structural and biophysical properties in complex biological samples. In this pilot study, multiple Raman spectroscopy measurements were performed on the EVs extracted from the blood sera of 9 patients with four different cancer subtypes (colorectal cancer, hepatocellular carcinoma, breast cancer, and pancreatic cancer) and five healthy patients (controls). FTIR (Fourier Transform Infrared) spectroscopy measurements were performed as a complementary approach to Raman analysis on two of the four cancer subtypes. The AdaBoost Random Forest Classifier, Decision Trees, and Support Vector Machines (SVM) distinguished the baseline-corrected Raman spectra of cancer EVs from those of healthy controls (18 spectra) with a classification accuracy above 90 percent when reduced to a spectral frequency range of 1800 to 1940 inverse cm and subjected to a 50:50 training:testing split. FTIR classification on 14 spectra achieved 80 percent accuracy. Our findings demonstrate that basic machine learning algorithms are powerful applied intelligence tools to distinguish the complex vibrational spectra of cancer patient EVs from those of healthy patients. These experimental methods hold promise as a valid and efficient liquid biopsy for artificial intelligence-assisted early cancer screening. |
2209.13561 | Jacob Granley | Jacob Granley, Alexander Riedel, Michael Beyeler | Adapting Brain-Like Neural Networks for Modeling Cortical Visual
Prostheses | null | null | null | null | q-bio.NC cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Cortical prostheses are devices implanted in the visual cortex that attempt
to restore lost vision by electrically stimulating neurons. Currently, the
vision provided by these devices is limited, and accurately predicting the
visual percepts resulting from stimulation is an open challenge. We propose to
address this challenge by utilizing 'brain-like' convolutional neural networks
(CNNs), which have emerged as promising models of the visual system. To
investigate the feasibility of adapting brain-like CNNs for modeling visual
prostheses, we developed a proof-of-concept model to predict the perceptions
resulting from electrical stimulation. We show that a neurologically-inspired
decoding of CNN activations produces qualitatively accurate phosphenes,
comparable to phosphenes reported by real patients. Overall, this is an
essential first step towards building brain-like models of electrical
stimulation, which may not just improve the quality of vision provided by
cortical prostheses but could also further our understanding of the neural code
of vision.
| [
{
"created": "Tue, 27 Sep 2022 17:33:19 GMT",
"version": "v1"
}
] | 2022-09-28 | [
[
"Granley",
"Jacob",
""
],
[
"Riedel",
"Alexander",
""
],
[
"Beyeler",
"Michael",
""
]
] | Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons. Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge. We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system. To investigate the feasibility of adapting brain-like CNNs for modeling visual prostheses, we developed a proof-of-concept model to predict the perceptions resulting from electrical stimulation. We show that a neurologically-inspired decoding of CNN activations produces qualitatively accurate phosphenes, comparable to phosphenes reported by real patients. Overall, this is an essential first step towards building brain-like models of electrical stimulation, which may not just improve the quality of vision provided by cortical prostheses but could also further our understanding of the neural code of vision. |
2010.10443 | Mattia Sensi | Tommaso Lorenzi, Andrea Pugliese, Mattia Sensi, Agnese Zardini | Evolutionary dynamics in an SI epidemic model with phenotype-structured
susceptible compartment | 29 pages, 8 figures | Journal of Mathematical Biology volume 83, Article number: 72
(2021) | 10.1007/s00285-021-01703-1 | null | q-bio.PE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an SI epidemic model whereby a continuous variable captures
variability in proliferative potential and resistance to infection among
susceptibles. The occurrence of heritable, spontaneous changes in these
phenotypes and the presence of a fitness trade-off between resistance to
infection and proliferative potential are incorporated into the model. The
model comprises an ODE for the number of infected individuals that is coupled
with a partial integrodifferential equation for the population density of
susceptibles through an integral term. The expression for the basic
reproduction number $\mathcal{R}_0$ is derived, the disease-free and endemic
equilibria of the model are characterised, and a threshold theorem is proved.
Analytical results are integrated with numerical simulations of a calibrated
version of the model based on the results of artificial selection experiments
in a host-parasite system. The results of our mathematical study disentangle
the impact of different evolutionary parameters on the spread of infectious
diseases and the consequent phenotypic adaptation of susceptible individuals. In
particular, these results provide a theoretical basis for the observation that
infectious diseases exerting stronger selective pressures on susceptibles and
being characterised by higher infection rates are more likely to spread.
Moreover, our results indicate that spontaneous phenotypic changes in
proliferative potential and resistance to infection can either promote or
prevent the spread of diseases depending on the strength of selection acting on
susceptible individuals prior to infection. Finally, we demonstrate that, when
an endemic equilibrium is established, higher levels of resistance to infection
and lower degrees of phenotypic heterogeneity are to be expected in the
presence of infections which are characterised by lower rates of death and
exert stronger selective pressures.
| [
{
"created": "Tue, 20 Oct 2020 16:56:34 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Nov 2021 09:55:36 GMT",
"version": "v2"
}
] | 2021-12-30 | [
[
"Lorenzi",
"Tommaso",
""
],
[
"Pugliese",
"Andrea",
""
],
[
"Sensi",
"Mattia",
""
],
[
"Zardini",
"Agnese",
""
]
] | We present an SI epidemic model whereby a continuous variable captures variability in proliferative potential and resistance to infection among susceptibles. The occurrence of heritable, spontaneous changes in these phenotypes and the presence of a fitness trade-off between resistance to infection and proliferative potential are incorporated into the model. The model comprises an ODE for the number of infected individuals that is coupled with a partial integrodifferential equation for the population density of susceptibles through an integral term. The expression for the basic reproduction number $\mathcal{R}_0$ is derived, the disease-free and endemic equilibria of the model are characterised, and a threshold theorem is proved. Analytical results are integrated with numerical simulations of a calibrated version of the model based on the results of artificial selection experiments in a host-parasite system. The results of our mathematical study disentangle the impact of different evolutionary parameters on the spread of infectious diseases and the consequent phenotypic adaptation of susceptible individuals. In particular, these results provide a theoretical basis for the observation that infectious diseases exerting stronger selective pressures on susceptibles and being characterised by higher infection rates are more likely to spread. Moreover, our results indicate that spontaneous phenotypic changes in proliferative potential and resistance to infection can either promote or prevent the spread of diseases depending on the strength of selection acting on susceptible individuals prior to infection. Finally, we demonstrate that, when an endemic equilibrium is established, higher levels of resistance to infection and lower degrees of phenotypic heterogeneity are to be expected in the presence of infections which are characterised by lower rates of death and exert stronger selective pressures. |
1805.05682 | Begona Diaz | Begona Diaz, Helen Blank, and Katharina von Kriegstein | Task-dependent modulation of the visual sensory thalamus assists
visual-speech recognition | null | null | 10.1016/j.neuroimage.2018.05.032 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The cerebral cortex modulates early sensory processing via feed-back
connections to sensory pathway nuclei. The functions of this top-down
modulation for human behavior are poorly understood. Here, we show that
top-down modulation of the visual sensory thalamus (the lateral geniculate
body, LGN) is involved in visual-speech recognition. In two independent
functional magnetic resonance imaging (fMRI) studies, LGN response increased
when participants processed fast-varying features of articulatory movements
required for visual-speech recognition, as compared to temporally more stable
features required for face identification with the same stimulus material. The
LGN response during the visual-speech task correlated positively with the
visual-speech recognition scores across participants. In addition, the
task-dependent modulation was present for speech movements and did not occur
for control conditions involving non-speech biological movements. In
face-to-face communication, visual speech recognition is used to enhance or
even enable understanding what is said. Speech recognition is commonly
explained in frameworks focusing on cerebral cortex areas. Our findings suggest
that task-dependent modulation at subcortical sensory stages has an important
role for communication: Together with similar findings in the auditory
modality, these results imply that task-dependent modulation of the sensory
thalami is a
general mechanism to optimize speech recognition.
| [
{
"created": "Tue, 15 May 2018 10:10:59 GMT",
"version": "v1"
},
{
"created": "Thu, 24 May 2018 15:32:48 GMT",
"version": "v2"
}
] | 2018-05-25 | [
[
"Diaz",
"Begona",
""
],
[
"Blank",
"Helen",
""
],
[
"von Kriegstein",
"Katharina",
""
]
] | The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality, these results imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. |
1801.10262 | Yasser A. Ahmed | Yasser A. Ahmed, Mohammed Abdelsabour-Khalaf, Elsaysed Mohammed | Histological insight into the hepatic tissue of the Nile monitor
(Varanus niloticus) | 10 pages, 2 figures, original article | null | null | null | q-bio.TO | http://creativecommons.org/publicdomain/zero/1.0/ | The liver of reptiles is considered an important study model for the
interaction between environment and hepatic tissue. Little is known about the
histology of the liver of reptiles. The aim of the current study was to
elucidate the histological architecture of the liver of the Nile monitor
(Varanus niloticus). Liver fragments from the Nile monitor were collected in
the summer season and processed for light and electron microscopy. The
liver of the Nile monitor was bi-lobed and the right lobe was found to be
larger than the left lobe. Histological examination revealed indistinct
lobulation of the liver, and the central vein, sinusoids and portal area were
haphazardly organized. The hepatic parenchyma consisted of hepatocytes arranged
in glandular-like alveoli or tubules separated by a network of twisted
capillary sinusoids. The hepatocytes were polyhedral in shape with vacuolated
cytoplasm, and the nucleus was single, rounded, eccentric, large and vesicular
with a distinct nucleolus. The hepatocytes contained numerous lipid droplets,
abundant glycogen granules and well-developed RER and mitochondria. The
hepatocytes appeared to secrete into the bile canaliculi through the
disintegration of their dark cytoplasm. The space of Disse, separating the
hepatocytes from the sinusoids, contained many recesses.
The portal area contained branches of the portal vein, hepatic artery, bile
duct and lymphatic vessels embedded in connective tissue. Some
non-parenchymal cells were described, such as Kupffer cells, heterophils,
melano-macrophages, intercalated cells, and myofibroblasts, in addition to the
endothelium of the sinusoids. This is the first report on the histological
structure of the liver of the Egyptian Nile monitor. The results presented
here should be considered baseline knowledge for comparison with pathological
conditions of the liver in this species.
| [
{
"created": "Wed, 31 Jan 2018 01:02:57 GMT",
"version": "v1"
}
] | 2018-02-01 | [
[
"Ahmed",
"Yasser A.",
""
],
[
"Abdelsabour-Khalaf",
"Mohammed",
""
],
[
"Mohammed",
"Elsaysed",
""
]
] | The liver of reptiles is considered an important study model for the interaction between environment and hepatic tissue. Little is known about the histology of the liver of reptiles. The aim of the current study was to elucidate the histological architecture of the liver of the Nile monitor (Varanus niloticus). Liver fragments from the Nile monitor were collected in the summer season and processed for light and electron microscopy. The liver of the Nile monitor was bi-lobed and the right lobe was found to be larger than the left lobe. Histological examination revealed indistinct lobulation of the liver, and the central vein, sinusoids and portal area were haphazardly organized. The hepatic parenchyma consisted of hepatocytes arranged in glandular-like alveoli or tubules separated by a network of twisted capillary sinusoids. The hepatocytes were polyhedral in shape with vacuolated cytoplasm and the nucleus was single rounded, eccentric, large and vesicular with a distinct nucleolus. The hepatocytes contained numerous lipid droplets, abundant glycogen granules and well-developed RER and mitochondria. The hepatocytes appeared to secrete into the bile canaliculi through the disintegration of their dark cytoplasm. The space of Disse separating the hepatocytes and sinusoids contained many recesses. The portal area contained branches of the portal vein, hepatic artery, bile duct and lymphatic vessels embedded in a connective tissue. Some non-parenchymal cells were described such as Kupffer cells, heterophils, melano-macrophages, intercalated cells, myofibroblasts in addition to the endothelium of the sinusoids. This is the first report about the histological structure of the liver of the Egyptian Nile monitor. The results presented here should be considered baseline knowledge for comparison with the pathological affections of the liver in this species. |
2201.08224 | Samuel Okyere | Samuel Okyere, Joseph Ackora-Prah and Ebenezer Bonyah | A Mathematical Model of Transmission Dynamics of SARS-Cov-2 (Covid-19)
with an Underlying Condition of Diabetes | 41 pages, 22 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | It is well established that people with diabetes are more likely to have
serious complications from COVID-19. Nearly 1 in 5 COVID-19 deaths in the
African region are linked to diabetes. World Health Organization (WHO) finds
that 18.3% of COVID-19 deaths in Africa are among people with diabetes. In this
paper, we have formulated and analysed a mathematical comorbidity model of
diabetes - COVID-19 of the deterministic type. The basic properties of the
model were explored. The basic reproductive number, equilibrium points and
stability of the equilibrium points were examined. Sensitivity analysis of the
model was carried out to determine the impact of the model parameters on the
basic reproduction number of the model. The model had a unique endemic
equilibrium point, which was stable for R_0>1. Time-dependent optimal controls
were incorporated into the model with the sole aim of determining the best
strategy for curtailing the spread of the disease. COVID-19 cases from March to
September 2020 in Ghana were used to validate the model. Results of the
numerical simulation suggest a greater number of individuals deceased when the
infected individual had an underlying condition of diabetes. Moreover, COVID-19
is endemic in Ghana with the basic reproduction number found to be R_0=1.4722.
The numerical simulation of the optimal control model reveals the lockdown
control minimized the rate of decay of the susceptible individuals whereas the
vaccination led to a number of susceptible individuals becoming immune to
COVID-19 infections. In all, the two preventive control measures were both
effective in curbing the spread of the COVID-19 disease as the number of
COVID-19 infections was greatly reduced. We conclude that more attention should
be paid to COVID-19 patients with an underlying condition of diabetes as the
probability of death in this population was significantly higher.
| [
{
"created": "Wed, 12 Jan 2022 22:48:57 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Apr 2022 17:37:12 GMT",
"version": "v2"
}
] | 2022-04-21 | [
[
"Okyere",
"Samuel",
""
],
[
"Ackora-Prah",
"Joseph",
""
],
[
"Bonyah",
"Ebenezer",
""
]
] | It is well established that people with diabetes are more likely to have serious complications from COVID-19. Nearly 1 in 5 COVID-19 deaths in the African region are linked to diabetes. World Health Organization (WHO) finds that 18.3% of COVID-19 deaths in Africa are among people with diabetes. In this paper, we have formulated and analysed a mathematical comorbidity model of diabetes - COVID-19 of the deterministic type. The basic properties of the model were explored. The basic reproductive number, equilibrium points and stability of the equilibrium points were examined. Sensitivity analysis of the model was carried out to determine the impact of the model parameters on the basic reproduction number of the model. The model had a unique endemic equilibrium point, which was stable for R_0>1. Time-dependent optimal controls were incorporated into the model with the sole aim of determining the best strategy for curtailing the spread of the disease. COVID-19 cases from March to September 2020 in Ghana were used to validate the model. Results of the numerical simulation suggest a greater number of individuals deceased when the infected individual had an underlying condition of diabetes. Moreover, COVID-19 is endemic in Ghana with the basic reproduction number found to be R_0=1.4722. The numerical simulation of the optimal control model reveals the lockdown control minimized the rate of decay of the susceptible individuals whereas the vaccination led to a number of susceptible individuals becoming immune to COVID-19 infections. In all, the two preventive control measures were both effective in curbing the spread of the COVID-19 disease as the number of COVID-19 infections was greatly reduced. We conclude that more attention should be paid to COVID-19 patients with an underlying condition of diabetes as the probability of death in this population was significantly higher. |
1605.03952 | Mattia Rigotti | Mattia Rigotti and Stefano Fusi | Estimating the dimensionality of neural responses with fMRI Repetition
Suppression | Appears in Proceedings of the 5th NIPS Workshop on Machine Learning
and Interpretation in Neuroimaging, Montreal, 2015 | null | null | MLINI/2015/10 | q-bio.QM q-bio.NC stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method that exploits fMRI Repetition Suppression (RS-fMRI)
to measure the dimensionality of the set of response vectors, i.e. the dimension
of the space of linear combinations of neural population activity patterns in
response to specific task conditions. RS-fMRI measures the overlap between
response vectors even in brain areas displaying no discernible average
differential BOLD signal. We show how this property can be used to estimate the
neural response dimensionality in areas lacking macroscopic spatial patterning.
The importance of dimensionality derives from how it relates to a neural
circuit's functionality. As we show, the dimensionality of the response vectors
is predicted to be high in areas involved in multi-stream integration, while it
is low in areas where inputs from independent sources do not interact or merely
overlap linearly. Our method can be used to identify and functionally
characterize cortical circuits that integrate multiple independent information
pathways.
| [
{
"created": "Thu, 12 May 2016 19:44:05 GMT",
"version": "v1"
}
] | 2016-06-07 | [
[
"Rigotti",
"Mattia",
""
],
[
"Fusi",
"Stefano",
""
]
] | We propose a novel method that exploits fMRI Repetition Suppression (RS-fMRI) to measure the dimensionality of the set of response vectors, i.e. the dimension of the space of linear combinations of neural population activity patterns in response to specific task conditions. RS-fMRI measures the overlap between response vectors even in brain areas displaying no discernible average differential BOLD signal. We show how this property can be used to estimate the neural response dimensionality in areas lacking macroscopic spatial patterning. The importance of dimensionality derives from how it relates to a neural circuit's functionality. As we show, the dimensionality of the response vectors is predicted to be high in areas involved in multi-stream integration, while it is low in areas where inputs from independent sources do not interact or merely overlap linearly. Our method can be used to identify and functionally characterize cortical circuits that integrate multiple independent information pathways. |
2212.12795 | Diederik Aerts | Diederik Aerts, Jonito Aerts Argu\"elles, Lester Beltran and Sandro
Sozzo | Development of a Thermodynamics of Human Cognition and Human Culture | 20 pages, 3 figures | null | null | null | q-bio.NC cs.CL quant-ph | http://creativecommons.org/licenses/by/4.0/ | Inspired by foundational studies in classical and quantum physics, and by
information retrieval studies in quantum information theory, we prove that the
notions of 'energy' and 'entropy' can be consistently introduced in human
language and, more generally, in human culture. More explicitly, if energy is
attributed to words according to their frequency of appearance in a text, then
the ensuing energy levels are distributed non-classically, namely, they obey
Bose-Einstein, rather than Maxwell-Boltzmann, statistics, as a consequence of
the genuinely 'quantum indistinguishability' of the words that appear in the
text. Secondly, the 'quantum entanglement' due to the way meaning is carried by
a text reduces the (von Neumann) entropy of the words that appear in the text,
a behaviour which cannot be explained within classical (thermodynamic or
information) entropy. We claim here that this 'quantum-type behaviour is valid
in general in human language', namely, any text is conceptually more concrete
than the words composing it, which entails that the entropy of the overall text
decreases. In addition, we provide examples taken from cognition, where
quantization of energy appears in categorical perception, and from culture,
where entities collaborate, thus 'entangle', to decrease overall entropy. We
use these findings to propose the development of a new 'non-classical
thermodynamic theory' for human cognition, which also covers broad parts of
human culture and its artefacts and bridges concepts with quantum physics
entities.
| [
{
"created": "Sat, 24 Dec 2022 18:19:05 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Feb 2023 09:04:08 GMT",
"version": "v2"
}
] | 2023-02-27 | [
[
"Aerts",
"Diederik",
""
],
[
"Arguëlles",
"Jonito Aerts",
""
],
[
"Beltran",
"Lester",
""
],
[
"Sozzo",
"Sandro",
""
]
] | Inspired by foundational studies in classical and quantum physics, and by information retrieval studies in quantum information theory, we prove that the notions of 'energy' and 'entropy' can be consistently introduced in human language and, more generally, in human culture. More explicitly, if energy is attributed to words according to their frequency of appearance in a text, then the ensuing energy levels are distributed non-classically, namely, they obey Bose-Einstein, rather than Maxwell-Boltzmann, statistics, as a consequence of the genuinely 'quantum indistinguishability' of the words that appear in the text. Secondly, the 'quantum entanglement' due to the way meaning is carried by a text reduces the (von Neumann) entropy of the words that appear in the text, a behaviour which cannot be explained within classical (thermodynamic or information) entropy. We claim here that this 'quantum-type behaviour is valid in general in human language', namely, any text is conceptually more concrete than the words composing it, which entails that the entropy of the overall text decreases. In addition, we provide examples taken from cognition, where quantization of energy appears in categorical perception, and from culture, where entities collaborate, thus 'entangle', to decrease overall entropy. We use these findings to propose the development of a new 'non-classical thermodynamic theory' for human cognition, which also covers broad parts of human culture and its artefacts and bridges concepts with quantum physics entities. |
1509.03777 | Alex McAvoy | Alex McAvoy, Christoph Hauert | Structural symmetry in evolutionary games | to appear in J. Roy. Soc. Interface | Journal of the Royal Society Interface vol. 12 no. 111, 20150420
(2015) | 10.1098/rsif.2015.0420 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In evolutionary game theory, an important measure of a mutant trait
(strategy) is its ability to invade and take over an otherwise-monomorphic
population. Typically, one quantifies the success of a mutant strategy via the
probability that a randomly occurring mutant will fixate in the population.
However, in a structured population, this fixation probability may depend on
where the mutant arises. Moreover, the fixation probability is just one
quantity by which one can measure the success of a mutant; fixation time, for
instance, is another. We define a notion of homogeneity for evolutionary games
that captures what it means for two single-mutant states, i.e. two
configurations of a single mutant in an otherwise-monomorphic population, to be
"evolutionarily equivalent" in the sense that all measures of evolutionary
success are the same for both configurations. Using asymmetric games, we argue
that the term "homogeneous" should apply to the evolutionary process as a whole
rather than to just the population structure. For evolutionary matrix games in
graph-structured populations, we give precise conditions under which the
resulting process is homogeneous. Finally, we show that asymmetric matrix games
can be reduced to symmetric games if the population structure possesses a
sufficient degree of symmetry.
| [
{
"created": "Sat, 12 Sep 2015 20:25:06 GMT",
"version": "v1"
}
] | 2016-04-12 | [
[
"McAvoy",
"Alex",
""
],
[
"Hauert",
"Christoph",
""
]
] | In evolutionary game theory, an important measure of a mutant trait (strategy) is its ability to invade and take over an otherwise-monomorphic population. Typically, one quantifies the success of a mutant strategy via the probability that a randomly occurring mutant will fixate in the population. However, in a structured population, this fixation probability may depend on where the mutant arises. Moreover, the fixation probability is just one quantity by which one can measure the success of a mutant; fixation time, for instance, is another. We define a notion of homogeneity for evolutionary games that captures what it means for two single-mutant states, i.e. two configurations of a single mutant in an otherwise-monomorphic population, to be "evolutionarily equivalent" in the sense that all measures of evolutionary success are the same for both configurations. Using asymmetric games, we argue that the term "homogeneous" should apply to the evolutionary process as a whole rather than to just the population structure. For evolutionary matrix games in graph-structured populations, we give precise conditions under which the resulting process is homogeneous. Finally, we show that asymmetric matrix games can be reduced to symmetric games if the population structure possesses a sufficient degree of symmetry. |
1407.5104 | Pulkit Agrawal | Pulkit Agrawal, Dustin Stansbury, Jitendra Malik, Jack L. Gallant | Pixels to Voxels: Modeling Visual Representation in the Human Brain | null | null | null | null | q-bio.NC cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human brain is adept at solving difficult high-level visual processing
problems such as image interpretation and object recognition in natural scenes.
Over the past few years neuroscientists have made remarkable progress in
understanding how the human brain represents categories of objects and actions
in natural scenes. However, all current models of high-level human vision
operate on hand annotated images in which the objects and actions have been
assigned semantic tags by a human operator. No current models can account for
high-level visual function directly in terms of low-level visual input (i.e.,
pixels). To overcome this fundamental limitation we sought to develop a new
class of models that can predict human brain activity directly from low-level
visual input (i.e., pixels). We explored two classes of models drawn from
computer vision and machine learning. The first class of models was based on
Fisher Vectors (FV) and the second was based on Convolutional Neural Networks
(ConvNets). We find that both classes of models accurately predict brain
activity in high-level visual areas, directly from pixels and without the need
for any semantic tags or hand annotation of images. This is the first time that
such a mapping has been obtained. The fit models provide a new platform for
exploring the functional principles of human vision, and they show that modern
methods of computer vision and machine learning provide important tools for
characterizing brain function.
| [
{
"created": "Fri, 18 Jul 2014 20:10:06 GMT",
"version": "v1"
}
] | 2014-07-22 | [
[
"Agrawal",
"Pulkit",
""
],
[
"Stansbury",
"Dustin",
""
],
[
"Malik",
"Jitendra",
""
],
[
"Gallant",
"Jack L.",
""
]
] | The human brain is adept at solving difficult high-level visual processing problems such as image interpretation and object recognition in natural scenes. Over the past few years neuroscientists have made remarkable progress in understanding how the human brain represents categories of objects and actions in natural scenes. However, all current models of high-level human vision operate on hand annotated images in which the objects and actions have been assigned semantic tags by a human operator. No current models can account for high-level visual function directly in terms of low-level visual input (i.e., pixels). To overcome this fundamental limitation we sought to develop a new class of models that can predict human brain activity directly from low-level visual input (i.e., pixels). We explored two classes of models drawn from computer vision and machine learning. The first class of models was based on Fisher Vectors (FV) and the second was based on Convolutional Neural Networks (ConvNets). We find that both classes of models accurately predict brain activity in high-level visual areas, directly from pixels and without the need for any semantic tags or hand annotation of images. This is the first time that such a mapping has been obtained. The fit models provide a new platform for exploring the functional principles of human vision, and they show that modern methods of computer vision and machine learning provide important tools for characterizing brain function. |
2201.03630 | Sweta Agrawal | Sweta Agrawal and John C Tuthill | The two body problem: proprioception and motor control across the
metamorphic divide | 17 pages, 3 figures, review paper | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Like a rocket being propelled into space, evolution has engineered flies to
launch into adulthood via multiple stages. Flies develop and deploy two
distinct bodies, linked by the transformative process of metamorphosis. The fly
larva is a soft hydraulic tube that can crawl to find food and avoid predators.
The adult fly has a stiff exoskeleton with articulated limbs capable of
long-distance navigation and rich social interactions. Because the larval and
adult forms are so distinct in structure, they require distinct strategies for
sensing and moving the body. The metamorphic divide thus presents an
opportunity for comparative analysis of neural circuits. Here, we review recent
progress toward understanding the neural mechanisms of proprioception and motor
control in larval and adult Drosophila. We highlight commonalities that point
toward general principles of sensorimotor control and differences that may
reflect unique constraints imposed by biomechanics. Finally, we discuss
emerging opportunities for comparative analysis of neural circuit architecture
in the fly and other animal species.
| [
{
"created": "Mon, 10 Jan 2022 20:26:03 GMT",
"version": "v1"
}
] | 2022-01-12 | [
[
"Agrawal",
"Sweta",
""
],
[
"Tuthill",
"John C",
""
]
] | Like a rocket being propelled into space, evolution has engineered flies to launch into adulthood via multiple stages. Flies develop and deploy two distinct bodies, linked by the transformative process of metamorphosis. The fly larva is a soft hydraulic tube that can crawl to find food and avoid predators. The adult fly has a stiff exoskeleton with articulated limbs capable of long-distance navigation and rich social interactions. Because the larval and adult forms are so distinct in structure, they require distinct strategies for sensing and moving the body. The metamorphic divide thus presents an opportunity for comparative analysis of neural circuits. Here, we review recent progress toward understanding the neural mechanisms of proprioception and motor control in larval and adult Drosophila. We highlight commonalities that point toward general principles of sensorimotor control and differences that may reflect unique constraints imposed by biomechanics. Finally, we discuss emerging opportunities for comparative analysis of neural circuit architecture in the fly and other animal species. |
1911.02301 | Kesheng Xu Dr | Kesheng Xu and Jean Paul Maidana and Patricio Orio | Diversity of neuronal activity is provided by hybrid synapses | null | Nonlinear Dynamics ,2021 | 10.1007/s11071-021-06704-9 | null | q-bio.NC nlin.AO physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Many experiments have evidenced that electrical and chemical synapses --
hybrid synapses -- coexist in most organisms and brain structures. The role of
electrical and chemical synapse connection in diversity of neural activity
generation has been investigated separately in networks of varying
complexities. Nevertheless, theoretical understanding of hybrid synapses in
diverse dynamical states of neural networks for self-organization and
robustness still has not been fully studied. Here, we present a model of neural
network built with hybrid synapses to investigate the emergence of global and
collective dynamical states. This neural network consists of excitatory and
inhibitory populations interacting with each other. The excitatory population is
connected by excitatory synapses in small world topology and its adjacent
neurons are also connected by gap junctions. The inhibitory population is only
connected by chemical inhibitory synapses with all-to-all interaction. Our
numerical simulations show that in the balanced networks with absence of
electrical coupling, the synchrony states generated by this architecture are
mainly controlled by heterogeneity among neurons and the balance of its
excitatory and inhibitory inputs. In balanced networks with strong electrical
coupling, several dynamical states arise from different combinations of
excitatory and inhibitory weights. More importantly, we find that these states,
such as synchronous firing, cluster synchrony, and various ripple events,
emerge by slight modification of chemical coupling weights. For large enough
electrical synapse coupling, the whole neural network becomes synchronized. Our
results pave the way for the study of the dynamical mechanisms and computational
significance of the contribution of mixed synapses to neural functions.
| [
{
"created": "Wed, 6 Nov 2019 10:49:00 GMT",
"version": "v1"
}
] | 2021-08-11 | [
[
"Xu",
"Kesheng",
""
],
[
"Maidana",
"Jean Paul",
""
],
[
"Orio",
"Patricio",
""
]
] | Many experiments have evidenced that electrical and chemical synapses -- hybrid synapses -- coexist in most organisms and brain structures. The role of electrical and chemical synapse connection in diversity of neural activity generation has been investigated separately in networks of varying complexities. Nevertheless, theoretical understanding of hybrid synapses in diverse dynamical states of neural networks for self-organization and robustness still has not been fully studied. Here, we present a model of neural network built with hybrid synapses to investigate the emergence of global and collective dynamical states. This neural network consists of excitatory and inhibitory populations interacting with each other. The excitatory population is connected by excitatory synapses in small world topology and its adjacent neurons are also connected by gap junctions. The inhibitory population is only connected by chemical inhibitory synapses with all-to-all interaction. Our numerical simulations show that in the balanced networks with absence of electrical coupling, the synchrony states generated by this architecture are mainly controlled by heterogeneity among neurons and the balance of its excitatory and inhibitory inputs. In balanced networks with strong electrical coupling, several dynamical states arise from different combinations of excitatory and inhibitory weights. More importantly, we find that these states, such as synchronous firing, cluster synchrony, and various ripple events, emerge by slight modification of chemical coupling weights. For large enough electrical synapse coupling, the whole neural network becomes synchronized. Our results pave the way for the study of the dynamical mechanisms and computational significance of the contribution of mixed synapses to neural functions. |
2309.16704 | Alan Smeaton | Lorin Sweeney and Graham Healy and Alan F. Smeaton | Memories in the Making: Predicting Video Memorability with Encoding
Phase EEG | Content-Based Multimedia Indexing, CBMI, September 20-22, Orleans,
France, 2023 | null | null | null | q-bio.NC cs.CV eess.SP | http://creativecommons.org/licenses/by/4.0/ | In a world of ephemeral moments, our brain diligently sieves through a
cascade of experiences, like a skilled gold prospector searching for precious
nuggets amidst the river's relentless flow. This study delves into the elusive
"moment of memorability" -- a fleeting, yet vital instant where experiences are
prioritised for consolidation in our memory. By transforming subjects' encoding
phase electroencephalography (EEG) signals into the visual domain using
scaleograms and leveraging deep learning techniques, we investigate the neural
signatures that underpin this moment, with the aim of predicting
subject-specific recognition of video. Our findings not only support the
involvement of theta band (4-8Hz) oscillations over the right temporal lobe in
the encoding of declarative memory, but also support the existence of a
distinct moment of memorability, akin to the gold nuggets that define our
personal river of experiences.
| [
{
"created": "Wed, 16 Aug 2023 22:39:27 GMT",
"version": "v1"
}
] | 2023-10-02 | [
[
"Sweeney",
"Lorin",
""
],
[
"Healy",
"Graham",
""
],
[
"Smeaton",
"Alan F.",
""
]
] | In a world of ephemeral moments, our brain diligently sieves through a cascade of experiences, like a skilled gold prospector searching for precious nuggets amidst the river's relentless flow. This study delves into the elusive "moment of memorability" -- a fleeting, yet vital instant where experiences are prioritised for consolidation in our memory. By transforming subjects' encoding phase electroencephalography (EEG) signals into the visual domain using scaleograms and leveraging deep learning techniques, we investigate the neural signatures that underpin this moment, with the aim of predicting subject-specific recognition of video. Our findings not only support the involvement of theta band (4-8Hz) oscillations over the right temporal lobe in the encoding of declarative memory, but also support the existence of a distinct moment of memorability, akin to the gold nuggets that define our personal river of experiences. |
2407.16063 | Salil Patel | Salil B Patel, Oliver B Bredemeyer, James J FitzGerald, Chrystalina A
Antoniades | Hierarchical Machine Learning Classification of Parkinsonian Disorders
using Saccadic Eye Movements: A Development and Validation Study | 2 tables, 7 figures, 27 pages | null | null | null | q-bio.QM q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Discriminating between Parkinson's Disease (PD) and Progressive Supranuclear
Palsy (PSP) is difficult due to overlapping symptoms, especially early on.
Saccades (rapid conjugate eye movements between fixation points) are affected
by both diseases but conventional saccade analyses exhibit group level
differences only. We hypothesized analyzing entire saccade raw time series
waveforms would permit superior individual level discrimination between PD,
PSP, and healthy controls (HC). 13,309 saccadic eye movements from 127
participants were analyzed using a novel, calibration-free waveform analysis
and hierarchical machine learning framework. Individual saccades were
classified based on which trained model could reconstruct each waveform with
minimum error, indicating the most likely condition. A hierarchical classifier
then predicted overall status (recently diagnosed and medication-naive 'de
novo' PD, 'established' PD on antiparkinsonian medication, PSP, and healthy
controls) by combining each participant's saccade results. This approach
substantially outperformed conventional metrics, achieving high AUROCs
distinguishing de novo PD from PSP (0.92-0.97), de novo PD from HC (0.72-0.89),
and PSP from HC (0.90-0.95), while the conventional model showed limited
performance (AUROC range: 0.45-0.75). This calibration-free waveform analysis
sets a new standard for precise saccadic classification of PD, PSP, and HC,
increasing potential for clinical adoption, remote monitoring, and screening.
| [
{
"created": "Mon, 22 Jul 2024 21:41:04 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jul 2024 11:24:54 GMT",
"version": "v2"
}
] | 2024-07-25 | [
[
"Patel",
"Salil B",
""
],
[
"Bredemeyer",
"Oliver B",
""
],
[
"FitzGerald",
"James J",
""
],
[
"Antoniades",
"Chrystalina A",
""
]
] | Discriminating between Parkinson's Disease (PD) and Progressive Supranuclear Palsy (PSP) is difficult due to overlapping symptoms, especially early on. Saccades (rapid conjugate eye movements between fixation points) are affected by both diseases but conventional saccade analyses exhibit group level differences only. We hypothesized analyzing entire saccade raw time series waveforms would permit superior individual level discrimination between PD, PSP, and healthy controls (HC). 13,309 saccadic eye movements from 127 participants were analyzed using a novel, calibration-free waveform analysis and hierarchical machine learning framework. Individual saccades were classified based on which trained model could reconstruct each waveform with minimum error, indicating the most likely condition. A hierarchical classifier then predicted overall status (recently diagnosed and medication-naive 'de novo' PD, 'established' PD on antiparkinsonian medication, PSP, and healthy controls) by combining each participant's saccade results. This approach substantially outperformed conventional metrics, achieving high AUROCs distinguishing de novo PD from PSP (0.92-0.97), de novo PD from HC (0.72-0.89), and PSP from HC (0.90-0.95), while the conventional model showed limited performance (AUROC range: 0.45-0.75). This calibration-free waveform analysis sets a new standard for precise saccadic classification of PD, PSP, and HC, increasing potential for clinical adoption, remote monitoring, and screening. |
2310.15753 | Sam Subbey | S. Arabeei and S. Subbey | Efficient CPU-Optimized Parameter Estimation for Modeling Fish Schooling
Behavior in Large Particle Systems | 10 pages | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The schooling behavior of fish can be studied through simulations involving a
large number of interacting particles. In such systems, each individual
particle is guided by behavior rules, which include aggregation towards a
centroid, collision avoidance, and direction alignment. The movement vector of
each particle may be expressed as a linear combination of behaviors, with
unknown parameters that define a trade-off among several behavioral
constraints. A fitness function for collective schooling behavior encompasses
all individual particle parameters.
For a large number of interacting particles in a complex environment,
heuristic methods, such as evolutionary algorithms, are used to optimize the
fitness function, ensuring that the resulting decision rule preserves
collective behavior. However, these algorithms exhibit slow convergence, making
them inefficient in terms of CPU time cost.
This paper proposes a CPU-efficient iterative (Cluster, Partition, Refine --
CPR) algorithm for estimating decision rule parameters for a large number of
interacting particles. In the first step, we employ the K-Means (unsupervised
learning) algorithm to cluster candidate solutions. Then, we partition the
search space using Voronoi tessellation over the defined clusters. We assess
the quality of each cluster based on the fitness function, with the centroid of
their Voronoi cells representing the clusters. Subsequently, we refine the
search space by introducing new cells into a number of identified well-fitting
Voronoi cells. This process is repeated until convergence.
A comparison of the performance of the CPR algorithm with a standard Genetic
Algorithm reveals that the former converges faster than the latter. We also
demonstrate that the application of the CPR algorithm results in a schooling
behavior consistent with empirical observations.
| [
{
"created": "Tue, 24 Oct 2023 11:56:13 GMT",
"version": "v1"
}
] | 2023-10-25 | [
[
"Arabeei",
"S.",
""
],
[
"Subbey",
"S.",
""
]
] | The schooling behavior of fish can be studied through simulations involving a large number of interacting particles. In such systems, each individual particle is guided by behavior rules, which include aggregation towards a centroid, collision avoidance, and direction alignment. The movement vector of each particle may be expressed as a linear combination of behaviors, with unknown parameters that define a trade-off among several behavioral constraints. A fitness function for collective schooling behavior encompasses all individual particle parameters. For a large number of interacting particles in a complex environment, heuristic methods, such as evolutionary algorithms, are used to optimize the fitness function, ensuring that the resulting decision rule preserves collective behavior. However, these algorithms exhibit slow convergence, making them inefficient in terms of CPU time cost. This paper proposes a CPU-efficient iterative (Cluster, Partition, Refine -- CPR) algorithm for estimating decision rule parameters for a large number of interacting particles. In the first step, we employ the K-Means (unsupervised learning) algorithm to cluster candidate solutions. Then, we partition the search space using Voronoi tessellation over the defined clusters. We assess the quality of each cluster based on the fitness function, with the centroid of their Voronoi cells representing the clusters. Subsequently, we refine the search space by introducing new cells into a number of identified well-fitting Voronoi cells. This process is repeated until convergence. A comparison of the performance of the CPR algorithm with a standard Genetic Algorithm reveals that the former converges faster than the latter. We also demonstrate that the application of the CPR algorithm results in a schooling behavior consistent with empirical observations. |
1505.01744 | David Krakauer | David Krakauer | Cryptographic Nature | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I consider the many ways in which evolved information-flows are restricted
and metabolic resources protected and hidden -- the thesis of living phenomena
as evolutionary cryptosystems. I present the information theory of secrecy
systems and discuss mechanisms acquired by evolved lineages that encrypt
sensitive heritable information with random keys. I explore the idea that
complexity science is a cryptographic discipline as "frozen accidents", or
various forms of regularized randomness, historically encrypt adaptive
dynamics.
| [
{
"created": "Thu, 7 May 2015 15:27:25 GMT",
"version": "v1"
}
] | 2015-05-08 | [
[
"Krakauer",
"David",
""
]
] | I consider the many ways in which evolved information-flows are restricted and metabolic resources protected and hidden -- the thesis of living phenomena as evolutionary cryptosystems. I present the information theory of secrecy systems and discuss mechanisms acquired by evolved lineages that encrypt sensitive heritable information with random keys. I explore the idea that complexity science is a cryptographic discipline as "frozen accidents", or various forms of regularized randomness, historically encrypt adaptive dynamics. |
1707.01962 | Kamesh Krishnamurthy | Kamesh Krishnamurthy, Ann M Hermundstad, Thierry Mora, Aleksandra M
Walczak, Vijay Balasubramanian | Disorder and the neural representation of complex odors: smelling in the
real world | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Animals smelling in the real world use a small number of receptors to sense a
vast number of natural molecular mixtures, and proceed to learn arbitrary
associations between odors and valences. Here, we propose a new interpretation
of how the architecture of olfactory circuits is adapted to meet these immense
complementary challenges. First, the diffuse binding of receptors to many
molecules compresses a vast odor space into a tiny receptor space, while
preserving similarity. Next, lateral interactions "densify" and decorrelate the
response, enhancing robustness to noise. Finally, disordered projections from
the periphery to the central brain reconfigure the densely packed information
into a format suitable for flexible learning of associations and valences. We
test our theory empirically using data from Drosophila. Our theory suggests
that the neural processing of olfactory information differs from the other
senses in its fundamental use of disorder.
| [
{
"created": "Thu, 6 Jul 2017 20:52:50 GMT",
"version": "v1"
}
] | 2017-07-10 | [
[
"Krishnamurthy",
"Kamesh",
""
],
[
"Hermundstad",
"Ann M",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M",
""
],
[
"Balasubramanian",
"Vijay",
""
]
] | Animals smelling in the real world use a small number of receptors to sense a vast number of natural molecular mixtures, and proceed to learn arbitrary associations between odors and valences. Here, we propose a new interpretation of how the architecture of olfactory circuits is adapted to meet these immense complementary challenges. First, the diffuse binding of receptors to many molecules compresses a vast odor space into a tiny receptor space, while preserving similarity. Next, lateral interactions "densify" and decorrelate the response, enhancing robustness to noise. Finally, disordered projections from the periphery to the central brain reconfigure the densely packed information into a format suitable for flexible learning of associations and valences. We test our theory empirically using data from Drosophila. Our theory suggests that the neural processing of olfactory information differs from the other senses in its fundamental use of disorder. |
1211.6664 | Fabien Campagne | Fabien Campagne, Kevin C. Dorff, Nyasha Chambwe, James T. Robinson,
Jill P. Mesirov and Thomas D. Wu | Compression of structured high-throughput sequencing data | main article: 2 figures, 2 tables. Supplementary material: 2 figures,
4 tables. Comment on this manuscript on Twitter or Google Plus using handle
#Goby2Paper | null | 10.1371/journal.pone.0079871 | null | q-bio.QM cs.DB q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large biological datasets are being produced at a rapid pace and create
substantial storage challenges, particularly in the domain of high-throughput
sequencing (HTS). Most approaches currently used to store HTS data are either
unable to quickly adapt to the requirements of new sequencing or analysis
methods (because they do not support schema evolution), or fail to provide
state of the art compression of the datasets. We have devised new approaches to
store HTS data that support seamless data schema evolution and compress
datasets substantially better than existing approaches. Building on these new
approaches, we discuss and demonstrate how a multi-tier data organization can
dramatically reduce the storage, computational and network burden of
collecting, analyzing, and archiving large sequencing datasets. For instance,
we show that spliced RNA-Seq alignments can be stored in less than 4% the size
of a BAM file with perfect data fidelity. Compared to the previous compression
state of the art, these methods reduce dataset size more than 20% when storing
gene expression and epigenetic datasets. The approaches have been integrated in
a comprehensive suite of software tools (http://goby.campagnelab.org) that
support common analyses for a range of high-throughput sequencing assays.
| [
{
"created": "Wed, 28 Nov 2012 17:11:54 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Campagne",
"Fabien",
""
],
[
"Dorff",
"Kevin C.",
""
],
[
"Chambwe",
"Nyasha",
""
],
[
"Robinson",
"James T.",
""
],
[
"Mesirov",
"Jill P.",
""
],
[
"Wu",
"Thomas D.",
""
]
] | Large biological datasets are being produced at a rapid pace and create substantial storage challenges, particularly in the domain of high-throughput sequencing (HTS). Most approaches currently used to store HTS data are either unable to quickly adapt to the requirements of new sequencing or analysis methods (because they do not support schema evolution), or fail to provide state of the art compression of the datasets. We have devised new approaches to store HTS data that support seamless data schema evolution and compress datasets substantially better than existing approaches. Building on these new approaches, we discuss and demonstrate how a multi-tier data organization can dramatically reduce the storage, computational and network burden of collecting, analyzing, and archiving large sequencing datasets. For instance, we show that spliced RNA-Seq alignments can be stored in less than 4% the size of a BAM file with perfect data fidelity. Compared to the previous compression state of the art, these methods reduce dataset size more than 20% when storing gene expression and epigenetic datasets. The approaches have been integrated in a comprehensive suite of software tools (http://goby.campagnelab.org) that support common analyses for a range of high-throughput sequencing assays. |
2408.08069 | Murat Ersalman | Murat Ersalman, Mervi Kunnasranta, Markus Ahola, Anja M. Carlsson,
Sara Persson, Britt-Marie B\"acklin, Inari Helle, Linnea Cervin, Jarno
Vanhatalo | Integrated population model reveals human and environment driven changes
in Baltic ringed seal (Pusa hispida botnica) demography and behavior | null | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrated population models (IPMs) are a promising approach to test
ecological theories and assess wildlife populations in dynamic and uncertain
conditions. By combining multiple data sources into a single, unified model,
they enable the parametrization of versatile, mechanistic models that can
predict population dynamics in novel circumstances. Here, we present a Bayesian
IPM for the ringed seal (Pusa hispida botnica) population inhabiting the
Bothnian Bay in the Baltic Sea. Despite the availability of long-term
monitoring data, traditional assessment methods have faltered due to dynamic
environmental conditions, varying reproductive rates, and the recently
re-introduced hunting, thus limiting the quality of information available to
managers. We fit our model to census and various demographic, reproductive and
harvest data from 1988 to 2023 to provide a comprehensive assessment of past
population trends, and predict population response to alternative hunting
scenarios. We estimated that 20,000 to 36,000 ringed seals inhabit the Bothnian
Bay, and the population is increasing 3% to 6% per year. Reproductive rates
have increased since 1988, leading to a substantial increase in the growth rate
up until 2015. However, the re-introduction of hunting has since reduced the
growth rate, and even minor quota increases are likely to reduce it further.
Our results also support the hypothesis that a greater proportion of seals
haul-out under lower ice cover circumstances, leading to higher aerial survey
counts in such years. In general, our study demonstrates the value of IPMs for
monitoring natural populations under changing environments, and supporting
science-based management decisions.
| [
{
"created": "Thu, 15 Aug 2024 10:34:23 GMT",
"version": "v1"
}
] | 2024-08-16 | [
[
"Ersalman",
"Murat",
""
],
[
"Kunnasranta",
"Mervi",
""
],
[
"Ahola",
"Markus",
""
],
[
"Carlsson",
"Anja M.",
""
],
[
"Persson",
"Sara",
""
],
[
"Bäcklin",
"Britt-Marie",
""
],
[
"Helle",
"Inari",
""
],
[
"Cervin",
"Linnea",
""
],
[
"Vanhatalo",
"Jarno",
""
]
] | Integrated population models (IPMs) are a promising approach to test ecological theories and assess wildlife populations in dynamic and uncertain conditions. By combining multiple data sources into a single, unified model, they enable the parametrization of versatile, mechanistic models that can predict population dynamics in novel circumstances. Here, we present a Bayesian IPM for the ringed seal (Pusa hispida botnica) population inhabiting the Bothnian Bay in the Baltic Sea. Despite the availability of long-term monitoring data, traditional assessment methods have faltered due to dynamic environmental conditions, varying reproductive rates, and the recently re-introduced hunting, thus limiting the quality of information available to managers. We fit our model to census and various demographic, reproductive and harvest data from 1988 to 2023 to provide a comprehensive assessment of past population trends, and predict population response to alternative hunting scenarios. We estimated that 20,000 to 36,000 ringed seals inhabit the Bothnian Bay, and the population is increasing 3% to 6% per year. Reproductive rates have increased since 1988, leading to a substantial increase in the growth rate up until 2015. However, the re-introduction of hunting has since reduced the growth rate, and even minor quota increases are likely to reduce it further. Our results also support the hypothesis that a greater proportion of seals haul-out under lower ice cover circumstances, leading to higher aerial survey counts in such years. In general, our study demonstrates the value of IPMs for monitoring natural populations under changing environments, and supporting science-based management decisions. |
2312.10707 | Li Kun | Kun Li, Wenbin Hu | CLDR: Contrastive Learning Drug Response Models from Natural Language
Supervision | 9 pages, 4 figures, 3 tables | null | null | null | q-bio.BM cs.AI cs.LG q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning-based drug response prediction (DRP) methods can accelerate the
drug discovery process and reduce R\&D costs. Although the mainstream methods
achieve high accuracy in predicting response regression values, the
regression-aware representations of these methods are fragmented and fail to
capture the continuity of the sample order. This phenomenon leads to models
optimized to sub-optimal solution spaces, reducing generalization ability and
may result in significant wasted costs in the drug discovery phase. In this
paper, we propose \MN, a contrastive learning framework with natural language
supervision for the DRP. The \MN~converts regression labels into text, which is
merged with the captions text of the drug response as a second modality of the
samples compared to the traditional modalities (graph, sequence). In each
batch, two modalities of one sample are considered positive pairs and the other
pairs are considered negative pairs. At the same time, in order to enhance the
continuous representation capability of the numerical text, a common-sense
numerical knowledge graph is introduced. We validated several hundred thousand
samples from the Genomics of Drug Sensitivity in Cancer dataset, observing the
average improvement of the DRP method ranges from 7.8\% to 31.4\% with the
application of our framework. The experiments prove that the \MN~effectively
constrains the samples to a continuous distribution in the representation
space, and achieves impressive prediction performance with only a few epochs of
fine-tuning after pre-training. The code is available at:
\url{https://gitee.com/xiaoyibang/clipdrug.git}.
| [
{
"created": "Sun, 17 Dec 2023 12:51:49 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Li",
"Kun",
""
],
[
"Hu",
"Wenbin",
""
]
] | Deep learning-based drug response prediction (DRP) methods can accelerate the drug discovery process and reduce R\&D costs. Although the mainstream methods achieve high accuracy in predicting response regression values, the regression-aware representations of these methods are fragmented and fail to capture the continuity of the sample order. This phenomenon leads to models optimized to sub-optimal solution spaces, reducing generalization ability and may result in significant wasted costs in the drug discovery phase. In this paper, we propose \MN, a contrastive learning framework with natural language supervision for the DRP. The \MN~converts regression labels into text, which is merged with the captions text of the drug response as a second modality of the samples compared to the traditional modalities (graph, sequence). In each batch, two modalities of one sample are considered positive pairs and the other pairs are considered negative pairs. At the same time, in order to enhance the continuous representation capability of the numerical text, a common-sense numerical knowledge graph is introduced. We validated several hundred thousand samples from the Genomics of Drug Sensitivity in Cancer dataset, observing the average improvement of the DRP method ranges from 7.8\% to 31.4\% with the application of our framework. The experiments prove that the \MN~effectively constrains the samples to a continuous distribution in the representation space, and achieves impressive prediction performance with only a few epochs of fine-tuning after pre-training. The code is available at: \url{https://gitee.com/xiaoyibang/clipdrug.git}. |
2106.00628 | Martin Frasch | Martin G. Frasch, Shadrian B. Strong, David Nilosek, Joshua Leaverton,
Barry S. Schifrin | Detection of preventable fetal distress during labor from scanned
cardiotocogram tracings using deep learning | null | null | 10.3389/fped.2021.736834 | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite broad application during labor and delivery, there remains
considerable debate about the value of electronic fetal monitoring (EFM). EFM
includes the surveillance of the fetal heart rate (FHR) patterns in conjunction
with the maternal uterine contractions providing a wealth of data about fetal
behavior and the threat of diminished oxygenation and perfusion. Adverse
outcomes universally associate a fetal injury with the failure to timely
respond to FHR pattern information. Historically, the EFM data, stored
digitally, are available only as rasterized pdf images for contemporary or
historical discussion and examination. In reality, however, they are rarely
reviewed systematically. Using a unique archive of EFM collected over 50 years
of practice in conjunction with adverse outcomes, we present a deep learning
framework for training and detection of incipient or past fetal injury. We
report 94% accuracy in identifying early, preventable fetal injury intrapartum.
This framework is suited for automating an early warning and decision support
system for maintaining fetal well-being during the stresses of labor.
Ultimately, such a system could enable a physician to timely respond during
labor and prevent adverse outcomes. When adverse outcomes cannot be avoided,
they can provide guidance to the early neuroprotective treatment of the
newborn.
| [
{
"created": "Tue, 1 Jun 2021 16:40:50 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Jul 2021 20:09:13 GMT",
"version": "v2"
}
] | 2021-12-06 | [
[
"Frasch",
"Martin G.",
""
],
[
"Strong",
"Shadrian B.",
""
],
[
"Nilosek",
"David",
""
],
[
"Leaverton",
"Joshua",
""
],
[
"Schifrin",
"Barry S.",
""
]
] | Despite broad application during labor and delivery, there remains considerable debate about the value of electronic fetal monitoring (EFM). EFM includes the surveillance of the fetal heart rate (FHR) patterns in conjunction with the maternal uterine contractions providing a wealth of data about fetal behavior and the threat of diminished oxygenation and perfusion. Adverse outcomes universally associate a fetal injury with the failure to timely respond to FHR pattern information. Historically, the EFM data, stored digitally, are available only as rasterized pdf images for contemporary or historical discussion and examination. In reality, however, they are rarely reviewed systematically. Using a unique archive of EFM collected over 50 years of practice in conjunction with adverse outcomes, we present a deep learning framework for training and detection of incipient or past fetal injury. We report 94% accuracy in identifying early, preventable fetal injury intrapartum. This framework is suited for automating an early warning and decision support system for maintaining fetal well-being during the stresses of labor. Ultimately, such a system could enable a physician to timely respond during labor and prevent adverse outcomes. When adverse outcomes cannot be avoided, they can provide guidance to the early neuroprotective treatment of the newborn. |
0911.4032 | Peter Klimek | Peter Klimek, Stefan Thurner, Rudolf Hanel | Evolutionary dynamics from a variational principle | 13 pages, 3 figures, 2 tables | null | 10.1103/PhysRevE.82.011901 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate with a thought experiment that fitness-based population
dynamical approaches to evolution are not able to make quantitative,
falsifiable predictions about the long-term behavior of evolutionary systems. A
key characteristic of evolutionary systems is the ongoing endogenous production
of new species. These novel entities change the conditions for already existing
species. Even {\em Darwin's Demon}, a hypothetical entity with exact knowledge
of the abundance of all species and their fitness functions at a given time,
could not pre-state the impact of these novelties on established populations.
We argue that fitness is always {\it a posteriori} knowledge -- it measures but
does not explain why a species has reproductive success or not. To overcome
these conceptual limitations, a variational principle is proposed in a
spin-model-like setup of evolutionary systems. We derive a functional which is
minimized under the most general evolutionary formulation of a dynamical
system, i.e. evolutionary trajectories causally emerge as a minimization of a
functional. This functional allows the derivation of analytic solutions of the
asymptotic diversity for stochastic evolutionary systems within a mean-field
approximation. We test these approximations by numerical simulations of the
corresponding model and find good agreement in the position of phase
transitions in diversity curves. The model is further able to reproduce
stylized facts of timeseries from several man-made and natural evolutionary
systems. Light will be thrown on how species and their fitness landscapes
dynamically co-evolve.
| [
{
"created": "Fri, 20 Nov 2009 12:52:42 GMT",
"version": "v1"
}
] | 2015-05-14 | [
[
"Klimek",
"Peter",
""
],
[
"Thurner",
"Stefan",
""
],
[
"Hanel",
"Rudolf",
""
]
] | We demonstrate with a thought experiment that fitness-based population dynamical approaches to evolution are not able to make quantitative, falsifiable predictions about the long-term behavior of evolutionary systems. A key characteristic of evolutionary systems is the ongoing endogenous production of new species. These novel entities change the conditions for already existing species. Even {\em Darwin's Demon}, a hypothetical entity with exact knowledge of the abundance of all species and their fitness functions at a given time, could not pre-state the impact of these novelties on established populations. We argue that fitness is always {\it a posteriori} knowledge -- it measures but does not explain why a species has reproductive success or not. To overcome these conceptual limitations, a variational principle is proposed in a spin-model-like setup of evolutionary systems. We derive a functional which is minimized under the most general evolutionary formulation of a dynamical system, i.e. evolutionary trajectories causally emerge as a minimization of a functional. This functional allows the derivation of analytic solutions of the asymptotic diversity for stochastic evolutionary systems within a mean-field approximation. We test these approximations by numerical simulations of the corresponding model and find good agreement in the position of phase transitions in diversity curves. The model is further able to reproduce stylized facts of timeseries from several man-made and natural evolutionary systems. Light will be thrown on how species and their fitness landscapes dynamically co-evolve. |
2204.01675 | Tilo Schwalger | Bastian Pietras, Valentin Schmutz, Tilo Schwalger | Mesoscopic description of hippocampal replay and metastability in
spiking neural networks with short-term plasticity | 43 pages, 8 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Bottom-up models of functionally relevant patterns of neural activity provide
an explicit link between neuronal dynamics and computation. A prime example of
functional activity pattern is hippocampal replay, which is critical for memory
consolidation. The switchings between replay events and a low-activity state in
neural recordings suggest metastable neural circuit dynamics. As metastability
has been attributed to noise and/or slow fatigue mechanisms, we propose a
concise mesoscopic model which accounts for both. Crucially, our model is
bottom-up: it is analytically derived from the dynamics of finite-size networks
of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As
such, noise is explicitly linked to spike noise and network size, and fatigue
is explicitly linked to synaptic dynamics. To derive the mesoscopic model, we
first consider a homogeneous spiking neural network and follow the temporal
coarse-graining approach of Gillespie ("chemical Langevin equation"), which can
be naturally interpreted as a stochastic neural mass model. The Langevin
equation is computationally inexpensive to simulate and enables a thorough
study of metastable dynamics in classical setups (population spikes and Up-Down
states dynamics) by means of phase-plane analysis. This stochastic neural mass
model is the basic component of our mesoscopic model for replay. We show that
our model faithfully captures the stochastic nature of individual replayed
trajectories. Moreover, compared to the deterministic Romani-Tsodyks model of
place cell dynamics, it exhibits a higher level of variability in terms of
content, direction and timing of replay events, compatible with biological
evidence and could be functionally desirable. This variability is the product
of a new dynamical regime where metastability emerges from a complex interplay
between finite-size fluctuations and local fatigue.
| [
{
"created": "Mon, 4 Apr 2022 17:45:10 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Apr 2022 21:47:09 GMT",
"version": "v2"
}
] | 2022-05-03 | [
[
"Pietras",
"Bastian",
""
],
[
"Schmutz",
"Valentin",
""
],
[
"Schwalger",
"Tilo",
""
]
] | Bottom-up models of functionally relevant patterns of neural activity provide an explicit link between neuronal dynamics and computation. A prime example of functional activity pattern is hippocampal replay, which is critical for memory consolidation. The switchings between replay events and a low-activity state in neural recordings suggest metastable neural circuit dynamics. As metastability has been attributed to noise and/or slow fatigue mechanisms, we propose a concise mesoscopic model which accounts for both. Crucially, our model is bottom-up: it is analytically derived from the dynamics of finite-size networks of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As such, noise is explicitly linked to spike noise and network size, and fatigue is explicitly linked to synaptic dynamics. To derive the mesoscopic model, we first consider a homogeneous spiking neural network and follow the temporal coarse-graining approach of Gillespie ("chemical Langevin equation"), which can be naturally interpreted as a stochastic neural mass model. The Langevin equation is computationally inexpensive to simulate and enables a thorough study of metastable dynamics in classical setups (population spikes and Up-Down states dynamics) by means of phase-plane analysis. This stochastic neural mass model is the basic component of our mesoscopic model for replay. We show that our model faithfully captures the stochastic nature of individual replayed trajectories. Moreover, compared to the deterministic Romani-Tsodyks model of place cell dynamics, it exhibits a higher level of variability in terms of content, direction and timing of replay events, compatible with biological evidence and could be functionally desirable. This variability is the product of a new dynamical regime where metastability emerges from a complex interplay between finite-size fluctuations and local fatigue. |
2006.11495 | Manuel Baltieri Dr | Manuel Baltieri, Christopher L. Buckley, Jelle Bruineberg | Predictions in the eye of the beholder: an active inference account of
Watt governors | Accepted at ALife 2020 | null | 10.1162/isal_a_00288 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Active inference introduces a theory describing action-perception loops via
the minimisation of variational (and expected) free energy or, under
simplifying assumptions, (weighted) prediction error. Recently, active
inference has been proposed as part of a new and unifying framework in the
cognitive sciences: predictive processing. Predictive processing is often
associated with traditional computational theories of the mind, strongly
relying on internal representations presented in the form of generative models
thought to explain different functions of living and cognitive systems. In this
work, we introduce an active inference formulation of the Watt centrifugal
governor, a system often portrayed as the canonical "anti-representational"
metaphor for cognition. We identify a generative model of a steam engine for
the governor, and derive a set of equations describing "perception" and
"action" processes as a form of prediction error minimisation. In doing so, we
firstly challenge the idea of generative models as explicit internal
representations for cognitive systems, suggesting that such models serve only
as implicit descriptions for an observer. Secondly, we consider current
proposals of predictive processing as a theory of cognition, focusing on some
of its potential shortcomings and in particular on the idea that virtually any
system admits a description in terms of prediction error minimisation,
suggesting that this theory may offer limited explanatory power for cognitive
systems. Finally, as a silver lining we emphasise the instrumental role this
framework can nonetheless play as a mathematical tool for modelling cognitive
architectures interpreted in terms of Bayesian (active) inference.
| [
{
"created": "Sat, 20 Jun 2020 04:55:39 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jun 2020 03:02:44 GMT",
"version": "v2"
}
] | 2022-03-10 | [
[
"Baltieri",
"Manuel",
""
],
[
"Buckley",
"Christopher L.",
""
],
[
"Bruineberg",
"Jelle",
""
]
] | Active inference introduces a theory describing action-perception loops via the minimisation of variational (and expected) free energy or, under simplifying assumptions, (weighted) prediction error. Recently, active inference has been proposed as part of a new and unifying framework in the cognitive sciences: predictive processing. Predictive processing is often associated with traditional computational theories of the mind, strongly relying on internal representations presented in the form of generative models thought to explain different functions of living and cognitive systems. In this work, we introduce an active inference formulation of the Watt centrifugal governor, a system often portrayed as the canonical "anti-representational" metaphor for cognition. We identify a generative model of a steam engine for the governor, and derive a set of equations describing "perception" and "action" processes as a form of prediction error minimisation. In doing so, we firstly challenge the idea of generative models as explicit internal representations for cognitive systems, suggesting that such models serve only as implicit descriptions for an observer. Secondly, we consider current proposals of predictive processing as a theory of cognition, focusing on some of its potential shortcomings and in particular on the idea that virtually any system admits a description in terms of prediction error minimisation, suggesting that this theory may offer limited explanatory power for cognitive systems. Finally, as a silver lining we emphasise the instrumental role this framework can nonetheless play as a mathematical tool for modelling cognitive architectures interpreted in terms of Bayesian (active) inference. |
1907.01092 | Dumitru Trucu | Robyn Shuttleworth and Dumitru Trucu | Multiscale dynamics of a heterotypic cancer cell population within a
fibrous extracellular matrix | null | null | null | null | q-bio.TO math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Local cancer cell invasion is a complex process involving many cellular and
tissue interactions and is an important prerequisite for metastatic spread, the
main cause of cancer related deaths. Occurring over many different temporal and
spatial scales, the first stage of local invasion is the secretion of
matrix-degrading enzymes (MDEs) and the resulting degradation of the
extra-cellular matrix (ECM). This process creates space in which the cells can
invade and thus enlarge the tumour. As a tumour increases in malignancy, the
cancer cells adopt the ability to mutate into secondary cell subpopulations
giving rise to a heterogeneous tumour. This new cell subpopulation often
carries higher invasive qualities and permits a quicker spread of the tumour.
Building upon the recent multiscale modelling framework for cancer invasion
within a fibrous ECM introduced in Shuttleworth and Trucu (2019), in this paper
we consider the process of local invasion by a heterotypic tumour consisting of
two cancer cell populations mixed with a two-phase ECM. To that end, we address
the double feedback link between the tissue-scale cancer dynamics and the
cell-scale molecular processes through the development of a two-part modelling
framework that crucially incorporates the multiscale dynamic redistribution of
oriented fibres occurring within a two-phase extra-cellular matrix and combines
this with the multiscale leading edge dynamics exploring key matrix-degrading
enzymes molecular processes along the tumour interface that drive the movement
of the cancer boundary. The modelling framework will be accompanied by
computational results that explore the effects of the underlying fibre network
on the overall pattern of cancer invasion.
| [
{
"created": "Mon, 1 Jul 2019 23:02:19 GMT",
"version": "v1"
}
] | 2019-07-03 | [
[
"Shuttleworth",
"Robyn",
""
],
[
"Trucu",
"Dumitru",
""
]
] | Local cancer cell invasion is a complex process involving many cellular and tissue interactions and is an important prerequisite for metastatic spread, the main cause of cancer related deaths. Occurring over many different temporal and spatial scales, the first stage of local invasion is the secretion of matrix-degrading enzymes (MDEs) and the resulting degradation of the extra-cellular matrix (ECM). This process creates space in which the cells can invade and thus enlarge the tumour. As a tumour increases in malignancy, the cancer cells adopt the ability to mutate into secondary cell subpopulations giving rise to a heterogeneous tumour. This new cell subpopulation often carries higher invasive qualities and permits a quicker spread of the tumour. Building upon the recent multiscale modelling framework for cancer invasion within a fibrous ECM introduced in Shuttleworth and Trucu (2019), in this paper we consider the process of local invasion by a heterotypic tumour consisting of two cancer cell populations mixed with a two-phase ECM. To that end, we address the double feedback link between the tissue-scale cancer dynamics and the cell-scale molecular processes through the development of a two-part modelling framework that crucially incorporates the multiscale dynamic redistribution of oriented fibres occurring within a two-phase extra-cellular matrix and combines this with the multiscale leading edge dynamics exploring key matrix-degrading enzymes molecular processes along the tumour interface that drive the movement of the cancer boundary. The modelling framework will be accompanied by computational results that explore the effects of the underlying fibre network on the overall pattern of cancer invasion. |
1801.04901 | Grace Brannigan | Reza Salari and Thomas Joseph and Ruchi Lohia and Jerome Henin and
Grace Brannigan | A streamlined, general approach for computing ligand binding free
energies and its application to GPCR-bound cholesterol | 4 figures | null | null | null | q-bio.QM cond-mat.soft cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theory of receptor-ligand binding equilibria has long been
well-established in biochemistry, and was primarily constructed to describe
dilute aqueous solutions. Accordingly, few computational approaches have been
developed for making quantitative predictions of binding probabilities in
environments other than dilute isotropic solution. Existing techniques, ranging
from simple automated docking procedures to sophisticated thermodynamics-based
methods, have been developed with soluble proteins in mind. Biologically and
pharmacologically relevant protein-ligand interactions often occur in complex
environments, including lamellar phases like membranes and crowded, non-dilute
solutions. Here we revisit the theoretical bases of ligand binding equilibria,
avoiding overly specific assumptions that are nearly always made when
describing receptor-ligand binding. Building on this formalism, we extend the
asymptotically exact Alchemical Free Energy Perturbation technique to
quantifying occupancies of sites on proteins in a complex bulk, including
phase-separated, anisotropic, or non-dilute solutions, using a
thermodynamically consistent and easily generalized approach that resolves
several ambiguities of current frameworks. To incorporate the complex bulk
without overcomplicating the overall thermodynamic cycle, we simplify the
common approach for ligand restraints by using a single
distance-from-bound-configuration (DBC) ligand restraint during AFEP decoupling
from protein. DBC restraints should be generalizable to binding modes of most
small molecules, even those with strong orientational dependence. We apply this
approach to compute the likelihood that membrane cholesterol binds to known
crystallographic sites on 3 GPCRs at a range of concentrations. Non-ideality of
cholesterol in a binary cholesterol:POPC bilayer is characterized and
consistently incorporated into the interpretation.
| [
{
"created": "Mon, 15 Jan 2018 18:18:35 GMT",
"version": "v1"
},
{
"created": "Tue, 8 May 2018 21:45:49 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Sep 2018 18:27:08 GMT",
"version": "v3"
}
] | 2018-10-02 | [
[
"Salari",
"Reza",
""
],
[
"Joseph",
"Thomas",
""
],
[
"Lohia",
"Ruchi",
""
],
[
"Henin",
"Jerome",
""
],
[
"Brannigan",
"Grace",
""
]
] | The theory of receptor-ligand binding equilibria has long been well-established in biochemistry, and was primarily constructed to describe dilute aqueous solutions. Accordingly, few computational approaches have been developed for making quantitative predictions of binding probabilities in environments other than dilute isotropic solution. Existing techniques, ranging from simple automated docking procedures to sophisticated thermodynamics-based methods, have been developed with soluble proteins in mind. Biologically and pharmacologically relevant protein-ligand interactions often occur in complex environments, including lamellar phases like membranes and crowded, non-dilute solutions. Here we revisit the theoretical bases of ligand binding equilibria, avoiding overly specific assumptions that are nearly always made when describing receptor-ligand binding. Building on this formalism, we extend the asymptotically exact Alchemical Free Energy Perturbation technique to quantifying occupancies of sites on proteins in a complex bulk, including phase-separated, anisotropic, or non-dilute solutions, using a thermodynamically consistent and easily generalized approach that resolves several ambiguities of current frameworks. To incorporate the complex bulk without overcomplicating the overall thermodynamic cycle, we simplify the common approach for ligand restraints by using a single distance-from-bound-configuration (DBC) ligand restraint during AFEP decoupling from protein. DBC restraints should be generalizable to binding modes of most small molecules, even those with strong orientational dependence. We apply this approach to compute the likelihood that membrane cholesterol binds to known crystallographic sites on 3 GPCRs at a range of concentrations. Non-ideality of cholesterol in a binary cholesterol:POPC bilayer is characterized and consistently incorporated into the interpretation. |
1811.01175 | Lee Worden | Lee Worden, Rae Wannier, Nicole A. Hoff, Kamy Musene, Bernice Selo,
Mathias Mossoko, Emile Okitolonda-Wemakoy, Jean Jacques Muyembe-Tamfum,
George W. Rutherford, Thomas M. Lietman, Anne W. Rimoin, Travis C. Porco, J.
Daniel Kelly | Real-time projections of epidemic transmission and estimation of
vaccination impact during an Ebola virus disease outbreak in the Eastern
region of the Democratic Republic of Congo | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As of October 12, 2018, 211 cases of Ebola virus disease (EVD) were reported
in North Kivu Province, Democratic Republic of Congo. Since the beginning of
October the outbreak has largely shifted into regions in which active armed
conflict is occurring, and in which EVD cases and their contacts are difficult
for health workers to reach. We modeled EVD transmission using a branching
process with gradually quenching transmission estimated from past EVD
outbreaks, with outbreak trajectories conditioned on agreement with the course
of the current outbreak, and with multiple levels of vaccination coverage. We
used an autoregression for short-term projections, a regression model for final
sizes, and a simple Gott's law rule as an ensemble of forecasts. Short-term
model projections were validated against actual case counts. During validation
of short-term projections, models consistently scored higher on shorter-term
forecasts. Based on case counts as of October 13, the stochastic model
projected a median case count of 226 by October 27 (95% prediction interval:
205-268) and 245 by November 10 (95% PI: 208-315), while the auto-regression
model projected median case counts of 240 (95% PI: 215-307) and 259 (95% PI:
216-395) for those dates, respectively. Projected median final counts range
from 274 to 421. Except for Gott's law, the projected probability of an
outbreak surpassing 2013-2016 is exceedingly small. The stochastic model
estimates that vaccine coverage in this outbreak is lower than reported in its
trial. Based on our projections we believe that the epidemic had not yet peaked
at the time of these estimates, though an outbreak like 2013-2016 is not
likely. We estimate that transmission rates are higher than under target levels
of vaccine coverage, and this model estimate may offer a surrogate indicator
for the outbreak response challenges.
| [
{
"created": "Sat, 3 Nov 2018 08:31:41 GMT",
"version": "v1"
}
] | 2018-11-06 | [
[
"Worden",
"Lee",
""
],
[
"Wannier",
"Rae",
""
],
[
"Hoff",
"Nicole A.",
""
],
[
"Musene",
"Kamy",
""
],
[
"Selo",
"Bernice",
""
],
[
"Mossoko",
"Mathias",
""
],
[
"Okitolonda-Wemakoy",
"Emile",
""
],
[
"Muyembe-Tamfum",
"Jean Jacques",
""
],
[
"Rutherford",
"George W.",
""
],
[
"Lietman",
"Thomas M.",
""
],
[
"Rimoin",
"Anne W.",
""
],
[
"Porco",
"Travis C.",
""
],
[
"Kelly",
"J. Daniel",
""
]
] | As of October 12, 2018, 211 cases of Ebola virus disease (EVD) were reported in North Kivu Province, Democratic Republic of Congo. Since the beginning of October the outbreak has largely shifted into regions in which active armed conflict is occurring, and in which EVD cases and their contacts are difficult for health workers to reach. We modeled EVD transmission using a branching process with gradually quenching transmission estimated from past EVD outbreaks, with outbreak trajectories conditioned on agreement with the course of the current outbreak, and with multiple levels of vaccination coverage. We used an autoregression for short-term projections, a regression model for final sizes, and a simple Gott's law rule as an ensemble of forecasts. Short-term model projections were validated against actual case counts. During validation of short-term projections, models consistently scored higher on shorter-term forecasts. Based on case counts as of October 13, the stochastic model projected a median case count of 226 by October 27 (95% prediction interval: 205-268) and 245 by November 10 (95% PI: 208-315), while the auto-regression model projected median case counts of 240 (95% PI: 215-307) and 259 (95% PI: 216-395) for those dates, respectively. Projected median final counts range from 274 to 421. Except for Gott's law, the projected probability of an outbreak surpassing 2013-2016 is exceedingly small. The stochastic model estimates that vaccine coverage in this outbreak is lower than reported in its trial. Based on our projections we believe that the epidemic had not yet peaked at the time of these estimates, though an outbreak like 2013-2016 is not likely. We estimate that transmission rates are higher than under target levels of vaccine coverage, and this model estimate may offer a surrogate indicator for the outbreak response challenges. |
1208.0133 | Max Alekseyev | Sergey Aganezov, Jr. and Max A. Alekseyev | On pairwise distances and median score of three genomes under DCJ | Proceedings of the 10-th Annual RECOMB Satellite Workshop on
Comparative Genomics (RECOMB-CG), 2012. (to appear) | BMC Bioinformatics 2012, 13(Suppl 19):S1 | 10.1186/1471-2105-13-S19-S1 | null | q-bio.GN cs.DM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In comparative genomics, the rearrangement distance between two genomes
(equal to the minimal number of genome rearrangements required to transform them
into a single genome) is often used for measuring their evolutionary
remoteness. Generalization of this measure to three genomes is known as the
median score (while a resulting genome is called median genome). In contrast to
the rearrangement distance between two genomes which can be computed in linear
time, computing the median score for three genomes is NP-hard. This inspires a
quest for simpler and faster approximations for the median score, the most
natural of which appears to be the halved sum of pairwise distances which in
fact represents a lower bound for the median score.
In this work, we study the relationship and interplay of pairwise distances
between three genomes and their median score under the model of
Double-Cut-and-Join (DCJ) rearrangements. Most remarkably we show that while a
rearrangement may change the sum of pairwise distances by at most 2 (and thus
change the lower bound by at most 1), even the most "powerful" rearrangements
in this respect that increase the lower bound by 1 (by moving one genome
farther away from each of the other two genomes), which we call strong, do not
necessarily affect the median score. This observation implies that the two
measures are not as well-correlated as one's intuition may suggest.
We further prove that the median score attains the lower bound exactly on the
triples of genomes that can be obtained from a single genome with strong
rearrangements. While the sum of pairwise distances with the factor 2/3
represents an upper bound for the median score, its tightness remains unclear.
Nonetheless, we show that the difference of the median score and its lower
bound is not bounded by a constant.
| [
{
"created": "Wed, 1 Aug 2012 08:20:16 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Oct 2012 21:54:36 GMT",
"version": "v2"
}
] | 2014-01-03 | [
[
"Aganezov,",
"Sergey",
"Jr."
],
[
"Alekseyev",
"Max A.",
""
]
] | In comparative genomics, the rearrangement distance between two genomes (equal to the minimal number of genome rearrangements required to transform them into a single genome) is often used for measuring their evolutionary remoteness. Generalization of this measure to three genomes is known as the median score (while a resulting genome is called median genome). In contrast to the rearrangement distance between two genomes which can be computed in linear time, computing the median score for three genomes is NP-hard. This inspires a quest for simpler and faster approximations for the median score, the most natural of which appears to be the halved sum of pairwise distances which in fact represents a lower bound for the median score. In this work, we study the relationship and interplay of pairwise distances between three genomes and their median score under the model of Double-Cut-and-Join (DCJ) rearrangements. Most remarkably we show that while a rearrangement may change the sum of pairwise distances by at most 2 (and thus change the lower bound by at most 1), even the most "powerful" rearrangements in this respect that increase the lower bound by 1 (by moving one genome farther away from each of the other two genomes), which we call strong, do not necessarily affect the median score. This observation implies that the two measures are not as well-correlated as one's intuition may suggest. We further prove that the median score attains the lower bound exactly on the triples of genomes that can be obtained from a single genome with strong rearrangements. While the sum of pairwise distances with the factor 2/3 represents an upper bound for the median score, its tightness remains unclear. Nonetheless, we show that the difference of the median score and its lower bound is not bounded by a constant. |
1011.2797 | Andrea Barreiro | Andrea K. Barreiro, Julijana Gjorgjieva, Fred Rieke, Eric Shea-Brown | When are microcircuits well-modeled by maximum entropy methods? | Submitted Nov 1, 2010; Updated version submitted Feb 24, 2011;
Revised version submitted Feb 1, 2012 | null | null | null | q-bio.NC cond-mat.dis-nn cs.IT math.IT physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Describing the collective activity of neural populations is a daunting task:
the number of possible patterns grows exponentially with the number of cells,
resulting in practically unlimited complexity. Recent empirical studies,
however, suggest a vast simplification in how multi-neuron spiking occurs: the
activity patterns of some circuits are nearly completely captured by pairwise
interactions among neurons. Why are such pairwise models so successful in some
instances, but insufficient in others? Here, we study the emergence of
higher-order interactions in simple circuits with different architectures and
inputs. We quantify the impact of higher-order interactions by comparing the
responses of mechanistic circuit models vs. "null" descriptions in which all
higher-than-pairwise correlations have been accounted for by lower order
statistics, known as pairwise maximum entropy models.
We find that bimodal input signals produce larger deviations from pairwise
predictions than unimodal inputs for circuits with local and global
connectivity. Moreover, recurrent coupling can accentuate these deviations, if
coupling strengths are neither too weak nor too strong. A circuit model based
on intracellular recordings from ON parasol retinal ganglion cells shows that a
broad range of light signals induce unimodal inputs to spike generators, and
that coupling strengths produce weak effects on higher-order interactions. This
provides a novel explanation for the success of pairwise models in this system.
Overall, our findings identify circuit-level mechanisms that produce and fail
to produce higher-order spiking statistics in neural ensembles.
| [
{
"created": "Thu, 11 Nov 2010 23:38:55 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Feb 2011 18:31:28 GMT",
"version": "v2"
},
{
"created": "Wed, 1 Feb 2012 16:14:01 GMT",
"version": "v3"
}
] | 2012-02-02 | [
[
"Barreiro",
"Andrea K.",
""
],
[
"Gjorgjieva",
"Julijana",
""
],
[
"Rieke",
"Fred",
""
],
[
"Shea-Brown",
"Eric",
""
]
] | Describing the collective activity of neural populations is a daunting task: the number of possible patterns grows exponentially with the number of cells, resulting in practically unlimited complexity. Recent empirical studies, however, suggest a vast simplification in how multi-neuron spiking occurs: the activity patterns of some circuits are nearly completely captured by pairwise interactions among neurons. Why are such pairwise models so successful in some instances, but insufficient in others? Here, we study the emergence of higher-order interactions in simple circuits with different architectures and inputs. We quantify the impact of higher-order interactions by comparing the responses of mechanistic circuit models vs. "null" descriptions in which all higher-than-pairwise correlations have been accounted for by lower order statistics, known as pairwise maximum entropy models. We find that bimodal input signals produce larger deviations from pairwise predictions than unimodal inputs for circuits with local and global connectivity. Moreover, recurrent coupling can accentuate these deviations, if coupling strengths are neither too weak nor too strong. A circuit model based on intracellular recordings from ON parasol retinal ganglion cells shows that a broad range of light signals induce unimodal inputs to spike generators, and that coupling strengths produce weak effects on higher-order interactions. This provides a novel explanation for the success of pairwise models in this system. Overall, our findings identify circuit-level mechanisms that produce and fail to produce higher-order spiking statistics in neural ensembles. |
2312.01833 | Leonardo Novelli | Leonardo Novelli, Lionel Barnett, Anil Seth, Adeel Razi | Minimum-phase property of the hemodynamic response function, and
implications for Granger Causality in fMRI | null | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Granger Causality (GC) is widely used in neuroimaging to estimate directed
statistical dependence among brain regions using time series of brain activity.
An important issue is that functional MRI (fMRI) measures brain activity
indirectly via the blood-oxygen-level-dependent (BOLD) signal, which affects
the temporal structure of the signals and distorts GC estimates. However, some
notable applications of GC are not concerned with the GC magnitude but its
statistical significance. This is the case for network inference, which aims to
build a statistical model of the system based on directed relationships among
its elements. The critical question for the viability of network inference in
fMRI is whether the hemodynamic response function (HRF) and its variability
across brain regions introduce spurious relationships, i.e., statistically
significant GC values between BOLD signals, even if the GC between the neuronal
signals is zero. It has been mathematically proven that such spurious
statistical relationships are not induced if the HRF is minimum-phase, i.e., if
both the HRF and its inverse are stable (producing finite responses to finite
inputs). However, whether the HRF is minimum-phase has remained contentious.
Here, we address this issue using multiple realistic biophysical models from
the literature and studying their transfer functions. We find that these models
are minimum-phase for a wide range of physiologically plausible parameter
values. Therefore, statistical testing of GC is plausible even if the HRF
varies across brain regions, with the following limitations. First, the
minimum-phase condition is violated for parameter combinations that generate an
initial dip in the HRF, confirming a previous mathematical proof. Second, the
slow sampling of the BOLD signal (in seconds) compared to the timescales of
neural signal propagation (milliseconds) may still introduce spurious GC.
| [
{
"created": "Mon, 4 Dec 2023 12:12:26 GMT",
"version": "v1"
}
] | 2023-12-05 | [
[
"Novelli",
"Leonardo",
""
],
[
"Barnett",
"Lionel",
""
],
[
"Seth",
"Anil",
""
],
[
"Razi",
"Adeel",
""
]
] | Granger Causality (GC) is widely used in neuroimaging to estimate directed statistical dependence among brain regions using time series of brain activity. An important issue is that functional MRI (fMRI) measures brain activity indirectly via the blood-oxygen-level-dependent (BOLD) signal, which affects the temporal structure of the signals and distorts GC estimates. However, some notable applications of GC are not concerned with the GC magnitude but its statistical significance. This is the case for network inference, which aims to build a statistical model of the system based on directed relationships among its elements. The critical question for the viability of network inference in fMRI is whether the hemodynamic response function (HRF) and its variability across brain regions introduce spurious relationships, i.e., statistically significant GC values between BOLD signals, even if the GC between the neuronal signals is zero. It has been mathematically proven that such spurious statistical relationships are not induced if the HRF is minimum-phase, i.e., if both the HRF and its inverse are stable (producing finite responses to finite inputs). However, whether the HRF is minimum-phase has remained contentious. Here, we address this issue using multiple realistic biophysical models from the literature and studying their transfer functions. We find that these models are minimum-phase for a wide range of physiologically plausible parameter values. Therefore, statistical testing of GC is plausible even if the HRF varies across brain regions, with the following limitations. First, the minimum-phase condition is violated for parameter combinations that generate an initial dip in the HRF, confirming a previous mathematical proof. Second, the slow sampling of the BOLD signal (in seconds) compared to the timescales of neural signal propagation (milliseconds) may still introduce spurious GC. |
1808.00065 | Nassim Nicholas Taleb | Nassim Nicholas Taleb | (Anti)Fragility and Convex Responses in Medicine | Proceedings of the Ninth International Conference on Complex Systems
(2018) | Unifying Themes in Complex Systems IX, Proceedings of the Ninth
International Conference on Complex Systems (2018), in Alfredo J. Morales ,
Carlos Gershenson, Dan Braha, Ali A. Minai, & Yaneer Bar-Yam, Eds., Springer,
pp. 299-325 | null | null | q-bio.QM physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper applies risk analysis to medical problems, through the properties
of nonlinear responses (convex or concave). It shows 1) necessary relations
between the nonlinearity of dose-response and the statistical properties of the
outcomes, particularly the effect of the variance (i.e., the expected frequency
of the various results and other properties such as their average and
variations); 2) The description of "antifragility" as a mathematical property
for local convex response and its generalization and the designation
"fragility" as its opposite, locally concave; 3) necessary relations between
dosage, severity of conditions, and iatrogenics. Iatrogenics seen as the tail
risk from a given intervention can be analyzed in a probabilistic
decision-theoretic way, linking probability to nonlinearity of response. There
is a necessary two-way mathematical relation between nonlinear response and the
tail risk of a given intervention. In short we propose a framework to integrate
the necessary consequences of nonlinearities in evidence-based medicine and
medical risk management.
Keywords: evidence based medicine, risk management, nonlinear responses
| [
{
"created": "Tue, 31 Jul 2018 20:35:26 GMT",
"version": "v1"
}
] | 2018-08-02 | [
[
"Taleb",
"Nassim Nicholas",
""
]
] | This paper applies risk analysis to medical problems, through the properties of nonlinear responses (convex or concave). It shows 1) necessary relations between the nonlinearity of dose-response and the statistical properties of the outcomes, particularly the effect of the variance (i.e., the expected frequency of the various results and other properties such as their average and variations); 2) The description of "antifragility" as a mathematical property for local convex response and its generalization and the designation "fragility" as its opposite, locally concave; 3) necessary relations between dosage, severity of conditions, and iatrogenics. Iatrogenics seen as the tail risk from a given intervention can be analyzed in a probabilistic decision-theoretic way, linking probability to nonlinearity of response. There is a necessary two-way mathematical relation between nonlinear response and the tail risk of a given intervention. In short we propose a framework to integrate the necessary consequences of nonlinearities in evidence-based medicine and medical risk management. Keywords: evidence based medicine, risk management, nonlinear responses |
1210.0168 | Sepehr Ehsani | Sepehr Ehsani | Time in the cell: a plausible role for the plasma membrane | 6 pages, 2 figures | null | null | null | q-bio.SC q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | All cells must keep time to consistently perform vital biological functions.
To that end, the coupling and interrelatedness of diverse subsecond events in
the complex cellular environment, such as protein folding or translation rates,
cannot simply result from the chance convergence of the inherent chemical
properties of these phenomena, but may instead be synchronized through a
cell-wide pacemaking mechanism. Picosecond vibrations of lipid membranes may
play a role in such a mechanism.
| [
{
"created": "Sun, 30 Sep 2012 04:38:45 GMT",
"version": "v1"
}
] | 2012-10-02 | [
[
"Ehsani",
"Sepehr",
""
]
] | All cells must keep time to consistently perform vital biological functions. To that end, the coupling and interrelatedness of diverse subsecond events in the complex cellular environment, such as protein folding or translation rates, cannot simply result from the chance convergence of the inherent chemical properties of these phenomena, but may instead be synchronized through a cell-wide pacemaking mechanism. Picosecond vibrations of lipid membranes may play a role in such a mechanism. |
2009.03857 | William Podlaski | Michele Nardin, James W Phillips, William F Podlaski, Sander W Keemink | Nonlinear computations in spiking neural networks through multiplicative
synapses | This article has been peer-reviewed and recommended by Peer Community
In Neuroscience | Peer Community In Neuroscience, 2021 | 10.24072/pci.cneuro.100003 | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain efficiently performs nonlinear computations through its intricate
networks of spiking neurons, but how this is done remains elusive. While
nonlinear computations can be implemented successfully in spiking neural
networks, this requires supervised training and the resulting connectivity can
be hard to interpret. In contrast, the required connectivity for any
computation in the form of a linear dynamical system can be directly derived
and understood with the spike coding network (SCN) framework. These networks
also have biologically realistic activity patterns and are highly robust to
cell death. Here we extend the SCN framework to directly implement any
polynomial dynamical system, without the need for training. This results in
networks requiring a mix of synapse types (fast, slow, and multiplicative),
which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we
demonstrate how to directly derive the required connectivity for several
nonlinear dynamical systems. We also show how to carry out higher-order
polynomials with coupled networks that use only pair-wise multiplicative
synapses, and provide expected numbers of connections for each synapse type.
Overall, our work demonstrates a novel method for implementing nonlinear
computations in spiking neural networks, while keeping the attractive features
of standard SCNs (robustness, realistic activity patterns, and interpretable
connectivity). Finally, we discuss the biological plausibility of our approach,
and how the high accuracy and robustness of the approach may be of interest for
neuromorphic computing.
| [
{
"created": "Tue, 8 Sep 2020 16:47:27 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 16:36:28 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Aug 2021 10:45:47 GMT",
"version": "v3"
},
{
"created": "Mon, 22 Nov 2021 10:41:58 GMT",
"version": "v4"
}
] | 2021-11-23 | [
[
"Nardin",
"Michele",
""
],
[
"Phillips",
"James W",
""
],
[
"Podlaski",
"William F",
""
],
[
"Keemink",
"Sander W",
""
]
] | The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While nonlinear computations can be implemented successfully in spiking neural networks, this requires supervised training and the resulting connectivity can be hard to interpret. In contrast, the required connectivity for any computation in the form of a linear dynamical system can be directly derived and understood with the spike coding network (SCN) framework. These networks also have biologically realistic activity patterns and are highly robust to cell death. Here we extend the SCN framework to directly implement any polynomial dynamical system, without the need for training. This results in networks requiring a mix of synapse types (fast, slow, and multiplicative), which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we demonstrate how to directly derive the required connectivity for several nonlinear dynamical systems. We also show how to carry out higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work demonstrates a novel method for implementing nonlinear computations in spiking neural networks, while keeping the attractive features of standard SCNs (robustness, realistic activity patterns, and interpretable connectivity). Finally, we discuss the biological plausibility of our approach, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing. |
2110.03526 | Mohammadreza Ahmadi | Mohammadreza Ahmadi | Tissue Engineering of Skin Regeneration and Hair Growth | null | null | null | null | q-bio.TO | http://creativecommons.org/publicdomain/zero/1.0/ | Many people suffering from skin disorders such as chronic wounds, non-healing
ulcers, and diabetic ulcers need skin repair and regeneration. Aside from the
diseases listed above, the industry needs a system for skin rejuvenation and
regeneration for cosmetic purposes. The procedure used to deliver pluripotent
stem cells to the desired tissue is known as reconstructive medicine.
Mesenchymal stem cells are the most fascinating since, when placed in the
correct setting and stimulated with the proper growth factors, they can choose
a pathway to differentiate into the desired tissue. They are also readily
available, inexpensive, simple to extract, and reproducible. These mesenchymal
stem cells are derived from bone marrow, bone, connective tissues, fat, and
other tissues. Fat is the ideal reservoir for the recently identified
mesenchymal cells; it is made up of fibrous tissue, collagen fibers, and
fibroblasts.
| [
{
"created": "Sat, 2 Oct 2021 23:53:58 GMT",
"version": "v1"
}
] | 2021-10-08 | [
[
"Ahmadi",
"Mohammadreza",
""
]
] | Many people suffering from skin disorders such as chronic wounds, non-healing ulcers, and diabetic ulcers need skin repair and regeneration. Aside from the diseases listed above, the industry needs a system for skin rejuvenation and regeneration for cosmetic purposes. The procedure used to deliver pluripotent stem cells to the desired tissue is known as reconstructive medicine. Mesenchymal stem cells are the most fascinating since, when placed in the correct setting and stimulated with the proper growth factors, they can choose a pathway to differentiate into the desired tissue. They are also readily available, inexpensive, simple to extract, and reproducible. These mesenchymal stem cells are derived from bone marrow, bone, connective tissues, fat, and other tissues. Fat is the ideal reservoir for the recently identified mesenchymal cells; it is made up of fibrous tissue, collagen fibers, and fibroblasts. |
2107.01376 | Sergii Kovalchuk | Sergii Kovalchuk (Geolab, Odessa, Ukraine) | Study of conduction, block and reflection at the excitable tissues
boundary in terms of the interval model of action potential | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Some mechanisms of cardiac arrhythmias can be presented as a composition of
elementary acts of block and reflection on the contacts of homogeneous areas of
the conducting tissue. To study these phenomena we use an axiomatic
one-dimensional model of the interaction of cells of excitable tissue. The
model has four functional parameters that determine the durations of the
cell's functional states. We show that the cells of a homogeneous excitable
tissue, depending on the ratio of the durations of the functional intervals,
can operate in a solitary-wave conduction mode or in one of three
self-generation modes. It is proved that the propagation of a solitary wave
through the boundary of homogeneous conducting tissues can be accompanied by a
block or a multiplex reflection. Block and reflection are unidirectional
phenomena, and they are not compatible on the same boundary. Systematized
rules for the transmission, block, and reflection of waves at the boundary of
homogeneous conducting tissues open up new possibilities for designing
mechanisms of generation and for analyzing complex heart rate patterns.
| [
{
"created": "Sat, 3 Jul 2021 08:17:32 GMT",
"version": "v1"
}
] | 2021-07-06 | [
[
"Kovalchuk",
"Sergii",
"",
"Geolab, Odessa, Ukraine"
]
] | Some mechanisms of cardiac arrhythmias can be presented as a composition of elementary acts of block and reflection on the contacts of homogeneous areas of the conducting tissue. To study these phenomena we use an axiomatic one-dimensional model of the interaction of cells of excitable tissue. The model has four functional parameters that determine the durations of the cell's functional states. We show that the cells of a homogeneous excitable tissue, depending on the ratio of the durations of the functional intervals, can operate in a solitary-wave conduction mode or in one of three self-generation modes. It is proved that the propagation of a solitary wave through the boundary of homogeneous conducting tissues can be accompanied by a block or a multiplex reflection. Block and reflection are unidirectional phenomena, and they are not compatible on the same boundary. Systematized rules for the transmission, block, and reflection of waves at the boundary of homogeneous conducting tissues open up new possibilities for designing mechanisms of generation and for analyzing complex heart rate patterns. |
2109.00038 | Magdalena Djordjevic | Sofija Markovic, Andjela Rodic, Igor Salom, Ognjen Milicevic,
Magdalena Djordjevic, Marko Djordjevic | COVID-19 severity determinants inferred through ecological and
epidemiological modeling | 14 pages, 7 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Determinants of COVID-19 clinical severity are commonly assessed by
transverse or longitudinal studies of the fatality counts. However, the
fatality counts depend both on disease clinical severity and transmissibility,
as more infected also lead to more deaths. Moreover, fatality counts (and
related measures such as Case Fatality Rate) are dynamic quantities, as they
appear with a delay to infections, while different geographic regions generally
belong to different points on the epidemics curve. Instead, we use
epidemiological modeling to propose a disease severity measure, which accounts
for the underlying disease dynamics. The measure corresponds to the ratio of
population averaged mortality and recovery rates (m/r). It is independent of
the disease transmission dynamics (i.e., the basic reproduction number) and has
a direct mechanistic interpretation. We use this measure to assess demographic,
medical, meteorological and environmental factors associated with the disease
severity. For this, we employ an ecological regression study design and analyze
different US states during the first disease outbreak. Principal Component
Analysis, followed by univariate and multivariate analyses based on machine
learning techniques, is used for selecting important predictors. Without using
prior knowledge from clinical studies, we recover significant predictors known
to influence disease severity, in particular age, chronic diseases, and racial
factors. Additionally, we identify long-term pollution exposure and population
density as not widely recognized (though for the pollution previously
hypothesized) predictors of the disease severity. Overall, the proposed measure
is useful for inferring severity determinants of COVID-19 and other infectious
diseases, and the obtained results may aid a better understanding of COVID-19
risks.
| [
{
"created": "Tue, 31 Aug 2021 18:49:16 GMT",
"version": "v1"
}
] | 2021-09-02 | [
[
"Markovic",
"Sofija",
""
],
[
"Rodic",
"Andjela",
""
],
[
"Salom",
"Igor",
""
],
[
"Milicevic",
"Ognjen",
""
],
[
"Djordjevic",
"Magdalena",
""
],
[
"Djordjevic",
"Marko",
""
]
] | Determinants of COVID-19 clinical severity are commonly assessed by transverse or longitudinal studies of the fatality counts. However, the fatality counts depend both on disease clinical severity and transmissibility, as more infected also lead to more deaths. Moreover, fatality counts (and related measures such as Case Fatality Rate) are dynamic quantities, as they appear with a delay to infections, while different geographic regions generally belong to different points on the epidemics curve. Instead, we use epidemiological modeling to propose a disease severity measure, which accounts for the underlying disease dynamics. The measure corresponds to the ratio of population averaged mortality and recovery rates (m/r). It is independent of the disease transmission dynamics (i.e., the basic reproduction number) and has a direct mechanistic interpretation. We use this measure to assess demographic, medical, meteorological and environmental factors associated with the disease severity. For this, we employ an ecological regression study design and analyze different US states during the first disease outbreak. Principal Component Analysis, followed by univariate and multivariate analyses based on machine learning techniques, is used for selecting important predictors. Without using prior knowledge from clinical studies, we recover significant predictors known to influence disease severity, in particular age, chronic diseases, and racial factors. Additionally, we identify long-term pollution exposure and population density as not widely recognized (though for the pollution previously hypothesized) predictors of the disease severity. Overall, the proposed measure is useful for inferring severity determinants of COVID-19 and other infectious diseases, and the obtained results may aid a better understanding of COVID-19 risks. |
1711.10814 | Hiroshi Tsukimoto | Hiroshi Tsukimoto and Takefumi Matsubara | A new fMRI data analysis method using cross validation: Negative BOLD
responses may be the deactivations of interneurons | 23 pages, 2 figures, 8 tables | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although functional magnetic resonance imaging (fMRI) is widely used for the
study of brain functions, the blood oxygenation level dependent (BOLD) effect
is incompletely understood. In particular, negative BOLD responses (NBRs)
remain controversial. This paper presents a new fMRI data analysis method,
which is
more accurate than the typical conventional method. The authors conducted the
experiments of simple repetition, and analyzed the data by the new method. The
results strongly suggest that the deactivations (NBRs) detected by the new
method are the deactivations of interneurons, because the deactivation ratios
obtained by the new method approximately equal the deactivation ratios of
interneurons reported in studies of interneurons. The (de)activations
detected by the new method are largely different from those detected by the
conventional method. The new method is more accurate than the conventional
method, and therefore the (de)activations detected by the new method may be
correct and the (de)activations detected by the conventional method may be
incorrect. A large portion of the deactivations of inhibitory interneurons is
also considered to be activations. Therefore, the right-tailed t-test, which is
usually performed in the conventional method, does not detect the whole
activation, because the right-tailed t-test only detects the activations of
excitatory neurons and neglects the deactivations of inhibitory interneurons.
Many fMRI studies performed so far with the conventional method should be
re-examined with the new method, and many results obtained so far will need to
be revised.
| [
{
"created": "Wed, 29 Nov 2017 12:32:51 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Dec 2017 08:43:26 GMT",
"version": "v2"
}
] | 2017-12-12 | [
[
"Tsukimoto",
"Hiroshi",
""
],
[
"Matsubara",
"Takefumi",
""
]
] | Although functional magnetic resonance imaging (fMRI) is widely used for the study of brain functions, the blood oxygenation level dependent (BOLD) effect is incompletely understood. In particular, negative BOLD responses (NBRs) remain controversial. This paper presents a new fMRI data analysis method, which is more accurate than the typical conventional method. The authors conducted the experiments of simple repetition, and analyzed the data by the new method. The results strongly suggest that the deactivations (NBRs) detected by the new method are the deactivations of interneurons, because the deactivation ratios obtained by the new method approximately equal the deactivation ratios of interneurons reported in studies of interneurons. The (de)activations detected by the new method are largely different from those detected by the conventional method. The new method is more accurate than the conventional method, and therefore the (de)activations detected by the new method may be correct and the (de)activations detected by the conventional method may be incorrect. A large portion of the deactivations of inhibitory interneurons is also considered to be activations. Therefore, the right-tailed t-test, which is usually performed in the conventional method, does not detect the whole activation, because the right-tailed t-test only detects the activations of excitatory neurons and neglects the deactivations of inhibitory interneurons. Many fMRI studies performed so far with the conventional method should be re-examined with the new method, and many results obtained so far will need to be revised. |
1005.4830 | Anca Radulescu | Anca Radulescu | Mechanisms explaining transitions between tonic and phasic firing in
neuronal populations as predicted by a low dimensional firing rate model | 25 pages (including references and appendices); 12 figures uploaded
as separate files | null | 10.1371/journal.pone.0012695 | null | q-bio.CB math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several firing patterns experimentally observed in neural populations have
been successfully correlated to animal behavior. Population bursting, hereby
regarded as a period of high firing rate followed by a period of quiescence, is
typically observed in groups of neurons during behavior. Biophysical
membrane-potential models of single cell bursting involve at least three
equations. Extending such models to study the collective behavior of neural
populations involves thousands of equations and can be very expensive
computationally. For this reason, low dimensional population models that
capture biophysical aspects of networks are needed.
The present paper uses a firing-rate model to study mechanisms that
trigger and stop transitions between tonic and phasic population firing. These
mechanisms are captured through a two-dimensional system, which can potentially
be extended to include interactions between different areas of the nervous
system with a small number of equations. The typical behavior of midbrain
dopaminergic neurons in the rodent is used as an example to illustrate and
interpret our results.
The model presented here can be used as a building block to study
interactions between networks of neurons. This theoretical approach may help
contextualize and understand the factors involved in regulating burst firing in
populations and how it may modulate distinct aspects of behavior.
| [
{
"created": "Wed, 26 May 2010 14:34:30 GMT",
"version": "v1"
}
] | 2015-05-19 | [
[
"Radulescu",
"Anca",
""
]
] | Several firing patterns experimentally observed in neural populations have been successfully correlated to animal behavior. Population bursting, hereby regarded as a period of high firing rate followed by a period of quiescence, is typically observed in groups of neurons during behavior. Biophysical membrane-potential models of single cell bursting involve at least three equations. Extending such models to study the collective behavior of neural populations involves thousands of equations and can be very expensive computationally. For this reason, low dimensional population models that capture biophysical aspects of networks are needed. The present paper uses a firing-rate model to study mechanisms that trigger and stop transitions between tonic and phasic population firing. These mechanisms are captured through a two-dimensional system, which can potentially be extended to include interactions between different areas of the nervous system with a small number of equations. The typical behavior of midbrain dopaminergic neurons in the rodent is used as an example to illustrate and interpret our results. The model presented here can be used as a building block to study interactions between networks of neurons. This theoretical approach may help contextualize and understand the factors involved in regulating burst firing in populations and how it may modulate distinct aspects of behavior. |
2106.15733 | Aritro Sinha Roy | Aritro Sinha Roy | Optimal Background Correction in Double Quantum Coherence Electron Spin
Resonance Spectroscopy for Accurate Data Analysis | null | null | null | null | q-bio.QM physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electron spin resonance (ESR) pulsed dipolar spectroscopy (PDS) is used in
protein 3D structure determination. However, the accuracy of the signal
analysis depends heavily on the background correction process. In this work, we
derive the functional forms of double quantum coherence (DQC) ESR signal in
typical frozen samples of micro-molar concentration, quantifying both the
intramolecular and the background contributions.
| [
{
"created": "Tue, 29 Jun 2021 21:39:27 GMT",
"version": "v1"
}
] | 2021-07-01 | [
[
"Roy",
"Aritro Sinha",
""
]
] | Electron spin resonance (ESR) pulsed dipolar spectroscopy (PDS) is used in protein 3D structure determination. However, the accuracy of the signal analysis depends heavily on the background correction process. In this work, we derive the functional forms of double quantum coherence (DQC) ESR signal in typical frozen samples of micro-molar concentration, quantifying both the intramolecular and the background contributions. |
1506.01142 | Gerardo F. Goya | G.F. Goya, L. Asin, M. P. Calatayud, A. Tres and M.R. Ibarra | Cell bystander effect induced by radiofrequency electromagnetic fields
and magnetic nanoparticles | 16 pages, 4 figures, submitted to International Journal of Radiation
Biology | null | null | null | q-bio.SC physics.bio-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Induced effects by direct exposure to ionizing radiation (IR) are a central
issue in many fields like radiation protection, clinic diagnosis and
oncological therapies. Direct irradiation at certain doses induce cell death,
but similar effects can also occur in cells no directly exposed to IR, a
mechanism known as bystander effect. Non-IR (radiofrequency waves) can induce
the death of cells loaded with MNPs in a focused oncological therapy known as
magnetic hyperthermia. Indirect mechanisms are also able to induce the death of
unloaded MNPs cells. Using in vitro cell models, we found that colocalization
of the MNPs at the lysosomes and the non-increase of the temperature induces
bystander effect under non-IR. Our results provide a landscape in which
bystander effects are a more general mechanism, up to now only observed and
clinically used in the field of radiotherapy.
| [
{
"created": "Wed, 3 Jun 2015 06:58:47 GMT",
"version": "v1"
}
] | 2015-06-04 | [
[
"Goya",
"G. F.",
""
],
[
"Asin",
"L.",
""
],
[
"Calatayud",
"M. P.",
""
],
[
"Tres",
"A.",
""
],
[
"Ibarra",
"M. R.",
""
]
] | Induced effects by direct exposure to ionizing radiation (IR) are a central issue in many fields such as radiation protection, clinical diagnosis, and oncological therapies. Direct irradiation at certain doses induces cell death, but similar effects can also occur in cells not directly exposed to IR, a mechanism known as the bystander effect. Non-ionizing radiation (non-IR, i.e., radiofrequency waves) can induce the death of cells loaded with magnetic nanoparticles (MNPs) in a focused oncological therapy known as magnetic hyperthermia. Indirect mechanisms are also able to induce the death of cells not loaded with MNPs. Using in vitro cell models, we found that colocalization of the MNPs at the lysosomes, without an increase in temperature, induces a bystander effect under non-IR. Our results provide a landscape in which bystander effects are a more general mechanism, up to now only observed and clinically used in the field of radiotherapy. |
0907.2192 | Charles Ross FBCS FIAP FIMIS | Charles Ross and Shirley Redpath | Physical Foundations of Consciousness: Brain Organisation: The Role of
Synapses | 9 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have analysed the many facets of Consciousness into two distinct
categories. First: the organisational state of the neural networks at any one
time, which determines whether a person is conscious - awake, or unconscious -
asleep. Second: the processes that underlie the traffic of electrical signals
across these networks that accounts for all the experiences of conscious
awareness. This paper addresses the former; namely, how the state of the
billions of neural networks and the trillions of additional axons, dendrites
and synapses varies over the daily cycle - what physically changes when we go
to sleep - what happens when we wake up. We submit that the widths of synaptic
clefts are not fixed, but are variable, and that this variable tension across
the synapses is the neural correlate of consciousness.
| [
{
"created": "Mon, 13 Jul 2009 15:39:43 GMT",
"version": "v1"
}
] | 2009-07-14 | [
[
"Ross",
"Charles",
""
],
[
"Redpath",
"Shirley",
""
]
] | We have analysed the many facets of Consciousness into two distinct categories. First: the organisational state of the neural networks at any one time, which determines whether a person is conscious - awake, or unconscious - asleep. Second: the processes that underlie the traffic of electrical signals across these networks, which account for all the experiences of conscious awareness. This paper addresses the former; namely, how the state of the billions of neural networks and the trillions of additional axons, dendrites and synapses varies over the daily cycle - what physically changes when we go to sleep - what happens when we wake up. We submit that the widths of synaptic clefts are not fixed, but are variable, and that this variable tension across the synapses is the neural correlate of consciousness. |
2305.15590 | Marc Boubnovski Martell | Marc Boubnovski Martell, Kristofer Linton-Reid, Sumeet Hindocha,
Mitchell Chen, OCTAPUS-AI, Paula Moreno, Marina \'Alvarez-Benito, \'Angel
Salvatierra, Richard Lee, Joram M. Posma, Marco A Calzado and Eric O Aboagye | Deep Representation Learning of Tissue Metabolome and Computed
Tomography Images Annotates Non-invasive Classification and Prognosis
Prediction of NSCLC | null | null | null | null | q-bio.QM eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rich chemical information from tissue metabolomics provides a powerful
means to elaborate tissue physiology or tumor characteristics at cellular and
tumor microenvironment levels. However, the process of obtaining such
information requires invasive biopsies, is costly, and can delay clinical
patient management. Conversely, computed tomography (CT) is a clinical standard
of care but does not intuitively harbor histological or prognostic information.
Furthermore, the ability to embed metabolome information into CT to
subsequently use the learned representation for classification or prognosis has
yet to be described. This study develops a deep learning-based framework --
tissue-metabolomic-radiomic-CT (TMR-CT) by combining 48 paired CT images and
tumor/normal tissue metabolite intensities to generate ten image embeddings to
infer metabolite-derived representation from CT alone. In clinical NSCLC
settings, we ascertain whether TMR-CT achieves state-of-the-art results in
solving histology classification/prognosis tasks in an unseen international CT
dataset of 742 patients. TMR-CT non-invasively determines histological classes
- adenocarcinoma/ squamous cell carcinoma with an F1-score=0.78 and further
asserts patients' prognosis with a c-index=0.72, surpassing the performance of
radiomics models and clinical features. Additionally, our work shows the
potential to generate informative biology-inspired CT-led features to explore
connections between hard-to-obtain tissue metabolic profiles and routine
lesion-derived image data.
| [
{
"created": "Wed, 24 May 2023 21:57:29 GMT",
"version": "v1"
},
{
"created": "Fri, 26 May 2023 09:47:03 GMT",
"version": "v2"
}
] | 2023-05-29 | [
[
"Martell",
"Marc Boubnovski",
""
],
[
"Linton-Reid",
"Kristofer",
""
],
[
"Hindocha",
"Sumeet",
""
],
[
"Chen",
"Mitchell",
""
],
[
"OCTAPUS-AI",
"",
""
],
[
"Moreno",
"Paula",
""
],
[
"Álvarez-Benito",
"Marina",
""
],
[
"Salvatierra",
"Ángel",
""
],
[
"Lee",
"Richard",
""
],
[
"Posma",
"Joram M.",
""
],
[
"Calzado",
"Marco A",
""
],
[
"Aboagye",
"Eric O",
""
]
] | The rich chemical information from tissue metabolomics provides a powerful means to elaborate tissue physiology or tumor characteristics at cellular and tumor microenvironment levels. However, the process of obtaining such information requires invasive biopsies, is costly, and can delay clinical patient management. Conversely, computed tomography (CT) is a clinical standard of care but does not intuitively harbor histological or prognostic information. Furthermore, the ability to embed metabolome information into CT to subsequently use the learned representation for classification or prognosis has yet to be described. This study develops a deep learning-based framework -- tissue-metabolomic-radiomic-CT (TMR-CT) by combining 48 paired CT images and tumor/normal tissue metabolite intensities to generate ten image embeddings to infer metabolite-derived representation from CT alone. In clinical NSCLC settings, we ascertain whether TMR-CT achieves state-of-the-art results in solving histology classification/prognosis tasks in an unseen international CT dataset of 742 patients. TMR-CT non-invasively determines histological classes - adenocarcinoma/ squamous cell carcinoma with an F1-score=0.78 and further asserts patients' prognosis with a c-index=0.72, surpassing the performance of radiomics models and clinical features. Additionally, our work shows the potential to generate informative biology-inspired CT-led features to explore connections between hard-to-obtain tissue metabolic profiles and routine lesion-derived image data. |
2404.12565 | Andreas Tiffeau-Mayer | James Henderson, Yuta Nagano, Martina Milighetti, Andreas
Tiffeau-Mayer | Limits on Inferring T-cell Specificity from Partial Information | 24 pages, 15 figures | null | null | null | q-bio.BM cond-mat.stat-mech cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | A key challenge in molecular biology is to decipher the mapping of protein
sequence to function. To perform this mapping requires the identification of
sequence features most informative about function. Here, we quantify the amount
of information (in bits) that T-cell receptor (TCR) sequence features provide
about antigen specificity. We identify informative features by their degree of
conservation among antigen-specific receptors relative to null expectations. We
find that TCR specificity synergistically depends on the hypervariable regions
of both receptor chains, with a degree of synergy that strongly depends on the
ligand. Using a coincidence-based approach to measuring information enables us
to directly bound the accuracy with which TCR specificity can be predicted from
partial matches to reference sequences. We anticipate that our statistical
framework will be of use for developing machine learning models for TCR
specificity prediction and for optimizing TCRs for cell therapies. The proposed
coincidence-based information measures might find further applications in
bounding the performance of pairwise classifiers in other fields.
| [
{
"created": "Fri, 19 Apr 2024 01:02:08 GMT",
"version": "v1"
}
] | 2024-04-22 | [
[
"Henderson",
"James",
""
],
[
"Nagano",
"Yuta",
""
],
[
"Milighetti",
"Martina",
""
],
[
"Tiffeau-Mayer",
"Andreas",
""
]
] | A key challenge in molecular biology is to decipher the mapping of protein sequence to function. To perform this mapping requires the identification of sequence features most informative about function. Here, we quantify the amount of information (in bits) that T-cell receptor (TCR) sequence features provide about antigen specificity. We identify informative features by their degree of conservation among antigen-specific receptors relative to null expectations. We find that TCR specificity synergistically depends on the hypervariable regions of both receptor chains, with a degree of synergy that strongly depends on the ligand. Using a coincidence-based approach to measuring information enables us to directly bound the accuracy with which TCR specificity can be predicted from partial matches to reference sequences. We anticipate that our statistical framework will be of use for developing machine learning models for TCR specificity prediction and for optimizing TCRs for cell therapies. The proposed coincidence-based information measures might find further applications in bounding the performance of pairwise classifiers in other fields. |
1112.2630 | Leonid Shapiro | Leonid A. Shapiro | Generalized Functions & Experimental Methods of Obtaining Statistical
Variable-Quantities Which Fully Determine Preferences in Choice-Rich
Environments | 26 pages, 1 figures, Version 3 (January 19), Version 2 (December 31),
Version 1 (December 12), in review process | null | null | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Preferences of individuals are distributions of elements generated by
generalized functions. Models of economic decision-making derived from such
distributions are consistent with results of physiological experiments, and
explain any behavioral situations without simplifying assumptions. Quantities
in such models precisely correspond to experimentally obtainable physiological
observables which determine statistical properties of the central nervous system
as it represents different stimuli. A graphical method for consistently and
quantitatively interpreting or visualizing physiological data at a glance
within the context of economic models is demonstrated.
| [
{
"created": "Mon, 12 Dec 2011 17:19:26 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jan 2012 20:55:17 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Jan 2012 20:46:53 GMT",
"version": "v3"
}
] | 2012-01-20 | [
[
"Shapiro",
"Leonid A.",
""
]
] | Preferences of individuals are distributions of elements generated by generalized functions. Models of economic decision-making derived from such distributions are consistent with results of physiological experiments, and explain any behavioral situations without simplifying assumptions. Quantities in such models precisely correspond to experimentally obtainable physiological observables which determine statistical properties of the central nervous system as it represents different stimuli. A graphical method for consistently and quantitatively interpreting or visualizing physiological data at a glance within the context of economic models is demonstrated. |
2407.04799 | Noel Cadigan Dr | Noel G. Cadigan, Andrea M. Perreault, Hoang Nguyen, Jiaying Chen,
Andres Beita-Jimenez, Natalie Fuller, Krista Ransier | A state-space catch-at-length assessment model for redfish on the
Eastern Grand Bank of Newfoundland reveals large uncertainties in data and
stock dynamics | 27 pages including references, tables, and figures. In addition 12
pages figures in an Appendix | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We developed a state-space age-structured catch-at-length (ACL) assessment
model for redfish in NAFO Divisions 3LN. The model was developed to address
limitations in the surplus production model that was previously used to assess
this stock. The ACL model included temporal variations in recruitment, growth,
and mortality rates, which were limitations identified for the surplus
production model. Our ACL model revealed some important discrepancies in survey
and fishery length compositions. Our model also required large population
dynamics process errors to achieve good fits to survey indices and catch
estimates, which also demonstrated that additional understanding of these data
and other model assumptions is required. As such, we do not propose the ACL
model to provide management advice for 3LN redfish, but we do provide research
recommendations that should provide a better basis to model the 3LN redfish
stock dynamics. Recommendations include implementing sampling programs to
determine redfish species/ecotypes in commercial and research survey catches
and improving biological sampling for maturity and age.
| [
{
"created": "Fri, 5 Jul 2024 18:15:35 GMT",
"version": "v1"
}
] | 2024-07-09 | [
[
"Cadigan",
"Noel G.",
""
],
[
"Perreault",
"Andrea M.",
""
],
[
"Nguyen",
"Hoang",
""
],
[
"Chen",
"Jiaying",
""
],
[
"Beita-Jimenez",
"Andres",
""
],
[
"Fuller",
"Natalie",
""
],
[
"Ransier",
"Krista",
""
]
] | We developed a state-space age-structured catch-at-length (ACL) assessment model for redfish in NAFO Divisions 3LN. The model was developed to address limitations in the surplus production model that was previously used to assess this stock. The ACL model included temporal variations in recruitment, growth, and mortality rates, which were limitations identified for the surplus production model. Our ACL model revealed some important discrepancies in survey and fishery length compositions. Our model also required large population dynamics process errors to achieve good fits to survey indices and catch estimates, which also demonstrated that additional understanding of these data and other model assumptions is required. As such, we do not propose the ACL model to provide management advice for 3LN redfish, but we do provide research recommendations that should provide a better basis to model the 3LN redfish stock dynamics. Recommendations include implementing sampling programs to determine redfish species/ecotypes in commercial and research survey catches and improving biological sampling for maturity and age. |
2005.02251 | Vladislav Goncharenko | V. Goncharenko, R. Grigoryan, A. Samokhina | Raccoons vs Demons: multiclass labeled P300 dataset | null | null | null | null | q-bio.NC cs.HC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We publish a dataset of a visual P300 BCI performed in the Virtual Reality
(VR) game Raccoons versus Demons (RvD). The data contains rich labels
incorporating information about the chosen stimulus, enabling us to estimate
the model's confidence at each stimulus prediction stage. Data and experiment
code are available at
https://gitlab.com/impulse-neiry_public/raccoons-vs-demons
| [
{
"created": "Wed, 22 Apr 2020 20:10:31 GMT",
"version": "v1"
},
{
"created": "Mon, 11 May 2020 15:47:38 GMT",
"version": "v2"
}
] | 2020-05-12 | [
[
"Goncharenko",
"V.",
""
],
[
"Grigoryan",
"R.",
""
],
[
"Samokhina",
"A.",
""
]
] | We publish a dataset of a visual P300 BCI performed in the Virtual Reality (VR) game Raccoons versus Demons (RvD). The data contains rich labels incorporating information about the chosen stimulus, enabling us to estimate the model's confidence at each stimulus prediction stage. Data and experiment code are available at https://gitlab.com/impulse-neiry_public/raccoons-vs-demons |
2211.09240 | Rados{\l}aw Kycia | Agata Dziwulska-Hunek, Agnieszka Niemczynowicz, Rados{\l}aw A. Kycia,
Arkadiusz Matwijczuk, Krzysztof Kornarzy\'nski, Joanna Stadnik, Mariusz
Szymanek | Stimulation of soy seeds using environmentally friendly magnetic and
electric fields | null | null | null | null | q-bio.QM cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study analyzes the impact of constant and alternating magnetic fields and
alternating electric fields on various growth parameters of soy plants: the
germination energy and capacity, plants emergence and number, the Yield(II) of
the fresh mass of seedlings, protein content, and photosynthetic parameters.
Four cultivars were used: MAVKA, MERLIN, VIOLETTA, and ANUSZKA. Moreover, the
advanced Machine Learning processing pipeline was proposed to distinguish the
impact of physical factors on photosynthetic parameters. It is possible to
distinguish exposition on different physical factors for the first three
cultivars; therefore, it indicates that the EM factors have some observable
effect on soy plants. Moreover, some influence of physical factors on growth
parameters was observed. The use of ELM (Electromagnetic) fields had a positive
impact on the germination rate in Merlin plants. The highest values were
recorded for the constant magnetic field (CMF) - Merlin, and the lowest for the
alternating electric field (AEF) - Violetta. An increase in terms of emergence
and number of plants after seed stimulation was observed for the Mavka
cultivar, except for the AEF treatment (number of plants after 30 days) (...)
| [
{
"created": "Wed, 16 Nov 2022 22:05:58 GMT",
"version": "v1"
}
] | 2022-11-18 | [
[
"Dziwulska-Hunek",
"Agata",
""
],
[
"Niemczynowicz",
"Agnieszka",
""
],
[
"Kycia",
"Radosław A.",
""
],
[
"Matwijczuk",
"Arkadiusz",
""
],
[
"Kornarzyński",
"Krzysztof",
""
],
[
"Stadnik",
"Joanna",
""
],
[
"Szymanek",
"Mariusz",
""
]
] | The study analyzes the impact of constant and alternating magnetic fields and alternating electric fields on various growth parameters of soy plants: the germination energy and capacity, plants emergence and number, the Yield(II) of the fresh mass of seedlings, protein content, and photosynthetic parameters. Four cultivars were used: MAVKA, MERLIN, VIOLETTA, and ANUSZKA. Moreover, the advanced Machine Learning processing pipeline was proposed to distinguish the impact of physical factors on photosynthetic parameters. It is possible to distinguish exposition on different physical factors for the first three cultivars; therefore, it indicates that the EM factors have some observable effect on soy plants. Moreover, some influence of physical factors on growth parameters was observed. The use of ELM (Electromagnetic) fields had a positive impact on the germination rate in Merlin plants. The highest values were recorded for the constant magnetic field (CMF) - Merlin, and the lowest for the alternating electric field (AEF) - Violetta. An increase in terms of emergence and number of plants after seed stimulation was observed for the Mavka cultivar, except for the AEF treatment (number of plants after 30 days) (...) |
2306.09855 | Gianmarco Tiddia | Bruno Golosio, Jose Villamar, Gianmarco Tiddia, Elena Pastorelli,
Jonas Stapmanns, Viviana Fanti, Pier Stanislao Paolucci, Abigail Morrison and
Johanna Senk | Runtime Construction of Large-Scale Spiking Neuronal Network Models on
GPU Devices | 29 pages, 9 figures | Appl. Sci. 2023, 13(17), 9598 | 10.3390/app13179598 | null | q-bio.NC cs.NE | http://creativecommons.org/licenses/by/4.0/ | Simulation speed matters for neuroscientific research: this includes not only
how quickly the simulated model time of a large-scale spiking neuronal network
progresses, but also how long it takes to instantiate the network model in
computer memory. On the hardware side, acceleration via highly parallel GPUs is
being increasingly utilized. On the software side, code generation approaches
ensure highly optimized code, at the expense of repeated code regeneration and
recompilation after modifications to the network model. Aiming for a greater
flexibility with respect to iterative model changes, here we propose a new
method for creating network connections interactively, dynamically, and
directly in GPU memory through a set of commonly used high-level connection
rules. We validate the simulation performance with both consumer and data
center GPUs on two neuroscientifically relevant models: a cortical microcircuit
of about 77,000 leaky-integrate-and-fire neuron models and 300 million static
synapses, and a two-population network recurrently connected using a variety of
connection rules. With our proposed ad hoc network instantiation, both network
construction and simulation times are comparable or shorter than those obtained
with other state-of-the-art simulation technologies, while still meeting the
flexibility demands of explorative network modeling.
| [
{
"created": "Fri, 16 Jun 2023 14:08:27 GMT",
"version": "v1"
}
] | 2023-09-01 | [
[
"Golosio",
"Bruno",
""
],
[
"Villamar",
"Jose",
""
],
[
"Tiddia",
"Gianmarco",
""
],
[
"Pastorelli",
"Elena",
""
],
[
"Stapmanns",
"Jonas",
""
],
[
"Fanti",
"Viviana",
""
],
[
"Paolucci",
"Pier Stanislao",
""
],
[
"Morrison",
"Abigail",
""
],
[
"Senk",
"Johanna",
""
]
] | Simulation speed matters for neuroscientific research: this includes not only how quickly the simulated model time of a large-scale spiking neuronal network progresses, but also how long it takes to instantiate the network model in computer memory. On the hardware side, acceleration via highly parallel GPUs is being increasingly utilized. On the software side, code generation approaches ensure highly optimized code, at the expense of repeated code regeneration and recompilation after modifications to the network model. Aiming for a greater flexibility with respect to iterative model changes, here we propose a new method for creating network connections interactively, dynamically, and directly in GPU memory through a set of commonly used high-level connection rules. We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models: a cortical microcircuit of about 77,000 leaky-integrate-and-fire neuron models and 300 million static synapses, and a two-population network recurrently connected using a variety of connection rules. With our proposed ad hoc network instantiation, both network construction and simulation times are comparable or shorter than those obtained with other state-of-the-art simulation technologies, while still meeting the flexibility demands of explorative network modeling. |
2303.10819 | Ivan Junier | Ivan Junier and Elham Ghobadpour and Olivier Espeli and Ralf Everaers | DNA supercoiling in bacteria: state of play and challenges from a
viewpoint of physics based modeling | 11 figures | Front. Microbiol. 14:1192831 (2023) | 10.3389/fmicb.2023.1192831 | null | q-bio.BM physics.bio-ph q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | DNA supercoiling is central to many fundamental processes of living
organisms. Its average level along the chromosome and over time reflects the
dynamic equilibrium of opposite activities of topoisomerases, which are
required to relax mechanical stresses that are inevitably produced during DNA
replication and gene transcription. Supercoiling affects all scales of the
spatio-temporal organization of bacterial DNA, from the base pair to the large
scale chromosome conformation. Highlighted in vitro and in vivo in the 1960s
and 1970s, respectively, the first physical models were proposed concomitantly
in order to predict the deformation properties of the double helix. About
fifteen years later, polymer physics models demonstrated on larger scales the
plectonemic nature and the tree-like organization of supercoiled DNA. Since
then, many works have tried to establish a better understanding of the multiple
structuring and physiological properties of bacterial DNA in thermodynamic
equilibrium and far from equilibrium. The purpose of this essay is to address
upcoming challenges by thoroughly exploring the relevance, predictive capacity,
and limitations of current physical models, with a specific focus on structural
properties beyond the scale of the double helix. We discuss more particularly
the problem of DNA conformations, the interplay between DNA supercoiling with
gene transcription and DNA replication, its role on nucleoid formation and,
finally, the problem of scaling up models. Our primary objective is to foster
increased collaboration between physicists and biologists. To achieve this, we
have reduced the respective jargon to a minimum and we provide some explanatory
background material for the two communities.
| [
{
"created": "Mon, 20 Mar 2023 01:25:33 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Sep 2023 15:47:14 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Oct 2023 10:06:35 GMT",
"version": "v3"
},
{
"created": "Fri, 6 Oct 2023 12:53:13 GMT",
"version": "v4"
}
] | 2023-10-09 | [
[
"Junier",
"Ivan",
""
],
[
"Ghobadpour",
"Elham",
""
],
[
"Espeli",
"Olivier",
""
],
[
"Everaers",
"Ralf",
""
]
] | DNA supercoiling is central to many fundamental processes of living organisms. Its average level along the chromosome and over time reflects the dynamic equilibrium of opposite activities of topoisomerases, which are required to relax mechanical stresses that are inevitably produced during DNA replication and gene transcription. Supercoiling affects all scales of the spatio-temporal organization of bacterial DNA, from the base pair to the large scale chromosome conformation. Highlighted in vitro and in vivo in the 1960s and 1970s, respectively, the first physical models were proposed concomitantly in order to predict the deformation properties of the double helix. About fifteen years later, polymer physics models demonstrated on larger scales the plectonemic nature and the tree-like organization of supercoiled DNA. Since then, many works have tried to establish a better understanding of the multiple structuring and physiological properties of bacterial DNA in thermodynamic equilibrium and far from equilibrium. The purpose of this essay is to address upcoming challenges by thoroughly exploring the relevance, predictive capacity, and limitations of current physical models, with a specific focus on structural properties beyond the scale of the double helix. We discuss more particularly the problem of DNA conformations, the interplay between DNA supercoiling with gene transcription and DNA replication, its role on nucleoid formation and, finally, the problem of scaling up models. Our primary objective is to foster increased collaboration between physicists and biologists. To achieve this, we have reduced the respective jargon to a minimum and we provide some explanatory background material for the two communities. |
2105.04305 | Jan Krumsiek | Kelsey Chetnik, Elisa Benedetti, Daniel P. Gomari, Annalise
Schweickart, Richa Batra, Mustafa Buyukozkan, Zeyu Wang, Matthias Arnold,
Jonas Zierer, Karsten Suhre, Jan Krumsiek | maplet: An extensible R toolbox for modular and reproducible omics
pipelines | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | This paper presents maplet, an open-source R package for the creation of
highly customizable, fully reproducible statistical pipelines for omics data
analysis, with a special focus on metabolomics-based methods. It builds on the
SummarizedExperiment data structure to create a centralized pipeline framework
for storing data, analysis steps, results, and visualizations. maplet's key
design feature is its modularity, which offers several advantages, such as
ensuring code quality through the individual maintenance of functions and
promoting collaborative development by removing technical barriers to code
contribution. With over 90 functions, the package includes a wide range of
functionalities, covering many widely used statistical approaches and data
visualization techniques.
| [
{
"created": "Thu, 6 May 2021 18:54:13 GMT",
"version": "v1"
}
] | 2021-05-11 | [
[
"Chetnik",
"Kelsey",
""
],
[
"Benedetti",
"Elisa",
""
],
[
"Gomari",
"Daniel P.",
""
],
[
"Schweickart",
"Annalise",
""
],
[
"Batra",
"Richa",
""
],
[
"Buyukozkan",
"Mustafa",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Arnold",
"Matthias",
""
],
[
"Zierer",
"Jonas",
""
],
[
"Suhre",
"Karsten",
""
],
[
"Krumsiek",
"Jan",
""
]
] | This paper presents maplet, an open-source R package for the creation of highly customizable, fully reproducible statistical pipelines for omics data analysis, with a special focus on metabolomics-based methods. It builds on the SummarizedExperiment data structure to create a centralized pipeline framework for storing data, analysis steps, results, and visualizations. maplet's key design feature is its modularity, which offers several advantages, such as ensuring code quality through the individual maintenance of functions and promoting collaborative development by removing technical barriers to code contribution. With over 90 functions, the package includes a wide range of functionalities, covering many widely used statistical approaches and data visualization techniques. |
1307.0737 | Daniel B. Weissman | D. B. Weissman and O. Hallatschek | The rate of adaptation in large sexual populations with linear
chromosomes | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large populations, multiple beneficial mutations may be simultaneously
spreading. In asexual populations, these mutations must either arise on the
same background or compete against each other. In sexual populations,
recombination can bring together beneficial alleles from different backgrounds,
but tightly linked alleles may still greatly interfere with each other. We show
for well-mixed populations that when this interference is strong, the genome
can be seen as consisting of many effectively asexual stretches linked
together. The rate at which beneficial alleles fix is thus roughly proportional
to the rate of recombination, and depends only logarithmically on the mutation
supply and the strength of selection. Our scaling arguments also allow us to
predict, with reasonable accuracy, the distribution of effects of fixed
mutations when new mutations have broadly-distributed effects. We focus on the
regime in which crossovers occur more frequently than beneficial mutations, as
is likely to be the case for many natural populations.
| [
{
"created": "Tue, 2 Jul 2013 15:57:59 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Dec 2013 04:52:05 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Dec 2013 05:57:09 GMT",
"version": "v3"
}
] | 2013-12-19 | [
[
"Weissman",
"D. B.",
""
],
[
"Hallatschek",
"O.",
""
]
] | In large populations, multiple beneficial mutations may be simultaneously spreading. In asexual populations, these mutations must either arise on the same background or compete against each other. In sexual populations, recombination can bring together beneficial alleles from different backgrounds, but tightly linked alleles may still greatly interfere with each other. We show for well-mixed populations that when this interference is strong, the genome can be seen as consisting of many effectively asexual stretches linked together. The rate at which beneficial alleles fix is thus roughly proportional to the rate of recombination, and depends only logarithmically on the mutation supply and the strength of selection. Our scaling arguments also allow us to predict, with reasonable accuracy, the distribution of effects of fixed mutations when new mutations have broadly-distributed effects. We focus on the regime in which crossovers occur more frequently than beneficial mutations, as is likely to be the case for many natural populations. |
1302.2724 | Michael Assaf | Michael Assaf, Elijah Roberts, Zaida Luthey-Schulten and Nigel
Goldenfeld | Extrinsic noise driven phenotype switching in a self-regulating gene | 5 pages, 4 figures | null | 10.1103/PhysRevLett.111.058102 | null | q-bio.MN cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to inherent noise in intracellular networks cellular decisions can be
random, so genetically identical cells can display different phenotypic
behavior even in identical environments. Most previous work in understanding
the decision-making process has focused on the role of intrinsic noise in these
systems. Yet, especially in the high copy-number regime, extrinsic noise has
been shown to be much more significant. Here, using a prototypical example of a
bistable self-regulating gene model, we develop a theoretical framework
describing the combined effect of intrinsic and extrinsic noise on the dynamics
of stochastic genetic switches. Employing our theory and Monte Carlo
simulations, we show that extrinsic noise not only significantly alters the
lifetimes of the phenotypic states, but can induce bistability in unexpected
regions of parameter space, and may fundamentally change the escape mechanism.
These results have implications for interpreting experimentally observed
heterogeneity in cellular populations and for stochastic modeling of cellular
decision processes.
| [
{
"created": "Tue, 12 Feb 2013 07:41:10 GMT",
"version": "v1"
}
] | 2015-06-15 | [
[
"Assaf",
"Michael",
""
],
[
"Roberts",
"Elijah",
""
],
[
"Luthey-Schulten",
"Zaida",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] | Due to inherent noise in intracellular networks cellular decisions can be random, so genetically identical cells can display different phenotypic behavior even in identical environments. Most previous work in understanding the decision-making process has focused on the role of intrinsic noise in these systems. Yet, especially in the high copy-number regime, extrinsic noise has been shown to be much more significant. Here, using a prototypical example of a bistable self-regulating gene model, we develop a theoretical framework describing the combined effect of intrinsic and extrinsic noise on the dynamics of stochastic genetic switches. Employing our theory and Monte Carlo simulations, we show that extrinsic noise not only significantly alters the lifetimes of the phenotypic states, but can induce bistability in unexpected regions of parameter space, and may fundamentally change the escape mechanism. These results have implications for interpreting experimentally observed heterogeneity in cellular populations and for stochastic modeling of cellular decision processes. |
1107.0313 | Michael Bachmann | Tristan Bereau, Michael Bachmann, and Markus Deserno | Interplay between Secondary and Tertiary Structure Formation in Protein
Folding Cooperativity | 3 pages, 3 figures | J. Am. Chem. Soc. 132, 13129-13131 (2010) | 10.1016/j.bpj.2010.12.1361 | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein folding cooperativity is defined by the nature of the finite-size
thermodynamic transition exhibited upon folding: two-state transitions show a
free energy barrier between the folded and unfolded ensembles, while downhill
folding is barrierless. A microcanonical analysis, where the energy is the
natural variable, has been shown to be better suited to unambiguously
characterize the
nature of the transition compared to its canonical counterpart. Replica
exchange molecular dynamics simulations of a high resolution coarse-grained
model allow for the accurate evaluation of the density of states, in order to
extract precise thermodynamic information, and measure its impact on structural
features. The method is applied to three helical peptides: a short helix shows
sharp features of a two-state folder, while a longer helix and a three-helix
bundle exhibit downhill and two-state transitions, respectively. Extending the
results of lattice simulations and theoretical models, we find that it is the
interplay between secondary structure and the loss of non-native tertiary
contacts which determines the nature of the transition.
| [
{
"created": "Fri, 1 Jul 2011 19:59:45 GMT",
"version": "v1"
}
] | 2017-08-23 | [
[
"Bereau",
"Tristan",
""
],
[
"Bachmann",
"Michael",
""
],
[
"Deserno",
"Markus",
""
]
] | Protein folding cooperativity is defined by the nature of the finite-size thermodynamic transition exhibited upon folding: two-state transitions show a free energy barrier between the folded and unfolded ensembles, while downhill folding is barrierless. A microcanonical analysis, where the energy is the natural variable, has been shown to be better suited to unambiguously characterize the nature of the transition compared to its canonical counterpart. Replica exchange molecular dynamics simulations of a high resolution coarse-grained model allow for the accurate evaluation of the density of states, in order to extract precise thermodynamic information, and measure its impact on structural features. The method is applied to three helical peptides: a short helix shows sharp features of a two-state folder, while a longer helix and a three-helix bundle exhibit downhill and two-state transitions, respectively. Extending the results of lattice simulations and theoretical models, we find that it is the interplay between secondary structure and the loss of non-native tertiary contacts which determines the nature of the transition. |
2201.09837 | Gustavo Mockaitis | Mahmood Mahmoodi-Eshkaftaki, Gustavo Mockaitis, Mohammad Rafie Rafiee | Dynamic optimization of volatile fatty acids to enrich biohydrogen
production using a deep learning neural network | null | null | null | null | q-bio.QM q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A new strategy was developed to investigate the effect of volatile fatty
acids (VFAs) on the efficiency of biogas production with a focus on improving
bio-H$_2$. The inoculum used, anaerobic granular sludge obtained from a UASB
reactor treating poultry slaughterhouse wastewater, was pretreated with five
different pretreatments. The relationship between VFAs and biogas compounds was
studied as time-dependent components. In time-dependent processes with small
sample size data, regression models may not be good enough at estimating
responses. Therefore, a deep learning neural network (DNN) model was developed
to estimate the biogas compounds based on the VFAs. The accuracy of this model
to predict the biogas compounds was higher than that of multivariate regression
models. Further, it could predict the effect of time changes on biogas
compounds. Analysis showed that all the pretreatments were able to increase the
ratio of butyric acid / acetic acid successfully, decrease propionic acid
drastically, and increase the efficiency of bio-H$_2$ production. As
discovered, butyric acid had the greatest effect on bio-H$_2$, and propionic
acid had the greatest effect on CH$_4$ production. The best amounts of the VFAs
were determined using an optimization method, integrated DNN and desirability
analysis, dynamically retrained based on digestion time. Accordingly, optimal
ranges of acetic, propionic, and butyric acids were 823.2 - 1534.3, 36.3 -
47.4, and 1522 - 1822 mg/L, respectively, determined for digestion time of
25.23 - 123.63 h. These values resulted in the production of bio-H$_2$, N$_2$,
CO$_2$, and CH$_4$ in ranges of 6.4 - 26.2, 12.2 - 43.2, 5 - 25.3, and 0 - 1.4
mmol/L, respectively. The optimum ranges of VFAs are relatively wide ranges and
practically can be used in biogas plants.
| [
{
"created": "Mon, 24 Jan 2022 17:59:06 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Feb 2022 18:01:46 GMT",
"version": "v2"
}
] | 2022-02-09 | [
[
"Mahmoodi-Eshkaftaki",
"Mahmood",
""
],
[
"Mockaitis",
"Gustavo",
""
],
[
"Rafiee",
"Mohammad Rafie",
""
]
] | A new strategy was developed to investigate the effect of volatile fatty acids (VFAs) on the efficiency of biogas production with a focus on improving bio-H$_2$. The inoculum used, anaerobic granular sludge obtained from a UASB reactor treating poultry slaughterhouse wastewater, was pretreated with five different pretreatments. The relationship between VFAs and biogas compounds was studied as time-dependent components. In time-dependent processes with small sample size data, regression models may not be good enough at estimating responses. Therefore, a deep learning neural network (DNN) model was developed to estimate the biogas compounds based on the VFAs. The accuracy of this model to predict the biogas compounds was higher than that of multivariate regression models. Further, it could predict the effect of time changes on biogas compounds. Analysis showed that all the pretreatments were able to increase the ratio of butyric acid / acetic acid successfully, decrease propionic acid drastically, and increase the efficiency of bio-H$_2$ production. As discovered, butyric acid had the greatest effect on bio-H$_2$, and propionic acid had the greatest effect on CH$_4$ production. The best amounts of the VFAs were determined using an optimization method, integrated DNN and desirability analysis, dynamically retrained based on digestion time. Accordingly, optimal ranges of acetic, propionic, and butyric acids were 823.2 - 1534.3, 36.3 - 47.4, and 1522 - 1822 mg/L, respectively, determined for digestion time of 25.23 - 123.63 h. These values resulted in the production of bio-H$_2$, N$_2$, CO$_2$, and CH$_4$ in ranges of 6.4 - 26.2, 12.2 - 43.2, 5 - 25.3, and 0 - 1.4 mmol/L, respectively. The optimum ranges of VFAs are relatively wide ranges and practically can be used in biogas plants. |
1603.03694 | Philippe Marcq | V. Nier, S. Jain, C. T. Lim, S. Ishihara, B. Ladoux and P. Marcq | Inference of internal stress in a cell monolayer | 38 pages, 14 figures | null | 10.1016/j.bpj.2016.03.002 | null | q-bio.QM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We combine traction force data with Bayesian inversion to obtain an absolute
estimate of the internal stress field of a cell monolayer. The method, Bayesian
inversion stress microscopy (BISM), is validated using numerical simulations
performed in a wide range of conditions. It is robust to changes in each
ingredient of the underlying statistical model. Importantly, its accuracy does
not depend on the rheology of the tissue. We apply BISM to experimental
traction force data measured in a narrow ring of cohesive epithelial cells, and
check that the inferred stress field coincides with that obtained by direct
spatial integration of the traction force data in this quasi-one-dimensional
geometry.
| [
{
"created": "Fri, 11 Mar 2016 17:09:44 GMT",
"version": "v1"
}
] | 2016-05-04 | [
[
"Nier",
"V.",
""
],
[
"Jain",
"S.",
""
],
[
"Lim",
"C. T.",
""
],
[
"Ishihara",
"S.",
""
],
[
"Ladoux",
"B.",
""
],
[
"Marcq",
"P.",
""
]
] | We combine traction force data with Bayesian inversion to obtain an absolute estimate of the internal stress field of a cell monolayer. The method, Bayesian inversion stress microscopy (BISM), is validated using numerical simulations performed in a wide range of conditions. It is robust to changes in each ingredient of the underlying statistical model. Importantly, its accuracy does not depend on the rheology of the tissue. We apply BISM to experimental traction force data measured in a narrow ring of cohesive epithelial cells, and check that the inferred stress field coincides with that obtained by direct spatial integration of the traction force data in this quasi-one-dimensional geometry. |
2207.12827 | Thierry Mora | Andrea Mazzolini, Thierry Mora, Aleksandra M Walczak | Inspecting the interaction between HIV and the immune system through
genetic turnover | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chronic infections of the human immunodeficiency virus (HIV) create a very
complex co-evolutionary process, where the virus tries to escape the
continuously adapting host immune system. Quantitative details of this process
are largely unknown and could help in disease treatment and vaccine
development. Here we study a longitudinal dataset of ten HIV-infected people,
where both the B-cell receptors and the virus are deeply sequenced. We focus on
simple measures of turnover, which quantify how much the composition of the
viral strains and the immune repertoire change between time points. At the
single-patient level, the viral-host turnover rates do not show any
statistically significant correlation, however they correlate if the
information is aggregated across patients. In particular, we identify an
anti-correlation: large changes in the viral pool composition come with small
changes in the B-cell receptor repertoire. This result seems to contradict the
naive expectation that when the virus mutates quickly, the immune repertoire
needs to change to keep up. However, we show that the observed anti-correlation
naturally emerges and can be understood in terms of simple population-genetics
models.
| [
{
"created": "Tue, 26 Jul 2022 11:47:26 GMT",
"version": "v1"
}
] | 2022-07-27 | [
[
"Mazzolini",
"Andrea",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M",
""
]
] | Chronic infections of the human immunodeficiency virus (HIV) create a very complex co-evolutionary process, where the virus tries to escape the continuously adapting host immune system. Quantitative details of this process are largely unknown and could help in disease treatment and vaccine development. Here we study a longitudinal dataset of ten HIV-infected people, where both the B-cell receptors and the virus are deeply sequenced. We focus on simple measures of turnover, which quantify how much the composition of the viral strains and the immune repertoire change between time points. At the single-patient level, the viral-host turnover rates do not show any statistically significant correlation, however they correlate if the information is aggregated across patients. In particular, we identify an anti-correlation: large changes in the viral pool composition come with small changes in the B-cell receptor repertoire. This result seems to contradict the naive expectation that when the virus mutates quickly, the immune repertoire needs to change to keep up. However, we show that the observed anti-correlation naturally emerges and can be understood in terms of simple population-genetics models. |
1712.00009 | Paulo Laerte Natti | T.M. Saita, P.L. Natti, E.R. Cirilo, N.M.L. Romeiro, M.A.C. Candezano,
R.B Acu\~na and L.C.G. Moreno | Numerical Simulation of Fecal Coliform Dynamics in Luruaco Lake,
Colombia | 13 pages, in Portuguese, 4 figures | TEMA, v.18, n.3, p.435-447, 2017 | 10.5540/tema.2017.018.03.0435 | null | q-bio.QM cs.CE math.NA q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Luruaco Lake located in the Department of Atl\'antico, Colombia, is
damaged by the discharge of untreated sewage, bringing risks to the health of
all who use its waters. The present study aims to perform the numerical
simulation of the concentration dynamics of fecal coliforms in the lake. The
simulation of the hydrodynamic flow is carried out by means of a
two-dimensional horizontal (2DH) model, given by a Navier-Stokes system. The
simulation of fecal coliform transport is described by a
convective-dispersive-reactive equation. These equations are solved numerically
by the Finite Difference Method (FDM) and the Marker and Cell (MAC) method, in
generalized coordinates. Regarding the construction of the computational mesh
of the Luruaco Lake, the cubic spline and multiblock methods were used. The
results obtained in the simulations allow a better understanding of the
dynamics of fecal coliforms in the Luruaco Lake, showing the more polluted
regions. They can also advise public agencies on identifying the emitters of
pollutants in the lake and on developing an optimal treatment for the recovery
of the polluted environment.
| [
{
"created": "Thu, 30 Nov 2017 13:19:47 GMT",
"version": "v1"
}
] | 2017-12-05 | [
[
"Saita",
"T. M.",
""
],
[
"Natti",
"P. L.",
""
],
[
"Cirilo",
"E. R.",
""
],
[
"Romeiro",
"N. M. L.",
""
],
[
"Candezano",
"M. A. C.",
""
],
[
"Acuña",
"R. B",
""
],
[
"Moreno",
"L. C. G.",
""
]
] | The Luruaco Lake located in the Department of Atl\'antico, Colombia, is damaged by the discharge of untreated sewage, bringing risks to the health of all who use its waters. The present study aims to perform the numerical simulation of the concentration dynamics of fecal coliforms in the lake. The simulation of the hydrodynamic flow is carried out by means of a two-dimensional horizontal (2DH) model, given by a Navier-Stokes system. The simulation of fecal coliform transport is described by a convective-dispersive-reactive equation. These equations are solved numerically by the Finite Difference Method (FDM) and the Marker and Cell (MAC) method, in generalized coordinates. Regarding the construction of the computational mesh of the Luruaco Lake, the cubic spline and multiblock methods were used. The results obtained in the simulations allow a better understanding of the dynamics of fecal coliforms in the Luruaco Lake, showing the more polluted regions. They can also advise public agencies on identifying the emitters of pollutants in the lake and on developing an optimal treatment for the recovery of the polluted environment.
1108.1484 | Song Xu | Song Xu, Shuyun Jiao, Pengyao Jiang, Bo Yuan, Ping Ao | Non-fixation in infinite potential | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Under the effects of strong genetic drift, it is highly probable to observe
gene fixation or loss in a population, shown by divergent probability density
functions, or infinite adaptive peaks on a landscape. It is then interesting to
ask what such infinite peaks imply, with or without combining other biological
factors (e.g. mutation and selection). We study the stochastic escape time from
the generated infinite adaptive peaks, and show that Kramers' classical escape
formula can be extended to the non-Gaussian distribution cases. The constructed
landscape provides a global description for the system's medium- and long-term
behaviors, breaking the constraints in previous methods.
| [
{
"created": "Sat, 6 Aug 2011 14:10:41 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Nov 2011 15:26:21 GMT",
"version": "v2"
},
{
"created": "Sat, 5 Nov 2011 01:48:54 GMT",
"version": "v3"
},
{
"created": "Mon, 9 Apr 2012 14:59:57 GMT",
"version": "v4"
},
{
"created": "Sat, 12 May 2012 02:14:16 GMT",
"version": "v5"
},
{
"created": "Fri, 26 Oct 2012 12:40:11 GMT",
"version": "v6"
}
] | 2012-10-29 | [
[
"Xu",
"Song",
""
],
[
"Jiao",
"Shuyun",
""
],
[
"Jiang",
"Pengyao",
""
],
[
"Yuan",
"Bo",
""
],
[
"Ao",
"Ping",
""
]
] | Under the effects of strong genetic drift, it is highly probable to observe gene fixation or loss in a population, shown by divergent probability density functions, or infinite adaptive peaks on a landscape. It is then interesting to ask what such infinite peaks imply, with or without combining other biological factors (e.g. mutation and selection). We study the stochastic escape time from the generated infinite adaptive peaks, and show that Kramers' classical escape formula can be extended to the non-Gaussian distribution cases. The constructed landscape provides a global description for the system's medium- and long-term behaviors, breaking the constraints in previous methods.
1210.3970 | Gianluca Martelloni | Gianluca Martelloni, Franco Bagnoli and Stefano Marsili Libelli | A dynamical population modeling of invasive species with reference to
the crayfish Procambarus Clarkii | null | null | null | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a discrete dynamical population modeling of invasive
species, with reference to the swamp crayfish Procambarus clarkii. Since this
species can cause environmental damage of various kinds, it is necessary to
evaluate its expected spread in not yet infested areas. A structured discrete model is
built, taking into account all biological information we were able to find,
including the environmental variability implemented by means of stochastic
parameters (coefficients of fertility, death, etc.). This model is based on a
structure with 7 age classes, i.e. a Leslie mathematical population modeling
type and it is calibrated with laboratory data provided by the Department of
Evolutionary Biology (DEB) of Florence (Italy). The model presents many
interesting aspects: the population has a high initial growth, then it
stabilizes similarly to the logistic growth, but then it exhibits oscillations
(a kind of limit-cycle attractor in the phase plane). The sensitivity analysis
shows a good resilience of the model and, for low values of reproductive female
fraction, the fluctuations may eventually lead to the extinction of the
species: this fact might be exploited as a controlling factor. Moreover, the
probability of extinction is evaluated with an inverse Gaussian that indicates a
high resilience of the species, confirmed by experimental data and field
observation: this species has diffused in Italy since 1989 and it has shown a
natural tendency to grow. Finally, the spatial mobility is introduced in the
model, simulating the movement of the crayfishes in a virtual lake of
elliptical form by means of simple kinematic rules encouraging the movement
towards the banks of the catchment (as it happens in reality) while a random
walk is imposed when the banks are reached.
| [
{
"created": "Mon, 15 Oct 2012 10:33:24 GMT",
"version": "v1"
}
] | 2012-10-16 | [
[
"Martelloni",
"Gianluca",
""
],
[
"Bagnoli",
"Franco",
""
],
[
"Libelli",
"Stefano Marsili",
""
]
] | In this paper we present a discrete dynamical population modeling of invasive species, with reference to the swamp crayfish Procambarus clarkii. Since this species can cause environmental damage of various kinds, it is necessary to evaluate its expected spread in not yet infested areas. A structured discrete model is built, taking into account all biological information we were able to find, including the environmental variability implemented by means of stochastic parameters (coefficients of fertility, death, etc.). This model is based on a structure with 7 age classes, i.e. a Leslie mathematical population modeling type and it is calibrated with laboratory data provided by the Department of Evolutionary Biology (DEB) of Florence (Italy). The model presents many interesting aspects: the population has a high initial growth, then it stabilizes similarly to the logistic growth, but then it exhibits oscillations (a kind of limit-cycle attractor in the phase plane). The sensitivity analysis shows a good resilience of the model and, for low values of reproductive female fraction, the fluctuations may eventually lead to the extinction of the species: this fact might be exploited as a controlling factor. Moreover, the probability of extinction is evaluated with an inverse Gaussian that indicates a high resilience of the species, confirmed by experimental data and field observation: this species has diffused in Italy since 1989 and it has shown a natural tendency to grow. Finally, the spatial mobility is introduced in the model, simulating the movement of the crayfishes in a virtual lake of elliptical form by means of simple kinematic rules encouraging the movement towards the banks of the catchment (as it happens in reality) while a random walk is imposed when the banks are reached.
2303.11056 | Michael Gilson | Chapin E. Cavender, David A. Case, Julian C.-H. Chen, Lillian T.
Chong, Daniel A. Keedy, Kresten Lindorff-Larsen, David L. Mobley, O. H.
Samuli Ollila, Chris Oostenbrink, Paul Robustelli, Vincent A. Voelz, Michael
E. Wall, David C. Wych, Michael K. Gilson | Structure-Based Experimental Datasets for Benchmarking of Protein
Simulation Force Fields | null | null | null | null | q-bio.BM physics.bio-ph physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | This review article provides an overview of structurally oriented,
experimental datasets that can be used to benchmark protein force fields,
focusing on data generated by nuclear magnetic resonance (NMR) spectroscopy and
room temperature (RT) protein crystallography. We discuss why these observables
are useful for assessing force field accuracy, how they can be calculated from
simulation trajectories, and statistical issues that arise when comparing
simulations with experiment. The target audience for this article is
computational researchers and trainees who develop, benchmark, or use protein
force fields for molecular simulations.
| [
{
"created": "Thu, 2 Mar 2023 14:34:56 GMT",
"version": "v1"
}
] | 2023-03-21 | [
[
"Cavender",
"Chapin E.",
""
],
[
"Case",
"David A.",
""
],
[
"Chen",
"Julian C. -H.",
""
],
[
"Chong",
"Lillian T.",
""
],
[
"Keedy",
"Daniel A.",
""
],
[
"Lindorff-Larsen",
"Kresten",
""
],
[
"Mobley",
"David L.",
""
],
[
"Ollila",
"O. H. Samuli",
""
],
[
"Oostenbrink",
"Chris",
""
],
[
"Robustelli",
"Paul",
""
],
[
"Voelz",
"Vincent A.",
""
],
[
"Wall",
"Michael E.",
""
],
[
"Wych",
"David C.",
""
],
[
"Gilson",
"Michael K.",
""
]
] | This review article provides an overview of structurally oriented, experimental datasets that can be used to benchmark protein force fields, focusing on data generated by nuclear magnetic resonance (NMR) spectroscopy and room temperature (RT) protein crystallography. We discuss why these observables are useful for assessing force field accuracy, how they can be calculated from simulation trajectories, and statistical issues that arise when comparing simulations with experiment. The target audience for this article is computational researchers and trainees who develop, benchmark, or use protein force fields for molecular simulations. |
2003.06194 | Larissa Terumi Arashiro | Larissa Terumi Arashiro, Neus Montero, Ivet Ferrer, Francisco Gabriel
Acien, Cintia Gomez, Marianna Garfi | Life Cycle Assessment of high rate algal ponds for wastewater treatment
and resource recovery | null | Science of the total environment 622-623, 1118-1130 (2018) | 10.1016/j.scitotenv.2017.12.051 | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The aim of this study was to assess the potential environmental impacts
associated with high rate algal ponds (HRAP) systems for wastewater treatment
and resource recovery in small communities. To this aim, a Life Cycle
Assessment (LCA) and an economic assessment were carried out evaluating two
alternatives: i) a HRAPs system for wastewater treatment where microalgal
biomass is valorised for energy recovery (biogas production); ii) a HRAPs
system for wastewater treatment where microalgal biomass is reused for
nutrients recovery (biofertiliser production). Additionally, both alternatives
were compared to a typical small-sized activated sludge system. The results
showed that HRAPs system coupled with biogas production appeared to be more
environmentally friendly than HRAPs system coupled with biofertiliser
production in the climate change, ozone layer depletion, photochemical oxidant
formation, and fossil depletion impact categories. Different climatic
conditions have strongly influenced the results obtained in the eutrophication
and metal depletion impact categories, with the HRAPs system located where warm
temperatures and high solar radiation are predominant showing lower impact. In
terms of costs, HRAPs systems seemed to be more economically feasible when
combined with biofertiliser production instead of biogas.
| [
{
"created": "Fri, 13 Mar 2020 10:46:43 GMT",
"version": "v1"
}
] | 2020-03-16 | [
[
"Arashiro",
"Larissa Terumi",
""
],
[
"Montero",
"Neus",
""
],
[
"Ferrer",
"Ivet",
""
],
[
"Acien",
"Francisco Gabriel",
""
],
[
"Gomez",
"Cintia",
""
],
[
"Garfi",
"Marianna",
""
]
] | The aim of this study was to assess the potential environmental impacts associated with high rate algal ponds (HRAP) systems for wastewater treatment and resource recovery in small communities. To this aim, a Life Cycle Assessment (LCA) and an economic assessment were carried out evaluating two alternatives: i) a HRAPs system for wastewater treatment where microalgal biomass is valorised for energy recovery (biogas production); ii) a HRAPs system for wastewater treatment where microalgal biomass is reused for nutrients recovery (biofertiliser production). Additionally, both alternatives were compared to a typical small-sized activated sludge system. The results showed that HRAPs system coupled with biogas production appeared to be more environmentally friendly than HRAPs system coupled with biofertiliser production in the climate change, ozone layer depletion, photochemical oxidant formation, and fossil depletion impact categories. Different climatic conditions have strongly influenced the results obtained in the eutrophication and metal depletion impact categories, with the HRAPs system located where warm temperatures and high solar radiation are predominant showing lower impact. In terms of costs, HRAPs systems seemed to be more economically feasible when combined with biofertiliser production instead of biogas. |
2310.12901 | Reka Albert | Jordan C. Rozum, Colin Campbell, Eli Newby, Fatemeh Sadat Fatemi
Nasrollahi, Reka Albert | Boolean Networks as Predictive Models of Emergent Biological Behaviors | Review, to appear in the Cambridge Elements series | null | null | null | q-bio.MN nlin.CG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Interacting biological systems at all organizational levels display emergent
behavior. Modeling these systems is made challenging by the number and variety
of biological components and interactions (from molecules in gene regulatory
networks to species in ecological networks) and the often-incomplete state of
system knowledge (e.g., the unknown values of kinetic parameters for
biochemical reactions). Boolean networks have emerged as a powerful tool for
modeling these systems. We provide a methodological overview of Boolean network
models of biological systems. After a brief introduction, we describe the
process of building, analyzing, and validating a Boolean model. We then present
the use of the model to make predictions about the system's response to
perturbations and about how to control (or at least influence) its behavior. We
emphasize the interplay between structural and dynamical properties of Boolean
networks and illustrate them in three case studies from disparate levels of
biological organization.
| [
{
"created": "Thu, 19 Oct 2023 16:53:08 GMT",
"version": "v1"
}
] | 2023-10-20 | [
[
"Rozum",
"Jordan C.",
""
],
[
"Campbell",
"Colin",
""
],
[
"Newby",
"Eli",
""
],
[
"Nasrollahi",
"Fatemeh Sadat Fatemi",
""
],
[
"Albert",
"Reka",
""
]
] | Interacting biological systems at all organizational levels display emergent behavior. Modeling these systems is made challenging by the number and variety of biological components and interactions (from molecules in gene regulatory networks to species in ecological networks) and the often-incomplete state of system knowledge (e.g., the unknown values of kinetic parameters for biochemical reactions). Boolean networks have emerged as a powerful tool for modeling these systems. We provide a methodological overview of Boolean network models of biological systems. After a brief introduction, we describe the process of building, analyzing, and validating a Boolean model. We then present the use of the model to make predictions about the system's response to perturbations and about how to control (or at least influence) its behavior. We emphasize the interplay between structural and dynamical properties of Boolean networks and illustrate them in three case studies from disparate levels of biological organization. |
0812.1310 | Vladimir Privman | Vladimir Privman, Mary A. Arugula, Jan Halamek, Marcos Pita, Evgeny
Katz | Network Analysis of Biochemical Logic for Noise Reduction and Stability:
A System of Three Coupled Enzymatic AND Gates | 31 pages, PDF | J. Phys. Chem. B 113, 5301--5310 (2009) | 10.1021/jp810743w | null | q-bio.MN cond-mat.soft q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop an approach aimed at optimizing the parameters of a network of
biochemical logic gates for reduction of the "analog" noise buildup.
Experiments for three coupled enzymatic AND gates are reported, illustrating
our procedure. Specifically, starch - one of the controlled network inputs - is
converted to maltose by beta-amylase. With the use of phosphate (another
controlled input), maltose phosphorylase then produces glucose. Finally,
nicotinamide adenine dinucleotide (NAD+) - the third controlled input - is
reduced under the action of glucose dehydrogenase to yield the optically
detected signal. Network functioning is analyzed by varying selective inputs
and fitting standardized few-parameters "response-surface" functions assumed
for each gate. This allows a certain probe of the individual gate quality, but
primarily yields information on the relative contribution of the gates to noise
amplification. The derived information is then used to modify our experimental
system to put it in a regime of a less noisy operation.
| [
{
"created": "Sat, 6 Dec 2008 20:05:08 GMT",
"version": "v1"
}
] | 2010-10-12 | [
[
"Privman",
"Vladimir",
""
],
[
"Arugula",
"Mary A.",
""
],
[
"Halamek",
"Jan",
""
],
[
"Pita",
"Marcos",
""
],
[
"Katz",
"Evgeny",
""
]
] | We develop an approach aimed at optimizing the parameters of a network of biochemical logic gates for reduction of the "analog" noise buildup. Experiments for three coupled enzymatic AND gates are reported, illustrating our procedure. Specifically, starch - one of the controlled network inputs - is converted to maltose by beta-amylase. With the use of phosphate (another controlled input), maltose phosphorylase then produces glucose. Finally, nicotinamide adenine dinucleotide (NAD+) - the third controlled input - is reduced under the action of glucose dehydrogenase to yield the optically detected signal. Network functioning is analyzed by varying selective inputs and fitting standardized few-parameters "response-surface" functions assumed for each gate. This allows a certain probe of the individual gate quality, but primarily yields information on the relative contribution of the gates to noise amplification. The derived information is then used to modify our experimental system to put it in a regime of a less noisy operation. |
2111.11880 | Patrick Sobetzko | Marc Teufel, Carlo A. Klein, Maurice Mager, Patrick Sobetzko | CRISPR SWAPnDROP -- A multifunctional system for genome editing and
large-scale interspecies gene transfer | null | null | 10.1038/s41467-022-30843-1 | null | q-bio.GN | http://creativecommons.org/publicdomain/zero/1.0/ | The need for diverse chromosomal modifications in biotechnology, synthetic
biology and basic research requires the development of new technologies. With
CRISPR SWAPnDROP, we extend the limits of genome editing to large-scale in-vivo
DNA transfer between bacterial species. Its modular platform approach
facilitates species specific adaptation to confer genome editing in various
species. In this study, we show the implementation of the CRISPR SWAPnDROP
concept for the model organism Escherichia coli and the currently fastest
growing and biotechnologically relevant organism Vibrio natriegens. We
demonstrate the excision, transfer and integration of 151kb chromosomal DNA
between E. coli strains and from E. coli to V. natriegens without size-limiting
intermediate DNA extraction. With the transfer of the E. coli MG1655 wild type
lac operon, we establish a functional lactose and galactose degradation pathway
in V. natriegens to extend its biotechnological spectrum. We also transfer the
E. coli DH5alpha lac operon and make V. natriegens capable of
alpha-complementation - a step towards an ultra-fast cloning strain.
Furthermore, CRISPR SWAPnDROP is designed to be the swiss army knife of genome
engineering. Its spectrum of application comprises scarless, marker-free,
iterative and parallel insertions and deletions, genome rearrangements, as well
as gene transfer between strains and across species. The modular character
facilitates DNA library applications and the recycling of standardized parts.
Its novel multi-color scarless co-selection system significantly improves
editing efficiency to 92% for single edits and 83% for quadruple edits and
provides visual quality controls throughout the assembly and editing process.
| [
{
"created": "Tue, 23 Nov 2021 13:47:07 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Teufel",
"Marc",
""
],
[
"Klein",
"Carlo A.",
""
],
[
"Mager",
"Maurice",
""
],
[
"Sobetzko",
"Patrick",
""
]
] | The need for diverse chromosomal modifications in biotechnology, synthetic biology and basic research requires the development of new technologies. With CRISPR SWAPnDROP, we extend the limits of genome editing to large-scale in-vivo DNA transfer between bacterial species. Its modular platform approach facilitates species specific adaptation to confer genome editing in various species. In this study, we show the implementation of the CRISPR SWAPnDROP concept for the model organism Escherichia coli and the currently fastest growing and biotechnologically relevant organism Vibrio natriegens. We demonstrate the excision, transfer and integration of 151kb chromosomal DNA between E. coli strains and from E. coli to V. natriegens without size-limiting intermediate DNA extraction. With the transfer of the E. coli MG1655 wild type lac operon, we establish a functional lactose and galactose degradation pathway in V. natriegens to extend its biotechnological spectrum. We also transfer the E. coli DH5alpha lac operon and make V. natriegens capable of alpha-complementation - a step towards an ultra-fast cloning strain. Furthermore, CRISPR SWAPnDROP is designed to be the swiss army knife of genome engineering. Its spectrum of application comprises scarless, marker-free, iterative and parallel insertions and deletions, genome rearrangements, as well as gene transfer between strains and across species. The modular character facilitates DNA library applications and the recycling of standardized parts. Its novel multi-color scarless co-selection system significantly improves editing efficiency to 92% for single edits and 83% for quadruple edits and provides visual quality controls throughout the assembly and editing process. |
q-bio/0406025 | Chi Zhang | Song Liu, Chi Zhang, Yaoqi Zhou | Unbound Protein-Protein Docking Selections by the DFIRE-based
Statistical Pair Potential | null | null | null | null | q-bio.BM q-bio.QM | null | A newly developed statistical pair potential based on Distance-scaled Finite
Ideal-gas REference (DFIRE) state is applied to unbound protein-protein docking
structure selections. The performance of the DFIRE energy function is compared
to those of the well-established ZDOCK energy scores and RosettaDock energy
function using the comprehensive decoy sets generated by ZDOCK and RosettaDock.
Despite significant differences in the functional forms and complexities of the
three energy scores, the differences in overall performance for docking
structure selections are small between DFIRE and ZDOCK2.3 and between DFIRE and
RosettaDock. This result is remarkable considering that a single-term DFIRE
energy function was originally designed for monomer proteins while
multiple-term energy functions of ZDOCK and RosettaDock were specifically
optimized for docking. This provides hope that the accuracy of the existing
energy functions for docking can be improved.
| [
{
"created": "Sat, 12 Jun 2004 21:00:35 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Oct 2004 17:38:18 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Liu",
"Song",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhou",
"Yaoqi",
""
]
] | A newly developed statistical pair potential based on Distance-scaled Finite Ideal-gas REference (DFIRE) state is applied to unbound protein-protein docking structure selections. The performance of the DFIRE energy function is compared to those of the well-established ZDOCK energy scores and RosettaDock energy function using the comprehensive decoy sets generated by ZDOCK and RosettaDock. Despite significant differences in the functional forms and complexities of the three energy scores, the differences in overall performance for docking structure selections are small between DFIRE and ZDOCK2.3 and between DFIRE and RosettaDock. This result is remarkable considering that a single-term DFIRE energy function was originally designed for monomer proteins while multiple-term energy functions of ZDOCK and RosettaDock were specifically optimized for docking. This provides hope that the accuracy of the existing energy functions for docking can be improved.
2201.05440 | Hsin-Po Wang | Hsin-Po Wang and Ryan Gabrys and Alexander Vardy | Tropical Group Testing | 25 pages, 20 figures. v2 fixes typos | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Polymerase chain reaction (PCR) testing is the gold standard for diagnosing
COVID-19. PCR amplifies the virus DNA 40 times to produce measurements of viral
loads that span seven orders of magnitude. Unfortunately, the outputs of these
tests are imprecise and therefore quantitative group testing methods, which
rely on precise measurements, are not applicable. Motivated by the
ever-increasing demand to identify individuals infected with SARS-CoV-2, we
propose a new model that leverages tropical arithmetic to characterize the PCR
testing process. Our proposed framework, termed tropical group testing,
overcomes existing limitations of quantitative group testing by allowing for
imprecise test measurements. In many cases, some of which are highlighted in
this work, tropical group testing is provably more powerful than traditional
binary group testing in that it requires fewer tests than classical approaches,
while additionally providing a mechanism to identify the viral load of each
infected individual. It is also empirically stronger than related works that
have attempted to combine PCR, quantitative group testing, and compressed
sensing.
| [
{
"created": "Wed, 12 Jan 2022 21:45:45 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jan 2022 20:33:34 GMT",
"version": "v2"
}
] | 2022-01-19 | [
[
"Wang",
"Hsin-Po",
""
],
[
"Gabrys",
"Ryan",
""
],
[
"Vardy",
"Alexander",
""
]
] | Polymerase chain reaction (PCR) testing is the gold standard for diagnosing COVID-19. PCR amplifies the virus DNA 40 times to produce measurements of viral loads that span seven orders of magnitude. Unfortunately, the outputs of these tests are imprecise and therefore quantitative group testing methods, which rely on precise measurements, are not applicable. Motivated by the ever-increasing demand to identify individuals infected with SARS-CoV-2, we propose a new model that leverages tropical arithmetic to characterize the PCR testing process. Our proposed framework, termed tropical group testing, overcomes existing limitations of quantitative group testing by allowing for imprecise test measurements. In many cases, some of which are highlighted in this work, tropical group testing is provably more powerful than traditional binary group testing in that it requires fewer tests than classical approaches, while additionally providing a mechanism to identify the viral load of each infected individual. It is also empirically stronger than related works that have attempted to combine PCR, quantitative group testing, and compressed sensing. |
2003.04096 | Francesca Romana Bertani | F. R. Bertani, L. Businaro, L. Gambacorta, A. Mencattin, D. Brenda, D.
Di Giuseppe, A. De Ninno, M. Solfrizzo, E. Martinelli, A. Gerardino | Optical detection of Aflatoxins B in grained almonds using fluorescence
spectroscopy and machine learning algorithms | null | null | null | null | q-bio.QM physics.data-an | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Aflatoxins are fungal metabolites extensively produced by many different
fungal species that may contaminate a wide range of agricultural food products.
They have been studied extensively because they are associated with various
chronic and acute diseases, especially immunosuppression and cancer, and their
presence in food is strictly monitored and regulated worldwide. Aflatoxin
detection and measurement relies mainly on chemical methods usually based on
chromatography approaches, and recently developed immunochemical based assays
that have advantages but also limitations, since these are expensive and
destructive techniques. Nondestructive, optical approaches are recently being
developed to assess presence of contamination in a cost and time effective way,
maintaining acceptable accuracy and reproducibility. In this paper, we present
the results obtained with a simple portable device for nondestructive
detection of aflatoxins in almonds. The presented approach is based on the
analysis of fluorescence spectra of slurried almonds under 375 nm wavelength
excitation. Experiments were conducted with almonds contaminated in the range
of 2.7-320.2 ng/g total aflatoxins B (AFB1 + AFB2) as determined by HPLC/FLD.
After applying pre-processing steps, spectral analysis was carried out by a
binary classification model based on an SVM algorithm. A majority vote procedure
was then performed on the classification results. In this way we could achieve,
as best result, a classification accuracy of 94% (and false negative rate 5%)
with a threshold set at 6.4 ng/g. These results illustrate the feasibility of
such an approach in the great challenge of aflatoxin detection for food and
feed safety.
| [
{
"created": "Thu, 20 Feb 2020 14:53:46 GMT",
"version": "v1"
}
] | 2020-03-10 | [
[
"Bertani",
"F. R.",
""
],
[
"Businaro",
"L.",
""
],
[
"Gambacorta",
"L.",
""
],
[
"Mencattin",
"A.",
""
],
[
"Brenda",
"D.",
""
],
[
"Di Giuseppe",
"D.",
""
],
[
"De Ninno",
"A.",
""
],
[
"Solfrizzo",
"M.",
""
],
[
"Martinelli",
"E.",
""
],
[
"Gerardino",
"A.",
""
]
] | Aflatoxins are fungal metabolites extensively produced by many different fungal species that may contaminate a wide range of agricultural food products. They have been studied extensively because they are associated with various chronic and acute diseases, especially immunosuppression and cancer, and their presence in food is strictly monitored and regulated worldwide. Aflatoxin detection and measurement relies mainly on chemical methods usually based on chromatography approaches, and recently developed immunochemical based assays that have advantages but also limitations, since these are expensive and destructive techniques. Nondestructive, optical approaches are recently being developed to assess presence of contamination in a cost and time effective way, maintaining acceptable accuracy and reproducibility. In this paper, we present the results obtained with a simple portable device for nondestructive detection of aflatoxins in almonds. The presented approach is based on the analysis of fluorescence spectra of slurried almonds under 375 nm wavelength excitation. Experiments were conducted with almonds contaminated in the range of 2.7-320.2 ng/g total aflatoxins B (AFB1 + AFB2) as determined by HPLC/FLD. After applying pre-processing steps, spectral analysis was carried out by a binary classification model based on an SVM algorithm. A majority vote procedure was then performed on the classification results. In this way we could achieve, as best result, a classification accuracy of 94% (and false negative rate 5%) with a threshold set at 6.4 ng/g. These results illustrate the feasibility of such an approach in the great challenge of aflatoxin detection for food and feed safety. |
q-bio/0610030 | Marta Casanellas | Marta Casanellas, Jesus Fernandez-Sanchez | Performance of a new invariants method on homogeneous and
non-homogeneous quartet trees | 12 pages and 4 pages of Supplementary Material included, 8 figures.
to appear in Molecular Biology and Evolution | null | null | null | q-bio.PE | null | An attempt to use phylogenetic invariants for tree reconstruction was made at
the end of the 80s and the beginning of the 90s by several authors (the initial
idea due to Lake and Cavender and Felsenstein in 1987). However, the efficiency
of methods based on invariants is still in doubt, probably because these
methods only used a few generators of the set of phylogenetic invariants. The
method studied in this paper was first introduced by Casanellas, Garcia and
Sullivant and it is the first method based on invariants that uses the whole
set of generators for DNA data. The simulation studies performed in this paper
prove that it is a very competitive and highly efficient phylogenetic
reconstruction method, especially for non-homogeneous models on phylogenetic
trees.
| [
{
"created": "Tue, 17 Oct 2006 14:34:01 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Casanellas",
"Marta",
""
],
[
"Fernandez-Sanchez",
"Jesus",
""
]
] | An attempt to use phylogenetic invariants for tree reconstruction was made at the end of the 80s and the beginning of the 90s by several authors (the initial idea due to Lake and Cavender and Felsenstein in 1987). However, the efficiency of methods based on invariants is still in doubt, probably because these methods only used a few generators of the set of phylogenetic invariants. The method studied in this paper was first introduced by Casanellas, Garcia and Sullivant and it is the first method based on invariants that uses the whole set of generators for DNA data. The simulation studies performed in this paper prove that it is a very competitive and highly efficient phylogenetic reconstruction method, especially for non-homogeneous models on phylogenetic trees. |
1103.1402 | Aleksandar Stojmirovi\'c | Aleksandar Stojmirovi\'c and Yi-Kuo Yu | ppiTrim: Constructing non-redundant and up-to-date interactomes | 21 pages, 2 figures, 6 tables, 8 supplementary tables. Minor
corrections of typos and similar | Database (Oxford). 2011:bar036 | 10.1093/database/bar036 | null | q-bio.MN | http://creativecommons.org/licenses/publicdomain/ | Robust advances in interactome analysis demand comprehensive, non-redundant
and consistently annotated datasets. By non-redundant, we mean that the
accounting of evidence for every interaction should be faithful: each
independent experimental support is counted exactly once, no more, no less.
While many interactions are shared among public repositories, none of them
contains the complete known interactome for any model organism. In addition,
the annotations of the same experimental result by different repositories often
disagree. This brings up the issue of which annotation to keep while
consolidating evidences that are the same. The iRefIndex database, including
interactions from most popular repositories with a standardized protein
nomenclature, represents a significant advance in all aspects, especially in
comprehensiveness. However, iRefIndex aims to maintain all
information/annotation from original sources and requires users to perform
additional processing to fully achieve the aforementioned goals.
To address issues with iRefIndex and to achieve our goals, we have developed
ppiTrim, a script that processes iRefIndex to produce non-redundant,
consistently annotated datasets of physical interactions. Our script proceeds
in three stages: mapping all interactants to gene identifiers and removing all
undesired raw interactions, deflating potentially expanded complexes, and
reconciling for each interaction the annotation labels among different source
databases. As an illustration, we have processed the three largest organismal
datasets: yeast, human and fruitfly. While ppiTrim can resolve most apparent
conflicts between different labelings, we also discovered some unresolvable
disagreements mostly resulting from different annotation policies among
repositories.
URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/ppiTrim.html
| [
{
"created": "Mon, 7 Mar 2011 22:47:55 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jun 2011 02:12:41 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Jul 2011 01:53:36 GMT",
"version": "v3"
}
] | 2011-10-25 | [
[
"Stojmirović",
"Aleksandar",
""
],
[
"Yu",
"Yi-Kuo",
""
]
] | Robust advances in interactome analysis demand comprehensive, non-redundant and consistently annotated datasets. By non-redundant, we mean that the accounting of evidence for every interaction should be faithful: each independent experimental support is counted exactly once, no more, no less. While many interactions are shared among public repositories, none of them contains the complete known interactome for any model organism. In addition, the annotations of the same experimental result by different repositories often disagree. This brings up the issue of which annotation to keep while consolidating evidences that are the same. The iRefIndex database, including interactions from most popular repositories with a standardized protein nomenclature, represents a significant advance in all aspects, especially in comprehensiveness. However, iRefIndex aims to maintain all information/annotation from original sources and requires users to perform additional processing to fully achieve the aforementioned goals. To address issues with iRefIndex and to achieve our goals, we have developed ppiTrim, a script that processes iRefIndex to produce non-redundant, consistently annotated datasets of physical interactions. Our script proceeds in three stages: mapping all interactants to gene identifiers and removing all undesired raw interactions, deflating potentially expanded complexes, and reconciling for each interaction the annotation labels among different source databases. As an illustration, we have processed the three largest organismal datasets: yeast, human and fruitfly. While ppiTrim can resolve most apparent conflicts between different labelings, we also discovered some unresolvable disagreements mostly resulting from different annotation policies among repositories. URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/ppiTrim.html |
2202.05145 | Xuan Lin | Xiaoqin Pan, Xuan Lin, Dongsheng Cao, Xiangxiang Zeng, Philip S. Yu,
Lifang He, Ruth Nussinov, Feixiong Cheng | Deep learning for drug repurposing: methods, databases, and applications | Accepted by WIREs Computational Molecular Science | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Drug development is time-consuming and expensive. Repurposing existing drugs
for new therapies is an attractive solution that accelerates drug development
at reduced experimental costs, specifically for Coronavirus Disease 2019
(COVID-19), an infectious disease caused by severe acute respiratory syndrome
coronavirus 2 (SARS-CoV-2). However, comprehensively obtaining and productively
integrating available knowledge and big biomedical data to effectively advance
deep learning models is still challenging for drug repurposing in other complex
diseases. In this review, we introduce guidelines on how to utilize deep
learning methodologies and tools for drug repurposing. We first summarize the
commonly used bioinformatics and pharmacogenomics databases for drug
repurposing. Next, we discuss recently developed sequence-based and graph-based
representation approaches as well as state-of-the-art deep learning-based
methods. Finally, we present applications of drug repurposing to fight the
COVID-19 pandemic, and outline its future challenges.
| [
{
"created": "Tue, 8 Feb 2022 09:42:08 GMT",
"version": "v1"
}
] | 2022-02-11 | [
[
"Pan",
"Xiaoqin",
""
],
[
"Lin",
"Xuan",
""
],
[
"Cao",
"Dongsheng",
""
],
[
"Zeng",
"Xiangxiang",
""
],
[
"Yu",
"Philip S.",
""
],
[
"He",
"Lifang",
""
],
[
"Nussinov",
"Ruth",
""
],
[
"Cheng",
"Feixiong",
""
]
] | Drug development is time-consuming and expensive. Repurposing existing drugs for new therapies is an attractive solution that accelerates drug development at reduced experimental costs, specifically for Coronavirus Disease 2019 (COVID-19), an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). However, comprehensively obtaining and productively integrating available knowledge and big biomedical data to effectively advance deep learning models is still challenging for drug repurposing in other complex diseases. In this review, we introduce guidelines on how to utilize deep learning methodologies and tools for drug repurposing. We first summarize the commonly used bioinformatics and pharmacogenomics databases for drug repurposing. Next, we discuss recently developed sequence-based and graph-based representation approaches as well as state-of-the-art deep learning-based methods. Finally, we present applications of drug repurposing to fight the COVID-19 pandemic, and outline its future challenges. |
2106.05186 | Madhavun Candadai | Madhavun Candadai | Information theoretic analysis of computational models as a tool to
understand the neural basis of behaviors | null | null | null | null | q-bio.NC cs.IT cs.NE math.IT | http://creativecommons.org/licenses/by/4.0/ | One of the greatest research challenges of this century is to understand the
neural basis for how behavior emerges in brain-body-environment systems. To
this end, research has flourished along several directions but has
predominantly focused on the brain. While there is an increasing acceptance
and focus on including the body and environment in studying the neural basis of
behavior, animal researchers are often limited by technology or tools.
Computational models provide an alternative framework within which one can
study model systems where ground-truth can be measured and interfered with.
These models act as a hypothesis generation framework that would in turn guide
experimentation. Furthermore, the ability to intervene as we please allows us
to conduct in-depth analysis of these models in a way that cannot be performed
in natural systems. For this purpose, information theory is emerging as a
powerful tool that can provide insights into the operation of these
brain-body-environment models. In this work, I provide an introduction, a
review and discussion to make a case for how information theoretic analysis of
computational models is a potent research methodology to help us better
understand the neural basis of behavior.
| [
{
"created": "Wed, 2 Jun 2021 02:08:18 GMT",
"version": "v1"
}
] | 2021-06-10 | [
[
"Candadai",
"Madhavun",
""
]
] | One of the greatest research challenges of this century is to understand the neural basis for how behavior emerges in brain-body-environment systems. To this end, research has flourished along several directions but has predominantly focused on the brain. While there is an increasing acceptance and focus on including the body and environment in studying the neural basis of behavior, animal researchers are often limited by technology or tools. Computational models provide an alternative framework within which one can study model systems where ground-truth can be measured and interfered with. These models act as a hypothesis generation framework that would in turn guide experimentation. Furthermore, the ability to intervene as we please allows us to conduct in-depth analysis of these models in a way that cannot be performed in natural systems. For this purpose, information theory is emerging as a powerful tool that can provide insights into the operation of these brain-body-environment models. In this work, I provide an introduction, a review and discussion to make a case for how information theoretic analysis of computational models is a potent research methodology to help us better understand the neural basis of behavior. |
2101.08841 | Robert Miller | Florian Rupprecht, S\"oren Enge, Kornelius Schmidt, Wei Gao, Clemens
Kirschbaum, Robert Miller | Automating LC-MS/MS mass chromatogram quantification. Wavelet transform
based peak detection and automated estimation of peak boundaries and
signal-to-noise ratio using signal processing methods | 20 pages, 8 figures | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by-sa/4.0/ | While there are many different methods for peak detection, no automatic
methods for marking peak boundaries to calculate area under the curve (AUC) and
signal-to-noise ratio (SNR) estimation exist. An algorithm for the automation
of liquid chromatography tandem mass spectrometry (LC-MS/MS) mass chromatogram
quantification was developed and validated. Continuous wavelet transformation
and other digital signal processing methods were used in a multi-step procedure
to calculate concentrations of six different analytes. To evaluate the
performance of the algorithm, the results of the manual quantification of 446
hair samples with 6 different steroid hormones by two experts were compared to
the algorithm results. The proposed approach of automating mass chromatogram
quantification is reliable and valid. The algorithm returns fewer
non-detectables than human raters. Based on the signal-to-noise ratio, human
non-detectables could
be correctly classified with a diagnostic performance of AUC = 0.95. The
algorithm presented here allows fast, automated, reliable, and valid
computational peak detection and quantification in LC-MS/MS.
| [
{
"created": "Thu, 21 Jan 2021 20:25:34 GMT",
"version": "v1"
}
] | 2021-01-25 | [
[
"Rupprecht",
"Florian",
""
],
[
"Enge",
"Sören",
""
],
[
"Schmidt",
"Kornelius",
""
],
[
"Gao",
"Wei",
""
],
[
"Kirschbaum",
"Clemens",
""
],
[
"Miller",
"Robert",
""
]
] | While there are many different methods for peak detection, no automatic methods for marking peak boundaries to calculate area under the curve (AUC) and signal-to-noise ratio (SNR) estimation exist. An algorithm for the automation of liquid chromatography tandem mass spectrometry (LC-MS/MS) mass chromatogram quantification was developed and validated. Continuous wavelet transformation and other digital signal processing methods were used in a multi-step procedure to calculate concentrations of six different analytes. To evaluate the performance of the algorithm, the results of the manual quantification of 446 hair samples with 6 different steroid hormones by two experts were compared to the algorithm results. The proposed approach of automating mass chromatogram quantification is reliable and valid. The algorithm returns fewer non-detectables than human raters. Based on the signal-to-noise ratio, human non-detectables could be correctly classified with a diagnostic performance of AUC = 0.95. The algorithm presented here allows fast, automated, reliable, and valid computational peak detection and quantification in LC-MS/MS. |
1604.00832 | Henry Tuckwell | Henry C. Tuckwell | Density independent population growth with random survival | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simplified model for the growth of a population is studied in which random
effects arise because reproducing individuals have a certain probability of
surviving until the next breeding season and hence contributing to the next
generation. The resulting Markov chain is that of a branching process with a
known generating function. For parameter values leading to non-extinction, an
approximating diffusion process is obtained for the population size.
Results are obtained for the number of offspring $r_h$ and the initial
population size $N_0$ required to guarantee a given probability of survival. For
large probabilities of survival, increasing the initial population size from
$N_0=1$ to $N_0=2$ gives a very large decrease in required fecundity but
further increases in $N_0$ lead to much smaller decreases in $r_h$. For small
probabilities (< 0.2) of survival the decreases in required fecundity when
$N_0$ changes from 1 to 2 are very small. The calculations have relevance to
the survival of populations derived from colonizing individuals which could be
any of a variety of organisms.
| [
{
"created": "Mon, 4 Apr 2016 12:22:30 GMT",
"version": "v1"
}
] | 2016-04-05 | [
[
"Tuckwell",
"Henry C.",
""
]
] | A simplified model for the growth of a population is studied in which random effects arise because reproducing individuals have a certain probability of surviving until the next breeding season and hence contributing to the next generation. The resulting Markov chain is that of a branching process with a known generating function. For parameter values leading to non-extinction, an approximating diffusion process is obtained for the population size. Results are obtained for the number of offspring $r_h$ and the initial population size $N_0$ required to guarantee a given probability of survival. For large probabilities of survival, increasing the initial population size from $N_0=1$ to $N_0=2$ gives a very large decrease in required fecundity but further increases in $N_0$ lead to much smaller decreases in $r_h$. For small probabilities (< 0.2) of survival the decreases in required fecundity when $N_0$ changes from 1 to 2 are very small. The calculations have relevance to the survival of populations derived from colonizing individuals which could be any of a variety of organisms. |
2208.06073 | Xiangzhe Kong | Xiangzhe Kong, Wenbing Huang, Yang Liu | Conditional Antibody Design as 3D Equivariant Graph Translation | Accepted to ICLR 2023 as oral presentation. Outstanding paper
honorable mentions | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antibody design is valuable for therapeutic usage and biological research.
Existing deep-learning-based methods encounter several key issues: 1)
incomplete context for Complementarity-Determining Regions (CDRs) generation;
2) incapability of capturing the entire 3D geometry of the input structure; 3)
inefficient prediction of the CDR sequences in an autoregressive manner. In
this paper, we propose Multi-channel Equivariant Attention Network (MEAN) to
co-design 1D sequences and 3D structures of CDRs. To be specific, MEAN
formulates antibody design as a conditional graph translation problem by
importing extra components including the target antigen and the light chain of
the antibody. Then, MEAN resorts to E(3)-equivariant message passing along with
a proposed attention mechanism to better capture the geometrical correlation
between different components. Finally, it outputs both the 1D sequences and 3D
structure via a multi-round progressive full-shot scheme, which achieves higher
efficiency and precision than previous autoregressive approaches. Our method
significantly surpasses state-of-the-art models in sequence and structure
modeling, antigen-binding CDR design, and binding affinity optimization.
Specifically, the relative improvement to baselines is about 23% in
antigen-binding CDR design and 34% for affinity optimization.
| [
{
"created": "Fri, 12 Aug 2022 01:00:59 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Sep 2022 12:53:10 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Dec 2022 13:17:13 GMT",
"version": "v3"
},
{
"created": "Fri, 3 Feb 2023 03:10:59 GMT",
"version": "v4"
},
{
"created": "Sat, 18 Feb 2023 06:00:52 GMT",
"version": "v5"
},
{
"created": "Thu, 30 Mar 2023 12:30:46 GMT",
"version": "v6"
}
] | 2023-03-31 | [
[
"Kong",
"Xiangzhe",
""
],
[
"Huang",
"Wenbing",
""
],
[
"Liu",
"Yang",
""
]
] | Antibody design is valuable for therapeutic usage and biological research. Existing deep-learning-based methods encounter several key issues: 1) incomplete context for Complementarity-Determining Regions (CDRs) generation; 2) incapability of capturing the entire 3D geometry of the input structure; 3) inefficient prediction of the CDR sequences in an autoregressive manner. In this paper, we propose Multi-channel Equivariant Attention Network (MEAN) to co-design 1D sequences and 3D structures of CDRs. To be specific, MEAN formulates antibody design as a conditional graph translation problem by importing extra components including the target antigen and the light chain of the antibody. Then, MEAN resorts to E(3)-equivariant message passing along with a proposed attention mechanism to better capture the geometrical correlation between different components. Finally, it outputs both the 1D sequences and 3D structure via a multi-round progressive full-shot scheme, which achieves higher efficiency and precision than previous autoregressive approaches. Our method significantly surpasses state-of-the-art models in sequence and structure modeling, antigen-binding CDR design, and binding affinity optimization. Specifically, the relative improvement to baselines is about 23% in antigen-binding CDR design and 34% for affinity optimization. |
2104.03140 | Fabrizio Pittorino | Marco Stucchi, Fabrizio Pittorino, Matteo di Volo, Alessandro Vezzani,
Raffaella Burioni | Order symmetry breaking and broad distribution of events in spiking
neural networks with continuous membrane potential | 10 pages, 9 figures; to appear in Chaos, Solitons & Fractals | null | 10.1016/j.chaos.2021.110946 | null | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an exactly integrable version of the well-known leaky
integrate-and-fire (LIF) model, with continuous membrane potential at the
spiking event, the c-LIF. We investigate the dynamical regimes of a fully
connected network of excitatory c-LIF neurons in the presence of short-term
synaptic plasticity. By varying the coupling strength among neurons, we show
that a complex chaotic dynamics arises, characterized by scale-free avalanches.
The origin of this phenomenon in the c-LIF can be related to the order symmetry
breaking in neurons' spike times, which corresponds to the onset of a broad
activity distribution. Our analysis uncovers a general mechanism through which
networks of simple neurons can be attracted to a complex basin in the phase
space.
| [
{
"created": "Wed, 7 Apr 2021 14:14:12 GMT",
"version": "v1"
}
] | 2021-06-02 | [
[
"Stucchi",
"Marco",
""
],
[
"Pittorino",
"Fabrizio",
""
],
[
"di Volo",
"Matteo",
""
],
[
"Vezzani",
"Alessandro",
""
],
[
"Burioni",
"Raffaella",
""
]
] | We introduce an exactly integrable version of the well-known leaky integrate-and-fire (LIF) model, with continuous membrane potential at the spiking event, the c-LIF. We investigate the dynamical regimes of a fully connected network of excitatory c-LIF neurons in the presence of short-term synaptic plasticity. By varying the coupling strength among neurons, we show that a complex chaotic dynamics arises, characterized by scale-free avalanches. The origin of this phenomenon in the c-LIF can be related to the order symmetry breaking in neurons' spike times, which corresponds to the onset of a broad activity distribution. Our analysis uncovers a general mechanism through which networks of simple neurons can be attracted to a complex basin in the phase space. |
2307.01491 | Ahmed BaHammam | Abeer Alasmari, Abdulrahman Al-Khalifah, Ahmed BaHammam, Hesham
Alodah, Ahmad Almnaizel, Noura Alshiban, Maha Alhussain | Ramadan Fasting Model Exerts Hepatoprotective, Anti-obesity, and
Anti-Hyperlipidemic Effects in an Experimentally-induced Nonalcoholic Fatty
Liver in Rats | 33 pages, 4 figures, 1 Table, pre-proof | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: The epidemic of nonalcoholic fatty liver disease (NAFLD) and its
metabolic effects present a serious public health concern. We hypothesized that
the Ramadan fasting model (RFM), which involves fasting from dawn to dusk for a
month, could provide potential therapeutic benefits and mitigate NAFLD.
Accordingly, we aimed to validate this hypothesis using obese male rats.
Methods: Rats were split into two groups (n = 24 per group), and they were
given either a standard (S) or high-fat diet (HFD) for 12 weeks. During the
last four weeks of the study period, both S- and HFD-fed rats were subdivided
into eight groups to assess the effect of RFM with/without training (T) or
glucose administration (G) on the lipid profile, liver enzymes, and liver
structure (n=6/group). Results: The HFD+RFM groups exhibited a significantly
lower final body weight than that of the HFDC group. Serum cholesterol,
low-density lipoprotein, and triglyceride levels were significantly lower in
the HFD+RFM, HFD+RFM+T, and HFD+RFM+G groups than those in the HFDC group.
Compared with the HFD-fed group, all groups had improved serum high-density
lipoprotein levels. Furthermore, HFD groups subjected to RFM had reduced serum
levels of aspartate transaminase and alanine transaminase compared with those
of the HFD-fed group. Moreover, liver histology improved in rats
subjected to RFM compared with that of HFD-fed rats, which exhibited macro and
micro fat droplet accumulation. Conclusion: RFM can induce positive metabolic
changes and improve alterations associated with NAFLD, including weight gain,
lipid profile, liver enzymes, and hepatic steatosis.
| [
{
"created": "Tue, 4 Jul 2023 05:51:07 GMT",
"version": "v1"
}
] | 2023-07-06 | [
[
"Alasmari",
"Abeer",
""
],
[
"Al-Khalifah",
"Abdulrahman",
""
],
[
"BaHammam",
"Ahmed",
""
],
[
"Alodah",
"Hesham",
""
],
[
"Almnaizel",
"Ahmad",
""
],
[
"Alshiban",
"Noura",
""
],
[
"Alhussain",
"Maha",
""
]
] | Background: The epidemic of nonalcoholic fatty liver disease (NAFLD) and its metabolic effects present a serious public health concern. We hypothesized that the Ramadan fasting model (RFM), which involves fasting from dawn to dusk for a month, could provide potential therapeutic benefits and mitigate NAFLD. Accordingly, we aimed to validate this hypothesis using obese male rats. Methods: Rats were split into two groups (n = 24 per group), and they were given either a standard (S) or high-fat diet (HFD) for 12 weeks. During the last four weeks of the study period, both S- and HFD-fed rats were subdivided into eight groups to assess the effect of RFM with/without training (T) or glucose administration (G) on the lipid profile, liver enzymes, and liver structure (n=6/group). Results: The HFD+RFM groups exhibited a significantly lower final body weight than that of the HFDC group. Serum cholesterol, low-density lipoprotein, and triglyceride levels were significantly lower in the HFD+RFM, HFD+RFM+T, and HFD+RFM+G groups than those in the HFDC group. Compared with the HFD-fed group, all groups had improved serum high-density lipoprotein levels. Furthermore, HFD groups subjected to RFM had reduced serum levels of aspartate transaminase and alanine transaminase compared with those of the HFD-fed group. Moreover, liver histology improved in rats subjected to RFM compared with that of HFD-fed rats, which exhibited macro and micro fat droplet accumulation. Conclusion: RFM can induce positive metabolic changes and improve alterations associated with NAFLD, including weight gain, lipid profile, liver enzymes, and hepatic steatosis. |
1402.2584 | Peter Thomas PhD | David F. Anderson and Bard Ermentrout and Peter J. Thomas | Stochastic Representations of Ion Channel Kinetics and Exact Stochastic
Simulation of Neuronal Dynamics | 39 pages, 6 figures, appendix with XPP and Matlab code | Journal of Computational Neuroscience: Volume 38, Issue 1 (2015),
Page 67-82 | 10.1007/s10827-014-0528-2 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we provide two representations for stochastic ion channel
kinetics, and compare the performance of exact simulation with a commonly used
numerical approximation strategy. The first representation we present is a
random time change representation, popularized by Thomas Kurtz, with the second
being analogous to a "Gillespie" representation. Exact stochastic algorithms
are provided for the different representations, which are preferable to either
(a) fixed time step or (b) piecewise constant propensity algorithms, which
still appear in the literature. As examples, we provide versions of the exact
algorithms for the Morris-Lecar conductance based model, and detail the error
induced, both in a weak and a strong sense, by the use of approximate
algorithms on this model. We include ready-to-use implementations of the random
time change algorithm in both XPP and Matlab. Finally, through the
consideration of parametric sensitivity analysis, we show how the
representations presented here are useful in the development of further
computational methods. The general representations and simulation strategies
provided here are known in other parts of the sciences, but less so in the
present setting.
| [
{
"created": "Tue, 11 Feb 2014 18:05:25 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Sep 2014 04:08:23 GMT",
"version": "v2"
},
{
"created": "Tue, 11 Nov 2014 21:37:42 GMT",
"version": "v3"
}
] | 2015-01-20 | [
[
"Anderson",
"David F.",
""
],
[
"Ermentrout",
"Bard",
""
],
[
"Thomas",
"Peter J.",
""
]
] | In this paper we provide two representations for stochastic ion channel kinetics, and compare the performance of exact simulation with a commonly used numerical approximation strategy. The first representation we present is a random time change representation, popularized by Thomas Kurtz, with the second being analogous to a "Gillespie" representation. Exact stochastic algorithms are provided for the different representations, which are preferable to either (a) fixed time step or (b) piecewise constant propensity algorithms, which still appear in the literature. As examples, we provide versions of the exact algorithms for the Morris-Lecar conductance based model, and detail the error induced, both in a weak and a strong sense, by the use of approximate algorithms on this model. We include ready-to-use implementations of the random time change algorithm in both XPP and Matlab. Finally, through the consideration of parametric sensitivity analysis, we show how the representations presented here are useful in the development of further computational methods. The general representations and simulation strategies provided here are known in other parts of the sciences, but less so in the present setting. |
1707.03489 | Michael Meehan Dr | Michael T. Meehan, Daniel G. Cocks, Johannes M\"uller, Emma S. McBryde | Global stability properties of renewal epidemic models | 11 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the global dynamics of a general Kermack-McKendrick-type
epidemic model formulated in terms of a system of renewal equations.
Specifically, we consider a renewal model for which both the force of infection
and the infected removal rates are arbitrary functions of the infection age,
$\tau$, and use the direct Lyapunov method to establish the global asymptotic
stability of the equilibrium solutions. In particular, we show that the basic
reproduction number, $R_0$, represents a sharp threshold parameter such that
for $R_0\leq 1$, the infection-free equilibrium is globally asymptotically
stable; whereas the endemic equilibrium becomes globally asymptotically stable
when $R_0 > 1$, i.e. when it exists.
| [
{
"created": "Tue, 11 Jul 2017 23:13:39 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Oct 2018 01:20:21 GMT",
"version": "v2"
}
] | 2018-10-04 | [
[
"Meehan",
"Michael T.",
""
],
[
"Cocks",
"Daniel G.",
""
],
[
"Müller",
"Johannes",
""
],
[
"McBryde",
"Emma S.",
""
]
] | We investigate the global dynamics of a general Kermack-McKendrick-type epidemic model formulated in terms of a system of renewal equations. Specifically, we consider a renewal model for which both the force of infection and the infected removal rates are arbitrary functions of the infection age, $\tau$, and use the direct Lyapunov method to establish the global asymptotic stability of the equilibrium solutions. In particular, we show that the basic reproduction number, $R_0$, represents a sharp threshold parameter such that for $R_0\leq 1$, the infection-free equilibrium is globally asymptotically stable; whereas the endemic equilibrium becomes globally asymptotically stable when $R_0 > 1$, i.e. when it exists. |
1411.1448 | Kamran Kaveh | Kamran Kaveh, Venkata S. K. Manem, Mohammad Kohandel, Siv
Sivaloganathan | Modeling Age-Dependent Radiation-Induced Second Cancer Risks and
Estimation of Mutation Rate: An Evolutionary Approach | 24 pages, 16 figures, appears in Rad. Env. BioPhys 2014 | null | null | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the survival rate of cancer patients has significantly increased due
to advances in anti-cancer therapeutics, one of the major side effects of these
therapies, particularly radiotherapy, is the potential manifestation of
radiation-induced secondary malignancies. In this work, a novel evolutionary
stochastic model is introduced that couples short-term formalism (during
radiotherapy) and long-term formalism (post treatment). This framework is used
to estimate the risks of second cancer as a function of spontaneous background
and radiation-induced mutation rates of normal and pre-malignant cells. By
fitting the model to available clinical data for spontaneous background risk
together with data of Hodgkin lymphoma survivors (for various organs), the
second cancer mutation rate is estimated. The model predicts a significant
increase in mutation rate for some cancer types, which may be a sign of genomic
instability. Finally, it is shown that the model results are in agreement with
the measured results for excess relative risk (ERR) as a function of exposure
age, and that the model predicts a negative correlation of ERR with increase in
attained age. This novel approach can be used to analyze several radiotherapy
protocols in current clinical practice, and to forecast the second cancer risks
over time for individual patients.
| [
{
"created": "Wed, 5 Nov 2014 23:29:15 GMT",
"version": "v1"
}
] | 2014-11-07 | [
[
"Kaveh",
"Kamran",
""
],
[
"Manem",
"Venkata S. K.",
""
],
[
"Kohandel",
"Mohammad",
""
],
[
"Sivaloganathan",
"Siv",
""
]
] | Although the survival rate of cancer patients has significantly increased due to advances in anti-cancer therapeutics, one of the major side effects of these therapies, particularly radiotherapy, is the potential manifestation of radiation-induced secondary malignancies. In this work, a novel evolutionary stochastic model is introduced that couples short-term formalism (during radiotherapy) and long-term formalism (post treatment). This framework is used to estimate the risks of second cancer as a function of spontaneous background and radiation-induced mutation rates of normal and pre-malignant cells. By fitting the model to available clinical data for spontaneous background risk together with data of Hodgkin lymphoma survivors (for various organs), the second cancer mutation rate is estimated. The model predicts a significant increase in mutation rate for some cancer types, which may be a sign of genomic instability. Finally, it is shown that the model results are in agreement with the measured results for excess relative risk (ERR) as a function of exposure age, and that the model predicts a negative correlation of ERR with increase in attained age. This novel approach can be used to analyze several radiotherapy protocols in current clinical practice, and to forecast the second cancer risks over time for individual patients. |
2312.02203 | Weikang Qiu | Weikang Qiu, Huangrui Chu, Selena Wang, Haolan Zuo, Xiaoxiao Li, Yize
Zhao, Rex Ying | Learning High-Order Relationships of Brain Regions | Accepted at ICML 2024, Camera Ready Version | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering reliable and informative relationships among brain regions from
functional magnetic resonance imaging (fMRI) signals is essential in phenotypic
predictions. Most of the current methods fail to accurately characterize those
interactions because they only focus on pairwise connections and overlook the
high-order relationships of brain regions. We propose that these high-order
relationships should be maximally informative and minimally redundant (MIMR).
However, identifying such high-order relationships is challenging and
under-explored due to the exponential search space and the absence of a
tractable objective. In response to this gap, we propose a novel method named
HYBRID which aims to extract MIMR high-order relationships from fMRI data.
HYBRID employs a CONSTRUCTOR to identify hyperedge structures, and a WEIGHTER
to compute a weight for each hyperedge, which avoids searching in exponential
space. HYBRID achieves the MIMR objective through an innovative information
bottleneck framework named multi-head drop-bottleneck with theoretical
guarantees. Our comprehensive experiments demonstrate the effectiveness of our
model. Our model outperforms the state-of-the-art predictive model by an
average of 11.2%, regarding the quality of hyperedges measured by CPM, a
standard protocol for studying brain connections.
| [
{
"created": "Sat, 2 Dec 2023 21:39:05 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Feb 2024 22:18:36 GMT",
"version": "v2"
},
{
"created": "Sat, 8 Jun 2024 22:32:45 GMT",
"version": "v3"
}
] | 2024-06-11 | [
[
"Qiu",
"Weikang",
""
],
[
"Chu",
"Huangrui",
""
],
[
"Wang",
"Selena",
""
],
[
"Zuo",
"Haolan",
""
],
[
"Li",
"Xiaoxiao",
""
],
[
"Zhao",
"Yize",
""
],
[
"Ying",
"Rex",
""
]
] | Discovering reliable and informative relationships among brain regions from functional magnetic resonance imaging (fMRI) signals is essential in phenotypic predictions. Most of the current methods fail to accurately characterize those interactions because they only focus on pairwise connections and overlook the high-order relationships of brain regions. We propose that these high-order relationships should be maximally informative and minimally redundant (MIMR). However, identifying such high-order relationships is challenging and under-explored due to the exponential search space and the absence of a tractable objective. In response to this gap, we propose a novel method named HYBRID which aims to extract MIMR high-order relationships from fMRI data. HYBRID employs a CONSTRUCTOR to identify hyperedge structures, and a WEIGHTER to compute a weight for each hyperedge, which avoids searching in exponential space. HYBRID achieves the MIMR objective through an innovative information bottleneck framework named multi-head drop-bottleneck with theoretical guarantees. Our comprehensive experiments demonstrate the effectiveness of our model. Our model outperforms the state-of-the-art predictive model by an average of 11.2%, regarding the quality of hyperedges measured by CPM, a standard protocol for studying brain connections. |
2309.06297 | Lorenzo Pavesi | Ilya Auslender and Lorenzo Pavesi | Reservoir Computing Model For Multi-Electrode Electrophysiological Data
Analysis | null | 2023 IEEE Conference on Computational Intelligence in
Bioinformatics and Computational Biology (CIBCB), Eindhoven, Netherlands,
2023 | 10.1109/CIBCB56990.2023.10264895 | null | q-bio.QM cs.ET physics.bio-ph q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper we present a computational model which decodes the
spatio-temporal data from electro-physiological measurements of neuronal
networks and reconstructs the network structure on a macroscopic domain,
representing the connectivity between neuronal units. The model is based on a
reservoir computing network (RCN) approach, where experimental data is used as
training and validation data. Consequently, the model can be used to study the
functionality of different neuronal cultures and simulate the network response
to external stimuli.
| [
{
"created": "Tue, 12 Sep 2023 15:00:48 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Auslender",
"Ilya",
""
],
[
"Pavesi",
"Lorenzo",
""
]
] | In this paper we present a computational model which decodes the spatio-temporal data from electro-physiological measurements of neuronal networks and reconstructs the network structure on a macroscopic domain, representing the connectivity between neuronal units. The model is based on a reservoir computing network (RCN) approach, where experimental data is used as training and validation data. Consequently, the model can be used to study the functionality of different neuronal cultures and simulate the network response to external stimuli. |
2206.12231 | Frederic Bastian | Anne Niknejad, Christopher J. Mungall, David Osumi-Sutherland, Marc
Robinson-Rechavi, Frederic B. Bastian | Creation and unification of development and life stage ontologies for
animals | 2 pages, 1 table, accepted at Bio-Ontologies COSI ISMB 2022
conference. AN developed species-specific ontologies, links to Uberon. CJM
and DOS developed Uberon life-stage ontology, ontology design principles. DOS
developed fly ontology. MRR contributed work supervision. FBB supervised the
work, built the integration to Uberon, and wrote the paper. CJM, DOS and FBB
maintain the repository | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | With the new era of genomics, an increasing number of animal species are
amenable to large-scale data generation. This has led to the emergence of new
multi-species ontologies to annotate and organize these data. While anatomy and
cell types are well covered by these efforts, information regarding development
and life stages is also critical in the annotation of animal data. Its lack can
hamper our ability to answer comparative biology questions and to interpret
functional results. We present here a collection of development and life stage
ontologies for 21 animal species, and their merger into a common multi-species
ontology. This work has allowed the integration and comparison of
transcriptomics data in 52 animal species.
| [
{
"created": "Fri, 24 Jun 2022 11:57:26 GMT",
"version": "v1"
}
] | 2022-06-27 | [
[
"Niknejad",
"Anne",
""
],
[
"Mungall",
"Christopher J.",
""
],
[
"Osumi-Sutherland",
"David",
""
],
[
"Robinson-Rechavi",
"Marc",
""
],
[
"Bastian",
"Frederic B.",
""
]
] | With the new era of genomics, an increasing number of animal species are amenable to large-scale data generation. This has led to the emergence of new multi-species ontologies to annotate and organize these data. While anatomy and cell types are well covered by these efforts, information regarding development and life stages is also critical in the annotation of animal data. Its lack can hamper our ability to answer comparative biology questions and to interpret functional results. We present here a collection of development and life stage ontologies for 21 animal species, and their merger into a common multi-species ontology. This work has allowed the integration and comparison of transcriptomics data in 52 animal species. |
2203.14201 | Chung-Hao Lee | Chung-Hao Lee | Cryopreservation of seeds of blue waterlily (Nymphaea caerulea) using
glutathione adding plant vitrification solution, PVS+ | This publish was a mistake | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Nymphaea caerulea is a valuable freshwater aquatic plant, not only because of
its ornamental value but also its extractions for chemical and medical uses. It
is necessary to store its seeds as backup resources. Cryopreservation is the
reliable, cost-effective method for long-term preservation of botanical genetic
banks, especially for recalcitrant plants. In this study, we demonstrated that
due to its inability to tolerate desiccation and low temperature, N. caerulea
is recalcitrant. Since viability was lost before 6 months of storage, N.
caerulea seeds are not suitable for long-term storage by traditional methods;
the only way to store N. caerulea seeds long-term is cryopreservation. However,
Plant Vitrification Solution 3 (PVS3; 50% w/v sucrose, 50% w/v glycerol), a
commonly used plant vitrification solution, was ineffective for
cryopreservation of N. caerulea seeds. The maximum survival rate of
cryopreserved seeds treated with PVS3 was only 23%. Oxidative stress induced by
the accumulation of reactive oxygen species (ROS) within the seeds was the
cause of this inefficiency. We developed a new plant vitrification solution
(PVS+) by adding glutathione (GSH) to PVS3. PVS+ rescued the inefficiency by
decreasing ROS accumulation and raised the survival rate of cryopreserved
seeds to 97%. Our results show that some inefficiencies of plant tissue
cryopreservation may be caused by ROS-induced oxidative stress. Antioxidants
can reduce this oxidative stress and improve seed survival
after cryopreservation. In conclusion, N. caerulea seeds were identified as
recalcitrant and successfully cryopreserved. Suppressing ROS accumulation
during cryopreservation may be a potential strategy to improve survival rate of
recalcitrant seeds after cryopreservation.
| [
{
"created": "Sun, 27 Mar 2022 03:49:43 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Mar 2022 15:07:29 GMT",
"version": "v2"
}
] | 2022-03-31 | [
[
"Lee",
"Chung-Hao",
""
]
] | Nymphaea caerulea is a valuable freshwater aquatic plant, not only because of its ornamental value but also its extractions for chemical and medical uses. It is necessary to store its seeds as backup resources. Cryopreservation is the reliable, cost-effective method for long-term preservation of botanical genetic banks, especially for recalcitrant plants. In this study, we demonstrated that due to its inability to tolerate desiccation and low temperature, N. caerulea is recalcitrant. Since viability was lost before 6 months of storage, N. caerulea seeds are not suitable for long-term storage by traditional methods; the only way to store N. caerulea seeds long-term is cryopreservation. However, Plant Vitrification Solution 3 (PVS3; 50% w/v sucrose, 50% w/v glycerol), a commonly used plant vitrification solution, was ineffective for cryopreservation of N. caerulea seeds. The maximum survival rate of cryopreserved seeds treated with PVS3 was only 23%. Oxidative stress induced by the accumulation of reactive oxygen species (ROS) within the seeds was the cause of this inefficiency. We developed a new plant vitrification solution (PVS+) by adding glutathione (GSH) to PVS3. PVS+ rescued the inefficiency by decreasing ROS accumulation and raised the survival rate of cryopreserved seeds to 97%. Our results show that some inefficiencies of plant tissue cryopreservation may be caused by ROS-induced oxidative stress. Antioxidants can reduce this oxidative stress and improve seed survival after cryopreservation. In conclusion, N. caerulea seeds were identified as recalcitrant and successfully cryopreserved. Suppressing ROS accumulation during cryopreservation may be a potential strategy to improve survival rate of recalcitrant seeds after cryopreservation. |
0901.0066 | Christopher Wylie | C. Scott Wylie, Cheol-Min Ghim, David A. Kessler, Herbert Levine | Supplementary information for "The fixation probability of rare mutators
in finite asexual populations" | Online supplementary information for arXiv:0801.4812. 14 pages, 3
figures, 1 table | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This supplementary information contains detailed derivations, comparison to
experiment, and discussion of other miscellaneous issues omitted from the main
text.
| [
{
"created": "Wed, 31 Dec 2008 06:43:30 GMT",
"version": "v1"
}
] | 2009-01-05 | [
[
"Wylie",
"C. Scott",
""
],
[
"Ghim",
"Cheol-Min",
""
],
[
"Kessler",
"David A.",
""
],
[
"Levine",
"Herbert",
""
]
] | This supplementary information contains detailed derivations, comparison to experiment, and discussion of other miscellaneous issues omitted from the main text. |
2109.00931 | Alexander Krau{\ss} | Alexander Krau{\ss}, Thilo Gross and Barbara Drossel | Master Stability Functions for metacommunities with two types of
habitats | Submitted to Physical Review E | null | 10.1103/PhysRevE.105.044310 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Current questions in ecology revolve around instabilities in the dynamics on
spatial networks and particularly the effect of node heterogeneity. We extend
the Master Stability Function formalism to inhomogeneous biregular networks
having two types of spatial nodes. Notably, this class of systems also allows
the investigation of certain types of dynamics on higher-order networks.
Combined with the Generalized Modelling approach to study the linear stability
of steady states, this is a powerful tool to numerically assess the stability of
large ensembles of systems. We analyze the stability of ecological
metacommunities with two distinct types of habitats analytically and
numerically in order to identify several sets of conditions under which the
dynamics can become stabilized by dispersal. Our analytical approach allows
general insights into stabilizing and destabilizing effects in metapopulations.
Specifically, we show that maladaptive dispersal may be stable under certain
conditions.
| [
{
"created": "Thu, 2 Sep 2021 13:19:08 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jan 2022 17:42:45 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Jan 2022 19:23:15 GMT",
"version": "v3"
}
] | 2022-04-27 | [
[
"Krauß",
"Alexander",
""
],
[
"Gross",
"Thilo",
""
],
[
"Drossel",
"Barbara",
""
]
] | Current questions in ecology revolve around instabilities in the dynamics on spatial networks and particularly the effect of node heterogeneity. We extend the Master Stability Function formalism to inhomogeneous biregular networks having two types of spatial nodes. Notably, this class of systems also allows the investigation of certain types of dynamics on higher-order networks. Combined with the Generalized Modelling approach to study the linear stability of steady states, this is a powerful tool to numerically assess the stability of large ensembles of systems. We analyze the stability of ecological metacommunities with two distinct types of habitats analytically and numerically in order to identify several sets of conditions under which the dynamics can become stabilized by dispersal. Our analytical approach allows general insights into stabilizing and destabilizing effects in metapopulations. Specifically, we show that maladaptive dispersal may be stable under certain conditions. |
1606.01936 | Marco Polin | Rapha\"el Jeanneret, Matteo Contino and Marco Polin | A brief introduction to the model microswimmer {\it Chlamydomonas
reinhardtii} | 16 pages, 7 figures. To be published as part of EPJ ST | null | 10.1140/epjst/e2016-60065-3 | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The unicellular biflagellate green alga {\it Chlamydomonas reinhardtii} has
been an important model system in biology for decades, and in recent years it
has started to attract growing attention also within the biophysics community.
Here we provide a concise review of some of the aspects of {\it Chlamydomonas}
biology and biophysics most immediately relevant to physicists that might be
interested in starting to work with this versatile microorganism.
| [
{
"created": "Mon, 6 Jun 2016 20:48:10 GMT",
"version": "v1"
}
] | 2016-11-23 | [
[
"Jeanneret",
"Raphaël",
""
],
[
"Contino",
"Matteo",
""
],
[
"Polin",
"Marco",
""
]
] | The unicellular biflagellate green alga {\it Chlamydomonas reinhardtii} has been an important model system in biology for decades, and in recent years it has started to attract growing attention also within the biophysics community. Here we provide a concise review of some of the aspects of {\it Chlamydomonas} biology and biophysics most immediately relevant to physicists that might be interested in starting to work with this versatile microorganism. |
2103.02339 | Omar Chehab | Omar Chehab, Alexandre Defossez, Jean-Christophe Loiseau, Alexandre
Gramfort, Jean-Remi King | Deep Recurrent Encoder: A scalable end-to-end network to model brain
signals | null | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how the brain responds to sensory inputs is challenging: brain
recordings are partial, noisy, and high dimensional; they vary across sessions
and subjects and they capture highly nonlinear dynamics. These challenges have
led the community to develop a variety of preprocessing and analytical (almost
exclusively linear) methods, each designed to tackle one of these issues.
Instead, we propose to address these challenges through a specific end-to-end
deep learning architecture, trained to predict the brain responses of multiple
subjects at once. We successfully test this approach on a large cohort of
magnetoencephalography (MEG) recordings acquired during a one-hour reading
task. Our Deep Recurrent Encoding (DRE) architecture reliably predicts MEG
responses to words with a three-fold improvement over classic linear methods.
To overcome the notorious issue of interpretability of deep learning, we
describe a simple variable importance analysis. When applied to DRE, this
method recovers the expected evoked responses to word length and word
frequency. The quantitative improvement of the present deep learning approach
paves the way to better understand the nonlinear dynamics of brain activity
from large datasets.
| [
{
"created": "Wed, 3 Mar 2021 11:39:17 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 07:59:33 GMT",
"version": "v2"
},
{
"created": "Fri, 30 Sep 2022 08:34:43 GMT",
"version": "v3"
}
] | 2022-10-03 | [
[
"Chehab",
"Omar",
""
],
[
"Defossez",
"Alexandre",
""
],
[
"Loiseau",
"Jean-Christophe",
""
],
[
"Gramfort",
"Alexandre",
""
],
[
"King",
"Jean-Remi",
""
]
] | Understanding how the brain responds to sensory inputs is challenging: brain recordings are partial, noisy, and high dimensional; they vary across sessions and subjects and they capture highly nonlinear dynamics. These challenges have led the community to develop a variety of preprocessing and analytical (almost exclusively linear) methods, each designed to tackle one of these issues. Instead, we propose to address these challenges through a specific end-to-end deep learning architecture, trained to predict the brain responses of multiple subjects at once. We successfully test this approach on a large cohort of magnetoencephalography (MEG) recordings acquired during a one-hour reading task. Our Deep Recurrent Encoding (DRE) architecture reliably predicts MEG responses to words with a three-fold improvement over classic linear methods. To overcome the notorious issue of interpretability of deep learning, we describe a simple variable importance analysis. When applied to DRE, this method recovers the expected evoked responses to word length and word frequency. The quantitative improvement of the present deep learning approach paves the way to better understand the nonlinear dynamics of brain activity from large datasets. |
1510.00115 | Louxin Zhang | Andreas D. M. Gunawan and Louxin Zhang | Bounding the Size of a Network Defined By Visibility Property | 23 pages, 9 figures | null | null | null | q-bio.PE cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic networks are mathematical structures for modeling and
visualization of reticulation processes in the study of evolution. Galled
networks, reticulation visible networks, nearly-stable networks and
stable-child networks are the four classes of phylogenetic networks that were
recently introduced to study the topological and algorithmic aspects of
phylogenetic networks. We prove the following results.
(1) A binary galled network with n leaves has at most 2(n-1) reticulation
nodes. (2) A binary nearly-stable network with n leaves has at most 3(n-1)
reticulation nodes. (3) A binary stable-child network with n leaves has at most
7(n-1) reticulation nodes.
| [
{
"created": "Thu, 1 Oct 2015 06:11:15 GMT",
"version": "v1"
}
] | 2015-10-02 | [
[
"Gunawan",
"Andreas D. M.",
""
],
[
"Zhang",
"Louxin",
""
]
] | Phylogenetic networks are mathematical structures for modeling and visualization of reticulation processes in the study of evolution. Galled networks, reticulation visible networks, nearly-stable networks and stable-child networks are the four classes of phylogenetic networks that were recently introduced to study the topological and algorithmic aspects of phylogenetic networks. We prove the following results. (1) A binary galled network with n leaves has at most 2(n-1) reticulation nodes. (2) A binary nearly-stable network with n leaves has at most 3(n-1) reticulation nodes. (3) A binary stable-child network with n leaves has at most 7(n-1) reticulation nodes. |
2102.04882 | Alexander Siegenfeld | Pratyush K. Kollepara, Alexander F. Siegenfeld, Nassim Nicholas Taleb,
Yaneer Bar-Yam | Unmasking the mask studies: why the effectiveness of surgical masks in
preventing respiratory infections has been underestimated | New version with minor changes and some edits for clarity | Journal of Travel Medicine 28 (7), taab144 (2021) | 10.1093/jtm/taab144 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Face masks have been widely used as a protective measure against COVID-19.
However, pre-pandemic empirical studies have produced mixed statistical results
on the effectiveness of masks against respiratory viruses. The implications of
the studies' recognized limitations have not been quantitatively and
statistically analyzed, leading to confusion regarding the effectiveness of
masks. Such confusion may have contributed to organizations such as the WHO and
CDC initially not recommending that the general public wear masks. Here we show
that when the adherence to mask-usage guidelines is taken into account, the
empirical evidence indicates that masks prevent disease transmission: all
studies we analyzed that did not find surgical masks to be effective were
under-powered to such an extent that even if masks were 100% effective, the
studies in question would still have been unlikely to find a statistically
significant effect. We also provide a framework for understanding the effect of
masks on the probability of infection for single and repeated exposures. The
framework demonstrates that more frequently wearing a mask provides
super-linearly compounding protection, as does both the susceptible and
infected individual wearing a mask. This work shows (1) that both theoretical
and empirical evidence is consistent with masks protecting against respiratory
infections and (2) that nonlinear effects and statistical considerations
regarding the percentage of exposures for which masks are worn must be taken
into account when designing empirical studies and interpreting their results.
| [
{
"created": "Mon, 8 Feb 2021 08:19:52 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 18:43:08 GMT",
"version": "v2"
}
] | 2022-11-24 | [
[
"Kollepara",
"Pratyush K.",
""
],
[
"Siegenfeld",
"Alexander F.",
""
],
[
"Taleb",
"Nassim Nicholas",
""
],
[
"Bar-Yam",
"Yaneer",
""
]
] | Face masks have been widely used as a protective measure against COVID-19. However, pre-pandemic empirical studies have produced mixed statistical results on the effectiveness of masks against respiratory viruses. The implications of the studies' recognized limitations have not been quantitatively and statistically analyzed, leading to confusion regarding the effectiveness of masks. Such confusion may have contributed to organizations such as the WHO and CDC initially not recommending that the general public wear masks. Here we show that when the adherence to mask-usage guidelines is taken into account, the empirical evidence indicates that masks prevent disease transmission: all studies we analyzed that did not find surgical masks to be effective were under-powered to such an extent that even if masks were 100% effective, the studies in question would still have been unlikely to find a statistically significant effect. We also provide a framework for understanding the effect of masks on the probability of infection for single and repeated exposures. The framework demonstrates that more frequently wearing a mask provides super-linearly compounding protection, as does both the susceptible and infected individual wearing a mask. This work shows (1) that both theoretical and empirical evidence is consistent with masks protecting against respiratory infections and (2) that nonlinear effects and statistical considerations regarding the percentage of exposures for which masks are worn must be taken into account when designing empirical studies and interpreting their results. |
2402.00014 | Alexey Melnikov | Matvei Anoshin, Asel Sagingalieva, Christopher Mansell, Vishal Shete,
Markus Pflitsch, and Alexey Melnikov | Hybrid quantum cycle generative adversarial network for small molecule
generation | 11 pages, 6 figures, 3 tables | null | null | null | q-bio.BM cs.ET cs.LG physics.bio-ph quant-ph | http://creativecommons.org/licenses/by/4.0/ | The contemporary drug design process demands considerable time and resources
to develop each new compound entering the market. Generating small molecules is
a pivotal aspect of drug discovery, essential for developing innovative
pharmaceuticals. Uniqueness, validity, diversity, druglikeliness,
synthesizability, and solubility molecular pharmacokinetic properties, however,
are yet to be maximized. This work introduces several new generative
adversarial network models based on engineering integration of parametrized
quantum circuits into known molecular generative adversarial networks. The
introduced machine learning models incorporate a new multi-parameter reward
function grounded in reinforcement learning principles. Through extensive
experimentation on benchmark drug design datasets, QM9 and PC9, the introduced
models are shown to outperform scores achieved previously. Most prominently,
the new scores indicate an increase of up to 30% in the druglikeness
quantitative estimation. The new hybrid quantum machine learning algorithms, as
well as the achieved scores of pharmacokinetic properties, contribute to the
development of fast and accurate drug discovery processes.
| [
{
"created": "Thu, 28 Dec 2023 14:10:26 GMT",
"version": "v1"
}
] | 2024-02-02 | [
[
"Anoshin",
"Matvei",
""
],
[
"Sagingalieva",
"Asel",
""
],
[
"Mansell",
"Christopher",
""
],
[
"Shete",
"Vishal",
""
],
[
"Pflitsch",
"Markus",
""
],
[
"Melnikov",
"Alexey",
""
]
] | The contemporary drug design process demands considerable time and resources to develop each new compound entering the market. Generating small molecules is a pivotal aspect of drug discovery, essential for developing innovative pharmaceuticals. Uniqueness, validity, diversity, druglikeliness, synthesizability, and solubility molecular pharmacokinetic properties, however, are yet to be maximized. This work introduces several new generative adversarial network models based on engineering integration of parametrized quantum circuits into known molecular generative adversarial networks. The introduced machine learning models incorporate a new multi-parameter reward function grounded in reinforcement learning principles. Through extensive experimentation on benchmark drug design datasets, QM9 and PC9, the introduced models are shown to outperform scores achieved previously. Most prominently, the new scores indicate an increase of up to 30% in the druglikeness quantitative estimation. The new hybrid quantum machine learning algorithms, as well as the achieved scores of pharmacokinetic properties, contribute to the development of fast and accurate drug discovery processes. |
1401.4122 | Carsten Allefeld | Carsten Allefeld, John-Dylan Haynes | Searchlight-based multi-voxel pattern analysis of fMRI by
cross-validated MANOVA | null | NeuroImage, 89:345-357, 2014 | 10.1016/j.neuroimage.2013.11.043 | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-voxel pattern analysis (MVPA) is a fruitful and increasingly popular
complement to traditional univariate methods of analyzing neuroimaging data. We
propose to replace the standard 'decoding' approach to searchlight-based MVPA,
measuring the performance of a classifier by its accuracy, with a method based
on the multivariate form of the general linear model. Following the
well-established methodology of multivariate analysis of variance (MANOVA), we
define a measure that directly characterizes the structure of multi-voxel data,
the pattern distinctness $D$. Our measure is related to standard multivariate
statistics, but we apply cross-validation to obtain an unbiased estimate of its
population value, independent of the amount of data or its partitioning into
'training' and 'test' sets. The estimate $\hat D$ can therefore serve not only
as a test statistic, but as an interpretable measure of multivariate effect
size. The pattern distinctness generalizes the Mahalanobis distance to an
arbitrary number of classes, but also the case where there are no classes of
trials because the design is described by parametric regressors. It is defined
for arbitrary estimable contrasts, including main effects (pattern differences)
and interactions (pattern changes). In this way, our approach makes the full
analytical power of complex factorial designs known from univariate fMRI
analyses available to MVPA studies. Moreover, we show how the results of a
factorial analysis can be used to obtain a measure of pattern stability, the
equivalent of 'cross-decoding'.
| [
{
"created": "Thu, 16 Jan 2014 18:44:28 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Feb 2014 15:05:10 GMT",
"version": "v2"
}
] | 2014-02-10 | [
[
"Allefeld",
"Carsten",
""
],
[
"Haynes",
"John-Dylan",
""
]
] | Multi-voxel pattern analysis (MVPA) is a fruitful and increasingly popular complement to traditional univariate methods of analyzing neuroimaging data. We propose to replace the standard 'decoding' approach to searchlight-based MVPA, measuring the performance of a classifier by its accuracy, with a method based on the multivariate form of the general linear model. Following the well-established methodology of multivariate analysis of variance (MANOVA), we define a measure that directly characterizes the structure of multi-voxel data, the pattern distinctness $D$. Our measure is related to standard multivariate statistics, but we apply cross-validation to obtain an unbiased estimate of its population value, independent of the amount of data or its partitioning into 'training' and 'test' sets. The estimate $\hat D$ can therefore serve not only as a test statistic, but as an interpretable measure of multivariate effect size. The pattern distinctness generalizes the Mahalanobis distance to an arbitrary number of classes, but also the case where there are no classes of trials because the design is described by parametric regressors. It is defined for arbitrary estimable contrasts, including main effects (pattern differences) and interactions (pattern changes). In this way, our approach makes the full analytical power of complex factorial designs known from univariate fMRI analyses available to MVPA studies. Moreover, we show how the results of a factorial analysis can be used to obtain a measure of pattern stability, the equivalent of 'cross-decoding'. |
2406.16911 | Javier Garc\'ia Ciudad | Javier Garc\'ia Ciudad, Morten M{\o}rup, Birgitte Rahbek Kornum and
Alexander Neergaard Zahid | Evaluating the Influence of Temporal Context on Automatic Mouse Sleep
Staging through the Application of Human Models | Accepted for publication in the 46th Annual International Conference
of the IEEE Engineering in Medicine and Biology Society (2024) | null | null | null | q-bio.NC cs.AI cs.CV cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In human sleep staging models, augmenting the temporal context of the input
to the range of tens of minutes has recently demonstrated performance
improvement. In contrast, the temporal context of mouse sleep staging models is
typically in the order of tens of seconds. While long-term time patterns are
less clear in mouse sleep, increasing the temporal context further than that of
the current mouse sleep staging models might still result in a performance
increase, given that the current methods only model very short term patterns.
In this study, we examine the influence of increasing the temporal context in
mouse sleep staging up to 15 minutes in three mouse cohorts using two recent
and high-performing human sleep staging models that account for long-term
dependencies. These are compared to two prominent mouse sleep staging models
that use a local context of 12 s and 20 s, respectively. An increase in context
up to 28 s is observed to have a positive impact on sleep stage classification
performance, especially in REM sleep. However, the impact is limited for longer
context windows. One of the human sleep scoring models, L-SeqSleepNet,
outperforms both mouse models in all cohorts. This suggests that mouse sleep
staging can benefit from more temporal context than currently used.
| [
{
"created": "Thu, 6 Jun 2024 10:07:19 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Ciudad",
"Javier García",
""
],
[
"Mørup",
"Morten",
""
],
[
"Kornum",
"Birgitte Rahbek",
""
],
[
"Zahid",
"Alexander Neergaard",
""
]
] | In human sleep staging models, augmenting the temporal context of the input to the range of tens of minutes has recently demonstrated performance improvement. In contrast, the temporal context of mouse sleep staging models is typically in the order of tens of seconds. While long-term time patterns are less clear in mouse sleep, increasing the temporal context further than that of the current mouse sleep staging models might still result in a performance increase, given that the current methods only model very short term patterns. In this study, we examine the influence of increasing the temporal context in mouse sleep staging up to 15 minutes in three mouse cohorts using two recent and high-performing human sleep staging models that account for long-term dependencies. These are compared to two prominent mouse sleep staging models that use a local context of 12 s and 20 s, respectively. An increase in context up to 28 s is observed to have a positive impact on sleep stage classification performance, especially in REM sleep. However, the impact is limited for longer context windows. One of the human sleep scoring models, L-SeqSleepNet, outperforms both mouse models in all cohorts. This suggests that mouse sleep staging can benefit from more temporal context than currently used. |
2201.00817 | Eduardo R. Miranda Prof | Eduardo Reck Miranda, Satvik Venkatesh, Jose D. Mart{\i}n-Guerrero,
Carlos Hernani-Morales, Lucas Lamata, Enrique Solano | An approach to interfacing the brain with quantum computers: practical
steps and caveats | Journal pre-submission draft. Revision pending | null | null | null | q-bio.NC cs.ET | http://creativecommons.org/licenses/by/4.0/ | We report on the first proof-of-concept system demonstrating how one can
control a qubit with mental activity. We developed a method to encode neural
correlates of mental activity as instructions for a quantum computer. Brain
signals are detected utilizing electrodes placed on the scalp of a person, who
learns how to produce the required mental activity to issue instructions to
rotate and measure a qubit. Currently, our proof-of-concept runs on a software
simulation of a quantum computer. At the time of writing, available quantum
computing hardware and brain activity sensing technology are not sufficiently
developed for real-time control of quantum states with the brain. But we are
one step closer to interfacing the brain with real quantum machines, as
improvements in hardware technology at both fronts become available in time to
come. The paper ends with a discussion on some of the challenging problems that
need to be addressed before we can interface the brain with quantum hardware.
| [
{
"created": "Tue, 4 Jan 2022 08:00:16 GMT",
"version": "v1"
}
] | 2022-01-05 | [
[
"Miranda",
"Eduardo Reck",
""
],
[
"Venkatesh",
"Satvik",
""
],
[
    "Martín-Guerrero",
"Jose D.",
""
],
[
"Hernani-Morales",
"Carlos",
""
],
[
"Lamata",
"Lucas",
""
],
[
"Solano",
"Enrique",
""
]
] | We report on the first proof-of-concept system demonstrating how one can control a qubit with mental activity. We developed a method to encode neural correlates of mental activity as instructions for a quantum computer. Brain signals are detected utilizing electrodes placed on the scalp of a person, who learns how to produce the required mental activity to issue instructions to rotate and measure a qubit. Currently, our proof-of-concept runs on a software simulation of a quantum computer. At the time of writing, available quantum computing hardware and brain activity sensing technology are not sufficiently developed for real-time control of quantum states with the brain. But we are one step closer to interfacing the brain with real quantum machines, as improvements in hardware technology at both fronts become available in time to come. The paper ends with a discussion on some of the challenging problems that need to be addressed before we can interface the brain with quantum hardware. |
2201.07539 | Camille Gontier | Camille Gontier, Simone Carlo Surace, Igor Delvendahl, Martin M\"uller
and Jean-Pascal Pfister | Efficient Sampling-Based Bayesian Active Learning for synaptic
characterization | Major review after submission: - Change of title - Add biological
recordings | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Bayesian Active Learning (BAL) is an efficient framework for learning the
parameters of a model, in which input stimuli are selected to maximize the
mutual information between the observations and the unknown parameters.
However, the applicability of BAL to experiments is limited as it requires
performing high-dimensional integrations and optimizations in real time:
current methods are either too time consuming, or only applicable to specific
models. Here, we propose an Efficient Sampling-Based Bayesian Active Learning
(ESB-BAL) framework, which is efficient enough to be used in real-time
biological experiments. We apply our method to the problem of estimating the
parameters of a chemical synapse from the postsynaptic responses to evoked
presynaptic action potentials. Using synthetic data and synaptic whole-cell
patch-clamp recordings, we show that our method can improve the precision of
model-based inferences, thereby paving the way towards more systematic and
efficient experimental designs in physiology.
| [
{
"created": "Wed, 19 Jan 2022 11:33:29 GMT",
"version": "v1"
},
{
"created": "Tue, 24 May 2022 17:53:14 GMT",
"version": "v2"
},
{
"created": "Wed, 30 Nov 2022 17:04:14 GMT",
"version": "v3"
}
] | 2022-12-01 | [
[
"Gontier",
"Camille",
""
],
[
"Surace",
"Simone Carlo",
""
],
[
"Delvendahl",
"Igor",
""
],
[
"Müller",
"Martin",
""
],
[
"Pfister",
"Jean-Pascal",
""
]
] | Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited as it requires performing high-dimensional integrations and optimizations in real time: current methods are either too time consuming, or only applicable to specific models. Here, we propose an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments. We apply our method to the problem of estimating the parameters of a chemical synapse from the postsynaptic responses to evoked presynaptic action potentials. Using synthetic data and synaptic whole-cell patch-clamp recordings, we show that our method can improve the precision of model-based inferences, thereby paving the way towards more systematic and efficient experimental designs in physiology. |
2101.03762 | Thierry Douki | Marie Roser (CIBEST), David B\'eal (CIBEST), Camille Eldin (CIBEST),
Leslie Gudimard (CIBEST), Fanny Caffin, Fanny Gros-D\'esormeaux (IRBA),
Daniel L\'eon\c{c}o, Fran\c{c}ois Fenaille, Christophe Junot, Christophe
Pi\'erard (IRBA), Thierry Douki (CIBEST) | Glutathione conjugates of the mercapturic acid pathway and guanine
adduct as biomarkers of exposure to CEES, a sulfur mustard analog | Analytical and Bioanalytical Chemistry, Springer Verlag, 2020 | null | 10.1007/s00216-020-03096-4 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sulfur mustard (SM), a chemical warfare agent, is a strong alkylating
compound that readily reacts with numerous biomolecules. The goal of the
present work was to define and validate new biomarkers of exposure to SM that
could be easily accessible in urine or plasma. Because investigations using SM
are prohibited by the Organization for the Prohibition of Chemical Weapons, we
worked with 2-chloroethyl ethyl sulfide (CEES), a monofunctional analog of SM.
We developed an ultra-high-pressure liquid chromatography - tandem mass
spectrometry approach (UHPLC-MS/MS) to the conjugate of CEES to glutathione and
two of its metabolites, the cysteine and the N-acetyl-cysteine conjugates. The
N7-guanine adduct of CEES (N7Gua-CEES) was also targeted. After synthesizing
the specific biomarkers, a solid phase extraction protocol and a UHPLC-MS/MS
method with isotopic dilution were optimized. We were able to quantify
N7Gua-CEES in the DNA of HaCaT keratinocytes and of explants of human skin
exposed to CEES. N7Gua-CEES was also detected in the culture medium of these
two models, together with the glutathione and the cysteine conjugates. In
contrast, the N-acetyl-cysteine conjugate was not detected. The method was then
applied to plasma from mice cutaneously exposed to CEES. All four markers could
be detected. Our present results thus validate both the analytical technique
and the biological relevance of new, easily quantifiable biomarkers of exposure
to CEES. Because CEES behaves very similarly to SM, the results are promising
for application to this toxic of interest.
| [
{
"created": "Mon, 11 Jan 2021 08:45:30 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Roser",
"Marie",
"",
"CIBEST"
],
[
"Béal",
"David",
"",
"CIBEST"
],
[
"Eldin",
"Camille",
"",
"CIBEST"
],
[
"Gudimard",
"Leslie",
"",
"CIBEST"
],
[
"Caffin",
"Fanny",
"",
"IRBA"
],
[
"Gros-Désormeaux",
"Fanny",
"",
"IRBA"
],
[
"Léonço",
"Daniel",
"",
"IRBA"
],
[
"Fenaille",
"François",
"",
"IRBA"
],
[
"Junot",
"Christophe",
"",
"IRBA"
],
[
"Piérard",
"Christophe",
"",
"IRBA"
],
[
"Douki",
"Thierry",
"",
"CIBEST"
]
] | Sulfur mustard (SM), a chemical warfare agent, is a strong alkylating compound that readily reacts with numerous biomolecules. The goal of the present work was to define and validate new biomarkers of exposure to SM that could be easily accessible in urine or plasma. Because investigations using SM are prohibited by the Organization for the Prohibition of Chemical Weapons, we worked with 2-chloroethyl ethyl sulfide (CEES), a monofunctional analog of SM. We developed an ultra-high-pressure liquid chromatography - tandem mass spectrometry approach (UHPLC-MS/MS) to the conjugate of CEES to glutathione and two of its metabolites, the cysteine and the N-acetyl-cysteine conjugates. The N7-guanine adduct of CEES (N7Gua-CEES) was also targeted. After synthesizing the specific biomarkers, a solid phase extraction protocol and a UHPLC-MS/MS method with isotopic dilution were optimized. We were able to quantify N7Gua-CEES in the DNA of HaCaT keratinocytes and of explants of human skin exposed to CEES. N7Gua-CEES was also detected in the culture medium of these two models, together with the glutathione and the cysteine conjugates. In contrast, the N-acetyl-cysteine conjugate was not detected. The method was then applied to plasma from mice cutaneously exposed to CEES. All four markers could be detected. Our present results thus validate both the analytical technique and the biological relevance of new, easily quantifiable biomarkers of exposure to CEES. Because CEES behaves very similarly to SM, the results are promising for application to this toxic of interest. |