Dataset schema (each record below lists these fields, in this order):

id: stringlengths, 9–13
submitter: stringlengths, 4–48
authors: stringlengths, 4–9.62k
title: stringlengths, 4–343
comments: stringlengths, 2–480
journal-ref: stringlengths, 9–309
doi: stringlengths, 12–138
report-no: stringclasses, 277 values
categories: stringlengths, 8–87
license: stringclasses, 9 values
orig_abstract: stringlengths, 27–3.76k
versions: listlengths, 1–15
update_date: stringlengths, 10–10
authors_parsed: listlengths, 1–147
abstract: stringlengths, 24–3.75k
2207.04262
Yihan Wu
Yihan Wu, Min Xia, Xiuzhu Wang, Yangsong Zhang
Schizophrenia detection based on EEG using Recurrent Auto-Encoder framework
null
null
null
null
q-bio.NC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Schizophrenia (SZ) is a serious mental disorder that can severely affect a patient's quality of life. In recent years, the detection of SZ based on deep learning (DL) using electroencephalogram (EEG) signals has received increasing attention. In this paper, we propose an end-to-end recurrent auto-encoder (RAE) model to detect SZ. In the RAE model, the raw data are input into one auto-encoder block, and the reconstructed data are recurrently fed back into the same block. The code extracted by the auto-encoder block simultaneously serves as the input of a classifier block that discriminates SZ patients from healthy controls (HC). Evaluated on a dataset containing 14 SZ patients and 14 HC subjects, the proposed method achieved an average classification accuracy of 81.81% in a subject-independent experimental scenario. This study demonstrates that the RAE structure is able to capture the features that differentiate SZ patients from HC subjects.
[ { "created": "Sat, 9 Jul 2022 12:57:35 GMT", "version": "v1" } ]
2022-07-12
[ [ "Wu", "Yihan", "" ], [ "Xia", "Min", "" ], [ "Wang", "Xiuzhu", "" ], [ "Zhang", "Yangsong", "" ] ]
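The RAE data flow described above (raw data into one auto-encoder block, reconstruction recurrently re-fed into the same block, the collected codes passed to a classifier) can be sketched in a few lines. Everything here is a hypothetical stand-in: the dimensions, the tanh encoder, and the untrained random weights are illustrative, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 EEG features compressed to an 8-d code.
D_IN, D_CODE, N_RECUR = 64, 8, 3

# Randomly initialised encoder/decoder weights (training omitted).
W_enc = rng.normal(scale=0.1, size=(D_IN, D_CODE))
W_dec = rng.normal(scale=0.1, size=(D_CODE, D_IN))

def auto_encoder_block(x):
    """One pass: encode to a latent code, decode to a reconstruction."""
    code = np.tanh(x @ W_enc)
    recon = code @ W_dec
    return code, recon

def rae_forward(x, n_recur=N_RECUR):
    """Recurrently re-feed the reconstruction into the same block,
    collecting the codes that would feed the classifier block."""
    codes = []
    for _ in range(n_recur):
        code, x = auto_encoder_block(x)
        codes.append(code)
    return np.concatenate(codes)  # classifier input, length N_RECUR * D_CODE

x = rng.normal(size=D_IN)   # stand-in for one raw EEG segment
features = rae_forward(x)
```

In the actual model the block would be trained end to end with reconstruction and classification losses; this sketch only shows the recurrent data flow.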
2005.06712
Hue Sun Chan
Suman Das, Yi-Hsuan Lin, Robert M. Vernon, Julie D. Forman-Kay, and Hue Sun Chan
Comparative Roles of Charge, $\pi$ and Hydrophobic Interactions in Sequence-Dependent Phase Separation of Intrinsically Disordered Proteins
65 pages (main text and supporting information), 7 main-text figures, 7 supporting figures, 1 supporting table, 135 references; accepted for publication in the Proceedings of the National Academy of Sciences, U.S.A
Proc. Natl. Acad. Sci. U.S.A. 117, 28795-28805 (2020)
10.1073/pnas.2008122117
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Endeavoring toward a transferable, predictive coarse-grained explicit-chain model for biomolecular condensates underlain by liquid-liquid phase separation (LLPS), we conducted multiple-chain simulations of the N-terminal intrinsically disordered region (IDR) of DEAD-box helicase Ddx4, as a test case, to assess the roles of electrostatic, hydrophobic, cation-$\pi$, and aromatic interactions in amino acid sequence-dependent LLPS. We evaluated 3 residue-residue interaction schemes with a shared electrostatic potential. Neither a common hydrophobicity scheme nor one augmented with arginine/lysine-aromatic cation-$\pi$ interactions consistently accounted for the experimental LLPS data on the wildtype, a charge-scrambled, an FtoA, and an RtoK mutant of Ddx4 IDR. In contrast, interactions based on contact statistics among folded globular protein structures reproduce the overall experimental trend, including that the RtoK mutant has a much diminished LLPS propensity. Consistency between simulation and LLPS experiment was also found for RtoK mutants of P-granule protein LAF-1, underscoring that, to a degree, the important LLPS-driving $\pi$-related interactions are embodied in classical statistical potentials. Further elucidation will be necessary, however, especially of phenylalanine's role in condensate assembly because experiments on FtoA and YtoF mutants suggest that LLPS-driving phenylalanine interactions are significantly weaker than those posited by common statistical potentials. Protein-protein electrostatic interactions are modulated by relative permittivity, which depends on protein concentration. Analytical theory suggests that this dependence entails enhanced inter-protein interactions in the condensed phase but more favorable protein-solvent interactions in the dilute phase. The opposing trends lead to a modest overall impact on LLPS.
[ { "created": "Thu, 14 May 2020 03:55:52 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2020 21:01:10 GMT", "version": "v2" } ]
2020-12-10
[ [ "Das", "Suman", "" ], [ "Lin", "Yi-Hsuan", "" ], [ "Vernon", "Robert M.", "" ], [ "Forman-Kay", "Julie D.", "" ], [ "Chan", "Hue Sun", "" ] ]
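The comparison of residue-residue interaction schemes above can be illustrated with a toy mean-field calculation. The contact energies below are made-up numbers chosen only so that arginine-aromatic pairs are stickier than lysine-aromatic pairs, loosely mimicking a folded-structure statistical potential; they are not values from the paper.

```python
import numpy as np

# Hypothetical per-pair contact energies (kT); real work uses full 20x20
# statistical potentials derived from folded-structure contact statistics.
EPS = {("R", "Y"): -1.2, ("K", "Y"): -0.6, ("R", "R"): 0.4,
       ("K", "K"): 0.4, ("Y", "Y"): -1.0}

def pair_energy(a, b):
    # Symmetric lookup; unlisted pairs contribute nothing.
    return EPS.get((a, b), EPS.get((b, a), 0.0))

def mean_field_energy(seq):
    """Sum contact energies over all residue pairs (mean-field mixing)."""
    E = 0.0
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            E += pair_energy(seq[i], seq[j])
    return E

wt = "RYRYRY"                # arginine/tyrosine-rich toy IDR
rtok = wt.replace("R", "K")  # RtoK mutant
```

With these toy numbers the RtoK mutant has a less favourable (higher) mixing energy than the wildtype, echoing its diminished LLPS propensity.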
2205.06313
William Poole
William Poole, Thomas Ouldridge, Manoj Gopalkrishnan, and Erik Winfree
Detailed Balanced Chemical Reaction Networks as Generalized Boltzmann Machines
Based on work in William Poole's Thesis "Compilation and Inference with Chemical Reaction Networks" available at: https://www.dna.caltech.edu/Papers/William_Poole_2022_thesis.pdf
null
null
null
q-bio.MN cond-mat.stat-mech cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Can a micron-sized sack of interacting molecules understand, and adapt to, a constantly fluctuating environment? Cellular life provides an existence proof in the affirmative, but the principles that allow for life's existence are far from being proven. One challenge in engineering and understanding biochemical computation is the intrinsic noise due to chemical fluctuations. In this paper, we draw insights from machine learning theory, chemical reaction network theory, and statistical physics to show that the broad and biologically relevant class of detailed balanced chemical reaction networks is capable of representing and conditioning complex distributions. These results illustrate how a biochemical computer can use intrinsic chemical noise to perform complex computations. Furthermore, we use our explicit physical model to derive thermodynamic costs of inference.
[ { "created": "Thu, 12 May 2022 18:59:43 GMT", "version": "v1" } ]
2022-05-16
[ [ "Poole", "William", "" ], [ "Ouldridge", "Thomas", "" ], [ "Gopalkrishnan", "Manoj", "" ], [ "Winfree", "Erik", "" ] ]
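For a detailed balanced CRN, the stationary distribution of the chemical master equation takes a known closed form; a minimal sketch for the simplest reversible network A ⇌ B, with toy rate constants not drawn from the paper:

```python
import numpy as np
from math import comb

# Toy detailed balanced CRN: A <-> B with forward/backward rate constants.
kf, kb = 2.0, 1.0
N = 10  # conserved total molecule count

# Detailed balance fixes the stationary distribution over counts of B:
# pi(n) ~ C(N, n) * (kf/kb)**n, i.e. binomial with p = kf / (kf + kb).
p = kf / (kf + kb)
pi = np.array([comb(N, n) * p**n * (1 - p)**(N - n) for n in range(N + 1)])

# Verify detailed balance of the master-equation fluxes:
# rate(n -> n+1) * pi(n) == rate(n+1 -> n) * pi(n+1),
# with rate(n -> n+1) = kf*(N-n) and rate(n+1 -> n) = kb*(n+1).
for n in range(N):
    assert np.isclose(kf * (N - n) * pi[n], kb * (n + 1) * pi[n + 1])
```

The per-edge check at the end is exactly the detailed-balance condition that makes this class of networks tractable as probabilistic models.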
1703.01246
Martin Weigt
Guido Uguzzoni, Shalini John Lovis, Francesco Oteri, Alexander Schug, Hendrik Szurmant, Martin Weigt
Large-scale identification of coevolution signals across homo-oligomeric protein interfaces by Direct Coupling Analysis
18 pages, 6 figures, to appear in PNAS
Proc. Natl. Acad. Sci. 114, E2662-E2671 (2017)
10.1073/pnas.1615068114
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins have evolved to perform diverse cellular functions, from serving as reaction catalysts to coordinating cellular propagation and development. Frequently, proteins do not exert their full potential as monomers but rather undergo concerted interactions, either as homo-oligomers or with other proteins as hetero-oligomers. The experimental study of such protein complexes and interactions has been arduous. Theoretical structure prediction methods are an attractive alternative. Here, we investigate homo-oligomeric interfaces by tracing residue coevolution via the global statistical Direct Coupling Analysis (DCA). DCA can accurately infer spatial adjacencies between residues. These adjacencies can be included as constraints in structure-prediction techniques to predict high-resolution models. By taking advantage of the ongoing exponential growth of sequence databases, we go significantly beyond anecdotal cases of a few protein families and apply DCA to a systematic large-scale study of nearly 2000 PFAM protein families with sufficient sequence information and structurally resolved homo-oligomeric interfaces. We find that large interfaces are commonly identified by DCA. We further demonstrate that DCA can differentiate between subfamilies with different binding modes within one large PFAM family. Sequence-derived contact information for the subfamilies proves sufficient to assemble accurate structural models of the diverse protein oligomers. Thus, we provide an approach to investigate oligomerization for arbitrary protein families, leading to structural models complementary to often difficult experimental methods. Combined with ever more abundant sequence data, we anticipate that this study will be instrumental in allowing the structural description of many hetero-protein complexes in the future.
[ { "created": "Fri, 3 Mar 2017 16:51:32 GMT", "version": "v1" } ]
2019-10-07
[ [ "Uguzzoni", "Guido", "" ], [ "Lovis", "Shalini John", "" ], [ "Oteri", "Francesco", "" ], [ "Schug", "Alexander", "" ], [ "Szurmant", "Hendrik", "" ], [ "Weigt", "Martin", "" ] ]
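The core of mean-field DCA (infer direct couplings from the inverse of the correlation matrix) can be sketched on a synthetic binary alignment. Real DCA works on 21-state amino acid alignments with sequence reweighting and stronger regularisation, so this is only the skeleton of the method, on made-up data with one planted coevolving pair:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary "alignment": 2000 sequences, 6 sites; site 4 copies
# site 1 with high probability, mimicking a coevolving contact pair.
M, L = 2000, 6
msa = rng.integers(0, 2, size=(M, L))
copy = rng.random(M) < 0.9
msa[copy, 4] = msa[copy, 1]

# Mean-field DCA in its simplest binary form: couplings are (minus) the
# inverse of the regularised covariance matrix.
f = msa.mean(axis=0)
C = (msa - f).T @ (msa - f) / M + 0.01 * np.eye(L)
J = -np.linalg.inv(C)
np.fill_diagonal(J, 0.0)

# The strongest off-diagonal coupling should link the planted pair.
i, j = np.unravel_index(np.abs(J).argmax(), J.shape)
```

On this toy alignment the top-ranked coupling recovers the planted pair (1, 4), which is the DCA logic the paper scales up to real homo-oligomeric interfaces.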
1506.04622
Harbaljit Sohal
Harbaljit S. Sohal, Konstantin Vassilevski, Andrew Jackson, Stuart N. Baker, Anthony O'Neill
Design and Microfabrication Considerations for Reliable Flexible Intracortical Implants
The final three authors contributed equally to this work
null
null
null
q-bio.NC physics.ins-det
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current microelectrodes designed to record chronic neural activity suffer from recording instabilities due to the modulus mismatch between the electrode materials and the brain. We sought to address this by microfabricating a novel flexible neural probe. Our probe was fabricated from parylene-C with a WTi metal layer, using contact photolithography and reactive ion etching, and incorporates three design features to address this modulus mismatch: a sinusoidal shaft, a rounded tip, and a polyimide anchoring ball. The anchor restricts movement of the electrode recording sites and the shaft accommodates brain motion. We successfully patterned thick metal and parylene-C layers, with a reliable device release process leading to high functional yield. This novel, reliably microfabricated probe can record stable neural activity for up to two years without delamination, surpassing current state-of-the-art intracortical probes. This challenges recent concerns over the long-term reliability of chronic implants that use parylene-C as an insulator, for both research and human applications. The microfabrication and design considerations provided in this manuscript may aid the future development of flexible devices for biomedical applications.
[ { "created": "Mon, 15 Jun 2015 14:53:42 GMT", "version": "v1" }, { "created": "Tue, 3 May 2016 23:23:32 GMT", "version": "v2" } ]
2016-05-05
[ [ "Sohal", "Harbaljit S.", "" ], [ "Vassilevski", "Konstantin", "" ], [ "Jackson", "Andrew", "" ], [ "Baker", "Stuart N.", "" ], [ "O'Neill", "Anthony", "" ] ]
1802.00217
Heidi Teppola
H. Teppola (1 and 2), S. Okujeni (2), M.-L. Linne (1), U. Egert (2) ((1) Department of Signal Processing, Tampere University of Technology, Tampere, Finland, (2) Bernstein Center Freiburg & Department of Microsystems Engineering, IMTEK, Albert-Ludwig University of Freiburg, Freiburg, Germany)
AMPA, NMDA and GABAA receptor mediated network burst dynamics in cortical cultures in vitro
4 pages, 3 figures, 1 table. In Proceedings of the 8th International Workshop on Computational Systems Biology (WCSB2011), eds. H. Koeppl, J. Aćimović, T. Mäki-Marttunen, A. Larjo, and O. Yli-Harja (Zurich, Switzerland: TICSP series), 181-184
Proc. Int. Works. Comp. Syst. Biol. TICSP series 8 (2011) 181-184
null
null
q-bio.NC q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we study the excitatory AMPA and NMDA, and inhibitory GABAA, receptor-mediated dynamical changes in neuronal networks of neonatal rat cortex in vitro. Extracellular network-wide activity was recorded simultaneously with 59 planar electrodes under different pharmacological conditions. We analyzed the changes in overall network activity and network-wide burst frequency between baseline and AMPA receptor (AMPA-R)- or NMDA receptor (NMDA-R)-driven activity, as well as between the latter states and disinhibited activity. Additionally, the spatiotemporal structure of pharmacologically modified bursts and the recruitment of electrodes during network bursts were studied. Our results show that AMPA-Rs and NMDA-Rs have clearly distinct roles in network dynamics: AMPA-Rs are chiefly responsible for initiating network-wide bursts, whereas NMDA-Rs maintain the already initiated activity. GABAA receptors (GABAA-Rs) inhibit AMPA-R-driven network activity more strongly than NMDA-R-driven activity during bursts.
[ { "created": "Thu, 1 Feb 2018 09:50:29 GMT", "version": "v1" } ]
2018-02-02
[ [ "Teppola", "H.", "", "1 and 2" ], [ "Okujeni", "S.", "" ], [ "Linne", "M. -L.", "" ], [ "Egert", "U.", "" ] ]
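A minimal version of network-wide burst detection on multi-electrode data might look as follows; the synthetic spike trains, bin width, and fixed population-rate threshold are all assumptions for illustration, not the detection criterion used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic multi-electrode recording: 59 electrodes, 20 s, with three
# network-wide bursts (dense firing on all electrodes) over a sparse
# background of uniform random spikes.
n_elec, t_max = 59, 20.0
burst_times = [3.0, 9.0, 15.0]
spikes = [np.sort(rng.uniform(0, t_max, 40)) for _ in range(n_elec)]
for t0 in burst_times:
    spikes = [np.sort(np.concatenate([s, rng.uniform(t0, t0 + 0.2, 30)]))
              for s in spikes]

# Bin the pooled spike train and call a network burst wherever the
# population rate crosses a fixed threshold.
bins = np.arange(0, t_max + 0.1, 0.1)
rate, _ = np.histogram(np.concatenate(spikes), bins=bins)
above = rate > 500  # threshold in spikes per 100 ms bin
onsets = bins[np.flatnonzero(np.diff(above.astype(int)) == 1) + 1]
```

Here `onsets` recovers the three planted burst times; per-electrode recruitment order within each burst would be extracted the same way from the individual trains.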
q-bio/0605039
Franco Bagnoli
Carlo Guardiani, Franco Bagnoli
A toy model of polymer stretching
New revised version
J. Chem. Phys. 125, 084908 (2006)
10.1063/1.2335639
null
q-bio.BM
null
We present an extremely simplified model of multiple-domain polymer stretching in an atomic force microscopy experiment. We portray each module as a binary set of contacts and decompose the system energy into a harmonic term (the cantilever) and long-range interaction terms inside each domain. Exact equilibrium computations and Monte Carlo simulations qualitatively reproduce the experimental saw-tooth pattern of force-extension profiles, corresponding (in our model) to first-order phase transitions. We study the influence of the coupling induced by the cantilever and of the pulling speed on the relative heights of the force peaks. The results suggest that the increasing height of the critical force for subsequent unfolding events is an out-of-equilibrium effect due to a finite pulling speed. The dependence of the average unfolding force on the pulling speed is shown to reproduce the experimental logarithmic law.
[ { "created": "Wed, 24 May 2006 10:03:50 GMT", "version": "v1" }, { "created": "Sat, 15 Jul 2006 19:21:52 GMT", "version": "v2" } ]
2008-01-20
[ [ "Guardiani", "Carlo", "" ], [ "Bagnoli", "Franco", "" ] ]
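The exact-equilibrium side of such a toy model can be reproduced by direct enumeration: each domain is folded or unfolded, and a harmonic cantilever couples the total chain length to the pulling coordinate. All parameter values below are invented, and whole domains stand in for the model's individual binary contacts:

```python
import numpy as np
from itertools import product

# Hypothetical parameters: 3 identical domains, each folded (short, zero
# energy) or unfolded (long, costly), pulled through a harmonic spring.
E_u, l_f, l_u = 6.0, 1.0, 5.0  # unfolding energy (kT), domain lengths
k_c = 2.0                      # cantilever stiffness (kT / length^2)

def mean_force(z):
    """Exact equilibrium force at cantilever position z by enumerating
    all 2**3 fold/unfold configurations."""
    ws, fs = [], []
    for state in product([0, 1], repeat=3):        # 1 = unfolded
        L = sum(l_u if s else l_f for s in state)  # chain length
        E = E_u * sum(state) + 0.5 * k_c * (z - L) ** 2
        ws.append(np.exp(-E))                      # Boltzmann weight
        fs.append(k_c * (z - L))                   # spring force
    ws = np.array(ws)
    return float(np.dot(ws, fs) / ws.sum())

zs = np.linspace(3, 16, 200)
forces = [mean_force(z) for z in zs]
# The force rises, drops at each unfolding transition, and rises again:
# the saw-tooth shape of the force-extension profile.
```

Plotting `forces` against `zs` shows the saw-tooth; adding Metropolis dynamics at finite pulling speed would reproduce the out-of-equilibrium peak-height effects discussed in the abstract.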
1504.03983
Wilhelm Braun
Wilhelm Braun, Paul C. Matthews, Rüdiger Thul
First passage times in integrate-and-fire neurons with stochastic thresholds
8 pages, 7 figures. Accepted for publication in Physical Review E
Phys. Rev. E 91, 052701 (2015)
10.1103/PhysRevE.91.052701
null
q-bio.NC math.PR physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a leaky integrate-and-fire neuron with deterministic subthreshold dynamics and a firing threshold that evolves as an Ornstein-Uhlenbeck process. The formulation of this minimal model is motivated by the experimentally observed widespread variation of neural firing thresholds. We show numerically that the mean first passage time can depend non-monotonically on the noise amplitude. For sufficiently large values of the correlation time of the stochastic threshold the mean first passage time is maximal for non-vanishing noise. We provide an explanation for this effect by analytically transforming the original model into a first passage time problem for Brownian motion. This transformation also allows for a perturbative calculation of the first passage time histograms. In turn this provides quantitative insights into the mechanisms that lead to the non-monotonic behaviour of the mean first passage time. The perturbation expansion is in excellent agreement with direct numerical simulations. The approach developed here can be applied to any deterministic subthreshold dynamics and any Gauss-Markov processes for the firing threshold. This opens up the possibility to incorporate biophysically detailed components into the subthreshold dynamics, rendering our approach a powerful framework that sits between traditional integrate-and-fire models and complex mechanistic descriptions of neural dynamics.
[ { "created": "Wed, 15 Apr 2015 18:08:49 GMT", "version": "v1" }, { "created": "Thu, 16 Apr 2015 21:38:03 GMT", "version": "v2" } ]
2015-05-12
[ [ "Braun", "Wilhelm", "" ], [ "Matthews", "Paul C.", "" ], [ "Thul", "Rüdiger", "" ] ]
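The model (deterministic leaky integration toward a suprathreshold drive, with a threshold that evolves as an Ornstein-Uhlenbeck process) is straightforward to simulate directly; an Euler-Maruyama sketch with hypothetical parameter values, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

tau_v, mu = 1.0, 2.0       # membrane time constant, suprathreshold drive
tau_th, sigma = 0.5, 0.3   # threshold correlation time, noise amplitude
theta0 = 1.0               # mean threshold
dt = 1e-3

def first_passage_time():
    v = 0.0
    theta = theta0 + sigma * rng.normal()  # stationary OU initial condition
    t = 0.0
    while v < theta and t < 50.0:          # 50 s safety cap
        v += dt * (mu - v) / tau_v                        # deterministic LIF
        theta += (dt * (theta0 - theta) / tau_th          # OU drift
                  + sigma * np.sqrt(2 * dt / tau_th) * rng.normal())
        t += dt
    return t

mfpt = np.mean([first_passage_time() for _ in range(500)])
```

Sweeping `sigma` at fixed `tau_th` in this sketch is one way to explore numerically the non-monotonic noise dependence of the mean first passage time that the paper treats perturbatively.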
1609.08318
Denis Michel
Denis Michel
Conformational selection or induced fit? New insights from old principles
10 pages, 4 figures
Michel, D. 2016. Conformational selection or induced fit? New insights from old principles. Biochimie 128-129, 48-54
10.1016/j.biochi.2016.06.012
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A long-standing debate in biochemistry is whether the conformational changes observed during biomolecular interactions proceed through conformational selection (of preexisting isoforms) or induced fit (ligand-induced 3D reshaping). The latter mechanism has been invoked in certain circumstances, for example to explain the non-Michaelian activity of monomeric enzymes like glucokinase. But the relative importance of induced fit has recently been depreciated in favor of conformational selection, assumed to be always sufficient and predominant in general, in particular for glucokinase. The relative contributions of conformational selection and induced fit are reconsidered here, in and out of equilibrium, in the light of earlier concepts such as the cyclic equilibrium rule and the turning wheel of Wyman, using single-molecule state probabilities, one-way fluxes, and net fluxes. The conditions for a switch from conformational selection to induced fit at a given ligand concentration are explicitly determined. Out of equilibrium, inspection of the enzyme state circuit shows that conformational selection alone would give a Michaelian reaction rate, but not the established nonlinear behaviour of glucokinase. Moreover, when induced fit and conformational selection coexist and allow kinetic cooperativity, the net flux emerging in the linkage cycle necessarily corresponds to the induced-fit path.
[ { "created": "Tue, 27 Sep 2016 08:52:24 GMT", "version": "v1" } ]
2016-09-28
[ [ "Michel", "Denis", "" ] ]
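The flux bookkeeping in such a linkage cycle can be made concrete with a four-state master equation joining a selection path and an induced-fit path; the rate constants below are invented for illustration and fitted to no enzyme:

```python
import numpy as np

# Hypothetical 4-state cycle joining conformational selection (E -> E*
# -> E*S) and induced fit (E -> ES -> E*S): states [E, E*, ES, E*S].
L_conc = 2.0  # ligand concentration (arbitrary units)
k = {          # illustrative rate constants
    (0, 1): 1.0, (1, 0): 4.0,            # E <-> E*  (selection step)
    (1, 3): 3.0 * L_conc, (3, 1): 1.0,   # E* + L <-> E*S
    (0, 2): 2.0 * L_conc, (2, 0): 1.0,   # E + L <-> ES (induced-fit step)
    (2, 3): 5.0, (3, 2): 1.0,            # ES <-> E*S (reshaping)
}

# Build the master-equation generator and solve for the steady state.
Q = np.zeros((4, 4))
for (i, j), rate in k.items():
    Q[i, j] += rate
np.fill_diagonal(Q, -Q.sum(axis=1))
w, v = np.linalg.eig(Q.T)                      # stationary distribution =
p = np.real(v[:, np.argmin(np.abs(w))])        # null vector of Q^T
p /= p.sum()

# Net flux on the E <-> E* edge: one-way fluxes minus each other.
J_net = p[0] * k[(0, 1)] - p[1] * k[(1, 0)]
```

With these rates the Kolmogorov cycle condition fails, so a nonzero net flux circulates; its negative sign on the E ⇌ E* edge means the cycle runs through the induced-fit branch, the situation the abstract associates with kinetic cooperativity.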
2202.00495
Abicumaran Uthamacumaran
Abicumaran Uthamacumaran, Mohamed Abdouh, Kinshuk Sengupta, Zu-hua Gao, Stefano Forte, Thupten Tsering, Julia V Burnier, Goffredo Arena
Machine Intelligence-Driven Classification of Cancer Patients-Derived Extracellular Vesicles using Fluorescence Correlation Spectroscopy: Results from a Pilot Study
23 pages, 6 figures
Neural Computing and Applications (2023)
10.1007/s00521-022-08113-4
volume 35: 8407--8422
q-bio.QM cs.LG q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Patient-derived extracellular vesicles (EVs), which contain a complex biological cargo, are a valuable source of liquid biopsy diagnostics to aid in early detection, cancer screening, and precision nanotherapeutics. In this study, we hypothesized that coupling cancer patient blood-derived EVs to time-resolved spectroscopy and artificial intelligence (AI) could provide robust cancer screening and follow-up tools. Methods: Fluorescence correlation spectroscopy (FCS) measurements were performed on EVs derived from 24 blood samples. Blood samples were obtained from 15 cancer patients (presenting 5 different types of cancer) and 9 healthy controls (including patients with benign lesions). The obtained FCS autocorrelation spectra were processed into power spectra using the fast Fourier transform algorithm and subjected to various machine learning algorithms to distinguish cancer spectra from healthy control spectra. Results and Applications: The performance of an AdaBoost random forest (RF) classifier, a support vector machine, and a multilayer perceptron was tested on selected frequencies in the N=118 power spectra. The RF classifier exhibited 90% classification accuracy and high sensitivity and specificity in distinguishing the FCS power spectra of cancer patients from those of healthy controls. Further, an image convolutional neural network (CNN), a ResNet network, and a quantum CNN were assessed on the power spectral images as additional validation tools. All image-based CNNs exhibited nearly equal classification performance, with an accuracy of roughly 82% and reasonably high sensitivity and specificity scores. Our pilot study demonstrates that AI algorithms coupled to time-resolved FCS power spectra can accurately and differentially classify complex patient-derived EVs from cancer samples of distinct tissue subtypes.
[ { "created": "Tue, 1 Feb 2022 15:46:36 GMT", "version": "v1" } ]
2023-05-10
[ [ "Uthamacumaran", "Abicumaran", "" ], [ "Abdouh", "Mohamed", "" ], [ "Sengupta", "Kinshuk", "" ], [ "Gao", "Zu-hua", "" ], [ "Forte", "Stefano", "" ], [ "Tsering", "Thupten", "" ], [ "Burnier", "Julia V", ""...
Patient-derived extracellular vesicles (EVs) that contain a complex biological cargo are a valuable source of liquid biopsy diagnostics to aid in early detection, cancer screening, and precision nanotherapeutics. In this study, we predicted that coupling cancer patient blood-derived EVs to time-resolved spectroscopy and artificial intelligence (AI) could provide robust cancer screening and follow-up tools. Methods: Fluorescence correlation spectroscopy (FCS) measurements were performed on EVs derived from 24 blood samples. Blood samples were obtained from 15 cancer patients (presenting 5 different types of cancers) and 9 healthy controls (including patients with benign lesions). The obtained FCS autocorrelation spectra were processed into power spectra using the Fast-Fourier Transform algorithm and subjected to various machine learning algorithms to distinguish cancer spectra from healthy control spectra. Results and Applications: The performance of an AdaBoost Random Forest (RF) classifier, a support vector machine, and a multilayer perceptron was tested on selected frequencies in the N=118 power spectra. The RF classifier exhibited 90% classification accuracy and high sensitivity and specificity in distinguishing the FCS power spectra of cancer patients from those of healthy controls. Further, an image convolutional neural network (CNN), a ResNet network, and a quantum CNN were assessed on the power spectral images as additional validation tools. All image-based CNNs exhibited nearly equal classification performance, with an accuracy of roughly 82% and reasonably high sensitivity and specificity scores. Our pilot study demonstrates that AI algorithms coupled to time-resolved FCS power spectra can accurately and differentially classify complex patient-derived EVs from different cancer samples of distinct tissue subtypes.
1611.10042
Adam Stefanko Dr.
Christian Thiede, Gerhard Ehninger, Kai Simons, Michal Grzybek, Adam Stefanko
Lipidomic approach for stratification of Acute Myeloid Leukemia patients
20 pages, 3 figures, 1 table
null
null
null
q-bio.TO q-bio.BM q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The pathogenesis and progression of many tumors, including hematologic malignancies, are highly dependent on enhanced lipogenesis. De novo fatty-acid synthesis permits accelerated proliferation of tumor cells by providing structural components to build the membranes. It may also lead to alterations in the physicochemical properties of the formed membranes, which can have an impact on signaling or even increase resistance to drugs in cancer cells. Cancer type-specific lipid profiles would allow understanding of the actual effects of lipid changes and could therefore potentially serve as fingerprints for individual tumors and be explored as diagnostic markers. We have used a shotgun MS approach to identify lipid patterns in different types of acute myeloid leukemia (AML) patients that either show no karyotype changes or belong to the t(8;21) or inv(16) types. The observed differences in the lipidomes of t(8;21) and inv(16) patients, as compared to AML patients without karyotype changes, concentrate mostly on a substantial modulation of ceramide/sphingolipid synthesis. Significant changes in the physicochemical properties of the membranes between the t(8;21) and the other patients were also noted, related to a marked alteration in the saturation levels of lipids. The revealed differences in the lipid profiles of various AML types increase our understanding of the affected biochemical pathways and can potentially serve as diagnostic tools.
[ { "created": "Wed, 30 Nov 2016 08:15:44 GMT", "version": "v1" } ]
2016-12-04
[ [ "Thiede", "Christian", "" ], [ "Ehninger", "Gerhard", "" ], [ "Simons", "Kai", "" ], [ "Grzybek", "Michal", "" ], [ "Stefanko", "Adam", "" ] ]
The pathogenesis and progression of many tumors, including hematologic malignancies, are highly dependent on enhanced lipogenesis. De novo fatty-acid synthesis permits accelerated proliferation of tumor cells by providing structural components to build the membranes. It may also lead to alterations in the physicochemical properties of the formed membranes, which can have an impact on signaling or even increase resistance to drugs in cancer cells. Cancer type-specific lipid profiles would allow understanding of the actual effects of lipid changes and could therefore potentially serve as fingerprints for individual tumors and be explored as diagnostic markers. We have used a shotgun MS approach to identify lipid patterns in different types of acute myeloid leukemia (AML) patients that either show no karyotype changes or belong to the t(8;21) or inv(16) types. The observed differences in the lipidomes of t(8;21) and inv(16) patients, as compared to AML patients without karyotype changes, concentrate mostly on a substantial modulation of ceramide/sphingolipid synthesis. Significant changes in the physicochemical properties of the membranes between the t(8;21) and the other patients were also noted, related to a marked alteration in the saturation levels of lipids. The revealed differences in the lipid profiles of various AML types increase our understanding of the affected biochemical pathways and can potentially serve as diagnostic tools.
1307.6870
Hugo Brandao
Hugo B. Brandao, Hussain Sangji, Elvis Pandzic, Susanne Bechstedt, Gary J. Brouhard, Paul W. Wiseman
Measuring ligand-receptor binding kinetics and dynamics using k-space image correlation spectroscopy
14 pages, 5 figures
null
10.1016/j.ymeth.2013.07.042
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate measurements of kinetic rate constants for interacting biomolecules are crucial for understanding the mechanisms underlying intracellular signalling pathways. The magnitude of binding rates plays a very important molecular regulatory role, which can lead to very different cellular physiological responses under different conditions. Here, we extend the k-space image correlation spectroscopy (kICS) technique to study the kinetic binding rates of systems wherein: (a) fluorescently labelled, free ligands in solution interact with unlabelled, diffusing receptors in the plasma membrane and (b) labelled, diffusing receptors are allowed to bind/unbind and interconvert between two different diffusing states on the plasma membrane. We develop the necessary mathematical framework for the kICS analysis and demonstrate how to extract the relevant kinetic binding parameters of the underlying molecular system from fluorescence video-microscopy image time series. Finally, by examining real data for two model experimental systems, we demonstrate how kICS can be a powerful tool to measure molecular transport coefficients and binding kinetics.
[ { "created": "Thu, 25 Jul 2013 20:17:36 GMT", "version": "v1" } ]
2013-12-02
[ [ "Brandao", "Hugo B.", "" ], [ "Sangji", "Hussain", "" ], [ "Pandzic", "Elvis", "" ], [ "Bechstedt", "Susanne", "" ], [ "Brouhard", "Gary J.", "" ], [ "Wiseman", "Paul W.", "" ] ]
Accurate measurements of kinetic rate constants for interacting biomolecules are crucial for understanding the mechanisms underlying intracellular signalling pathways. The magnitude of binding rates plays a very important molecular regulatory role, which can lead to very different cellular physiological responses under different conditions. Here, we extend the k-space image correlation spectroscopy (kICS) technique to study the kinetic binding rates of systems wherein: (a) fluorescently labelled, free ligands in solution interact with unlabelled, diffusing receptors in the plasma membrane and (b) labelled, diffusing receptors are allowed to bind/unbind and interconvert between two different diffusing states on the plasma membrane. We develop the necessary mathematical framework for the kICS analysis and demonstrate how to extract the relevant kinetic binding parameters of the underlying molecular system from fluorescence video-microscopy image time series. Finally, by examining real data for two model experimental systems, we demonstrate how kICS can be a powerful tool to measure molecular transport coefficients and binding kinetics.
1808.03672
Peter Clote
Amir H. Bayegan and Peter Clote
RNAmountAlign: efficient software for local, global, semiglobal pairwise and multiple RNA sequence/structure alignment
28 pages, 8 figures, 5 tables, 5 supplementary figures, 3 supplementary tables
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alignment of structural RNAs is an important problem with a wide range of applications. Since function is often determined by molecular structure, RNA alignment programs should take into account both sequence and base-pairing information for structural homology identification. A number of successful alignment programs are heuristic versions of Sankoff's optimal algorithm. Most of them require O(n^4) run time. This paper describes C++ software, RNAmountAlign, for RNA sequence/structure alignment that runs in O(n^3) time and O(n^2) space; moreover, our software returns a p-value (transformable to expect value E) based on Karlin-Altschul statistics for local alignment, as well as parameter fitting for local and global alignment. Using incremental mountain height, a representation of structural information computable in cubic time, RNAmountAlign implements quadratic time pairwise local, global and global/semiglobal (query search) alignment using a weighted combination of sequence and structural similarity. RNAmountAlign is capable of performing progressive multiple alignment as well. Benchmarking of RNAmountAlign against LocARNA, LARA, FOLDALIGN, DYNALIGN and STRAL shows that RNAmountAlign has reasonably good accuracy and much faster run time supporting all alignment types.
[ { "created": "Fri, 10 Aug 2018 18:41:57 GMT", "version": "v1" } ]
2018-08-14
[ [ "Bayegan", "Amir H.", "" ], [ "Clote", "Peter", "" ] ]
Alignment of structural RNAs is an important problem with a wide range of applications. Since function is often determined by molecular structure, RNA alignment programs should take into account both sequence and base-pairing information for structural homology identification. A number of successful alignment programs are heuristic versions of Sankoff's optimal algorithm. Most of them require O(n^4) run time. This paper describes C++ software, RNAmountAlign, for RNA sequence/structure alignment that runs in O(n^3) time and O(n^2) space; moreover, our software returns a p-value (transformable to expect value E) based on Karlin-Altschul statistics for local alignment, as well as parameter fitting for local and global alignment. Using incremental mountain height, a representation of structural information computable in cubic time, RNAmountAlign implements quadratic time pairwise local, global and global/semiglobal (query search) alignment using a weighted combination of sequence and structural similarity. RNAmountAlign is capable of performing progressive multiple alignment as well. Benchmarking of RNAmountAlign against LocARNA, LARA, FOLDALIGN, DYNALIGN and STRAL shows that RNAmountAlign has reasonably good accuracy and much faster run time supporting all alignment types.
1503.05074
Johanna Paijmans
Gloria G. Fortes and Johanna L.A. Paijmans
Analysis of whole mitogenomes from ancient samples
17 pages, 2 figures
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ancient mitochondrial DNA has been used in a wide variety of palaeontological and archaeological studies, ranging from population dynamics of extinct species to patterns of domestication. Most of these studies have traditionally been based on the analysis of short fragments from the mitochondrial control region, analysed using PCR coupled with Sanger sequencing. With the introduction of high-throughput sequencing, as well as new enrichment technologies, the recovery of full mitochondrial genomes (mitogenomes) from ancient specimens has become significantly less complicated. Here we present a protocol to build ancient extracts into Illumina high-throughput sequencing libraries, with subsequent Agilent array-based capture to enrich for the desired mitogenome. Both are based on previously published protocols, with the introduction of several improvements aimed at increasing the recovery of short DNA fragments while keeping the cost and effort requirements low. This protocol was designed for enrichment of mitochondrial DNA in ancient or degraded samples. However, the protocols can be easily adapted for building libraries for shotgun sequencing of whole genomes, or for enrichment of other genomic regions.
[ { "created": "Tue, 17 Mar 2015 14:33:56 GMT", "version": "v1" }, { "created": "Mon, 25 Apr 2016 08:29:00 GMT", "version": "v2" } ]
2016-04-26
[ [ "Fortes", "Gloria G.", "" ], [ "Paijmans", "Johanna L. A.", "" ] ]
Ancient mitochondrial DNA has been used in a wide variety of palaeontological and archaeological studies, ranging from population dynamics of extinct species to patterns of domestication. Most of these studies have traditionally been based on the analysis of short fragments from the mitochondrial control region, analysed using PCR coupled with Sanger sequencing. With the introduction of high-throughput sequencing, as well as new enrichment technologies, the recovery of full mitochondrial genomes (mitogenomes) from ancient specimens has become significantly less complicated. Here we present a protocol to build ancient extracts into Illumina high-throughput sequencing libraries, with subsequent Agilent array-based capture to enrich for the desired mitogenome. Both are based on previously published protocols, with the introduction of several improvements aimed at increasing the recovery of short DNA fragments while keeping the cost and effort requirements low. This protocol was designed for enrichment of mitochondrial DNA in ancient or degraded samples. However, the protocols can be easily adapted for building libraries for shotgun sequencing of whole genomes, or for enrichment of other genomic regions.
0912.4465
Charlotte Gils
C. Gils, J.L. Wrana, and W.K. Abou Salem
A quantum spin approach to histone dynamics
12 pages, 6 figures
null
null
null
q-bio.MN q-bio.BM
http://creativecommons.org/licenses/by/3.0/
Post-translational modifications of histone proteins are an important factor in epigenetic control that serves to regulate transcription, depending on the particular modification states of the histone proteins. We study the stochastic dynamics of histone protein states, taking into account a feedback mechanism where modified nucleosomes recruit enzymes that diffuse to adjacent nucleosomes. We map the system onto a quantum spin system whose dynamics is generated by a non-Hermitian Hamiltonian. Making an ansatz for the solution as a tensor product state leads to nonlinear partial differential equations that describe the dynamics of the system. Multiple stable histone states appear in a parameter regime whose size increases with increasing number of modification sites. We discuss the role of the spatial dependence, and we consider the effects of spatially heterogeneous enzymatic activity. Finally, we consider multistability in a model of several types of correlated post-translational modifications.
[ { "created": "Tue, 22 Dec 2009 17:36:07 GMT", "version": "v1" }, { "created": "Sun, 12 Sep 2010 22:16:16 GMT", "version": "v2" } ]
2010-09-14
[ [ "Gils", "C.", "" ], [ "Wrana", "J. L.", "" ], [ "Salem", "W. K. Abou", "" ] ]
Post-translational modifications of histone proteins are an important factor in epigenetic control that serves to regulate transcription, depending on the particular modification states of the histone proteins. We study the stochastic dynamics of histone protein states, taking into account a feedback mechanism where modified nucleosomes recruit enzymes that diffuse to adjacent nucleosomes. We map the system onto a quantum spin system whose dynamics is generated by a non-Hermitian Hamiltonian. Making an ansatz for the solution as a tensor product state leads to nonlinear partial differential equations that describe the dynamics of the system. Multiple stable histone states appear in a parameter regime whose size increases with increasing number of modification sites. We discuss the role of the spatial dependence, and we consider the effects of spatially heterogeneous enzymatic activity. Finally, we consider multistability in a model of several types of correlated post-translational modifications.
2303.00233
Wenzhuo Tang
Wenzhuo Tang, Hongzhi Wen, Renming Liu, Jiayuan Ding, Wei Jin, Yuying Xie, Hui Liu, Jiliang Tang
Single-Cell Multimodal Prediction via Transformers
CIKM 2023
In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM 23), 2023, Birmingham, United Kingdom
10.1145/3583780.3615061
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
The recent development of multimodal single-cell technology has made it possible to acquire multiple omics data from individual cells, thereby enabling a deeper understanding of cellular states and dynamics. Nevertheless, the proliferation of multimodal single-cell data also introduces tremendous challenges in modeling the complex interactions among different modalities. Recently advanced methods focus on constructing static interaction graphs and applying graph neural networks (GNNs) to learn from multimodal data. However, such static graphs can be suboptimal as they do not take advantage of downstream task information; meanwhile, GNNs also have some inherent limitations when GNN layers are deeply stacked. To tackle these issues, in this work we investigate how to leverage transformers for multimodal single-cell data in an end-to-end manner while exploiting downstream task information. In particular, we propose the scMoFormer framework, which can readily incorporate external domain knowledge and model the interactions within each modality and across modalities. Extensive experiments demonstrate that scMoFormer achieves superior performance on various benchmark datasets. Remarkably, scMoFormer won a Kaggle silver medal with a rank of 24/1221 (Top 2%) without ensembling in a NeurIPS 2022 competition. Our implementation is publicly available on GitHub.
[ { "created": "Wed, 1 Mar 2023 05:03:23 GMT", "version": "v1" }, { "created": "Thu, 17 Aug 2023 07:10:50 GMT", "version": "v2" }, { "created": "Fri, 13 Oct 2023 15:32:57 GMT", "version": "v3" } ]
2023-10-16
[ [ "Tang", "Wenzhuo", "" ], [ "Wen", "Hongzhi", "" ], [ "Liu", "Renming", "" ], [ "Ding", "Jiayuan", "" ], [ "Jin", "Wei", "" ], [ "Xie", "Yuying", "" ], [ "Liu", "Hui", "" ], [ "Tang", "Jiliang", ...
The recent development of multimodal single-cell technology has made it possible to acquire multiple omics data from individual cells, thereby enabling a deeper understanding of cellular states and dynamics. Nevertheless, the proliferation of multimodal single-cell data also introduces tremendous challenges in modeling the complex interactions among different modalities. Recently advanced methods focus on constructing static interaction graphs and applying graph neural networks (GNNs) to learn from multimodal data. However, such static graphs can be suboptimal as they do not take advantage of downstream task information; meanwhile, GNNs also have some inherent limitations when GNN layers are deeply stacked. To tackle these issues, in this work we investigate how to leverage transformers for multimodal single-cell data in an end-to-end manner while exploiting downstream task information. In particular, we propose the scMoFormer framework, which can readily incorporate external domain knowledge and model the interactions within each modality and across modalities. Extensive experiments demonstrate that scMoFormer achieves superior performance on various benchmark datasets. Remarkably, scMoFormer won a Kaggle silver medal with a rank of 24/1221 (Top 2%) without ensembling in a NeurIPS 2022 competition. Our implementation is publicly available on GitHub.
2301.05255
Amanda Lea
Amanda J. Lea, Andrew G. Clark, Andrew W. Dahl, Orrin Devinsky, Angela R. Garcia, Christopher D. Golden, Joseph Kamau, Thomas S. Kraft, Yvonne A. L. Lim, Dino Martins, Donald Mogoi, Paivi Pajukanta, George Perry, Herman Pontzer, Benjamin C. Trumble, Samuel S. Urlacher, Vivek V. Venkataraman, Ian J. Wallace, Michael Gurven, Daniel Lieberman, Julien F. Ayroles
Evolutionary mismatch and the role of GxE interactions in human disease
null
null
null
null
q-bio.PE q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Globally, we are witnessing the rise of complex, non-communicable diseases (NCDs) related to changes in our daily environments. Obesity, asthma, cardiovascular disease, and type 2 diabetes are part of a long list of "lifestyle" diseases that were rare throughout human history but are now common. A key idea from anthropology and evolutionary biology--the evolutionary mismatch hypothesis--seeks to explain this phenomenon. It posits that humans evolved in environments that radically differ from the ones experienced by most people today, and thus traits that were advantageous in past environments may now be "mismatched" and disease-causing. This hypothesis is, at its core, a genetic one: it predicts that loci with a history of selection will exhibit "genotype by environment" (GxE) interactions and have differential health effects in ancestral versus modern environments. Here, we discuss how this concept could be leveraged to uncover the genetic architecture of NCDs in a principled way. Specifically, we advocate for partnering with small-scale, subsistence-level groups that are currently transitioning from environments that are arguably more "matched" with their recent evolutionary history to those that are more "mismatched". These populations provide diverse genetic backgrounds as well as the needed levels and types of environmental variation necessary for mapping GxE interactions in an explicit mismatch framework. Such work would make important contributions to our understanding of environmental and genetic risk factors for NCDs across diverse ancestries and sociocultural contexts.
[ { "created": "Thu, 12 Jan 2023 19:07:59 GMT", "version": "v1" }, { "created": "Mon, 13 Feb 2023 18:27:12 GMT", "version": "v2" } ]
2023-02-14
[ [ "Lea", "Amanda J.", "" ], [ "Clark", "Andrew G.", "" ], [ "Dahl", "Andrew W.", "" ], [ "Devinsky", "Orrin", "" ], [ "Garcia", "Angela R.", "" ], [ "Golden", "Christopher D.", "" ], [ "Kamau", "Joseph", "" ...
Globally, we are witnessing the rise of complex, non-communicable diseases (NCDs) related to changes in our daily environments. Obesity, asthma, cardiovascular disease, and type 2 diabetes are part of a long list of "lifestyle" diseases that were rare throughout human history but are now common. A key idea from anthropology and evolutionary biology--the evolutionary mismatch hypothesis--seeks to explain this phenomenon. It posits that humans evolved in environments that radically differ from the ones experienced by most people today, and thus traits that were advantageous in past environments may now be "mismatched" and disease-causing. This hypothesis is, at its core, a genetic one: it predicts that loci with a history of selection will exhibit "genotype by environment" (GxE) interactions and have differential health effects in ancestral versus modern environments. Here, we discuss how this concept could be leveraged to uncover the genetic architecture of NCDs in a principled way. Specifically, we advocate for partnering with small-scale, subsistence-level groups that are currently transitioning from environments that are arguably more "matched" with their recent evolutionary history to those that are more "mismatched". These populations provide diverse genetic backgrounds as well as the needed levels and types of environmental variation necessary for mapping GxE interactions in an explicit mismatch framework. Such work would make important contributions to our understanding of environmental and genetic risk factors for NCDs across diverse ancestries and sociocultural contexts.
2109.12412
Ana Carpio
A. Carpio, E. Pierret
Uncertainty quantification in covid-19 spread: lockdown effects
null
Results in Physics, 105375, 2022
10.1016/j.rinp.2022.105375
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a Bayesian inference framework to quantify uncertainties in epidemiological models. We use SEIJR and SIJR models involving populations of susceptible, exposed, infective, diagnosed, dead and recovered individuals to infer from covid-19 data rate constants, as well as their variations in response to lockdown measures. To account for confinement, we distinguish two susceptible populations at different risk: confined and unconfined. We show that transmission and recovery rates within them vary in response to facts. A key unknown to predict the evolution of the epidemic is the fraction of the population affected by the virus, including asymptomatic subjects. Our study tracks its time evolution with quantified uncertainty from available official data from the onset of the epidemic, limited, however, by the data quality. We exemplify the technique with data from Spain, a country in which drastic lockdowns were enforced late and maintained for months. With late action and in the absence of other measures, spread is delayed but not stopped unless a large enough fraction of the population is confined until the asymptomatic population is depleted. To some extent, confinement could be replaced by strong distancing through masks in adequate circumstances.
[ { "created": "Sat, 25 Sep 2021 17:40:12 GMT", "version": "v1" } ]
2022-03-08
[ [ "Carpio", "A.", "" ], [ "Pierret", "E.", "" ] ]
We develop a Bayesian inference framework to quantify uncertainties in epidemiological models. We use SEIJR and SIJR models involving populations of susceptible, exposed, infective, diagnosed, dead and recovered individuals to infer from covid-19 data rate constants, as well as their variations in response to lockdown measures. To account for confinement, we distinguish two susceptible populations at different risk: confined and unconfined. We show that transmission and recovery rates within them vary in response to facts. A key unknown to predict the evolution of the epidemic is the fraction of the population affected by the virus, including asymptomatic subjects. Our study tracks its time evolution with quantified uncertainty from available official data from the onset of the epidemic, limited, however, by the data quality. We exemplify the technique with data from Spain, a country in which drastic lockdowns were enforced late and maintained for months. With late action and in the absence of other measures, spread is delayed but not stopped unless a large enough fraction of the population is confined until the asymptomatic population is depleted. To some extent, confinement could be replaced by strong distancing through masks in adequate circumstances.
2309.10821
Marat Rvachev
Marat M. Rvachev
An Operating Principle of the Cerebral Cortex, and a Cellular Mechanism for Attentional Trial-and-Error Pattern Learning and Useful Classification Extraction
20 pages, 12 figures
Front. Neural Circuits 18:1280604 (2024)
10.3389/fncir.2024.1280604
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
A feature of the brains of intelligent animals is the ability to learn to respond to an ensemble of active neuronal inputs with a behaviorally appropriate ensemble of active neuronal outputs. Previously, a hypothesis was proposed on how this mechanism is implemented at the cellular level within the neocortical pyramidal neuron: the apical tuft or perisomatic inputs initiate "guess" neuron firings, while the basal dendrites identify input patterns based on excited synaptic clusters, with the cluster excitation strength adjusted based on reward feedback. This simple mechanism allows neurons to learn to classify their inputs in a surprisingly intelligent manner. Here, we revise and extend this hypothesis. We modify synaptic plasticity rules to align with behavioral time scale synaptic plasticity (BTSP) observed in hippocampal area CA1, making the framework more biophysically and behaviorally plausible. The neurons for the guess firings are selected in a voluntary manner via feedback connections to apical tufts in the neocortical layer 1, leading to dendritic Ca2+ spikes with burst firing, which are postulated to be neural correlates of attentional, aware processing. Once learned, the neuronal input classification is executed without voluntary or conscious control, enabling hierarchical incremental learning of classifications that is effective in our inherently classifiable world. In addition to voluntary burst firing, we propose that pyramidal neuron burst firing can also be involuntary, likewise initiated via apical tuft inputs, drawing attention towards important cues such as novelty and noxious stimuli. We classify the excitations of neocortical pyramidal neurons into four categories based on their excitation pathway: attentional versus automatic and voluntary/acquired versus involuntary. Additionally, we hypothesize that dendrites within pyramidal neuron minicolumn bundles are coupled via depolarization...
[ { "created": "Mon, 21 Aug 2023 19:19:26 GMT", "version": "v1" }, { "created": "Fri, 24 Nov 2023 05:08:29 GMT", "version": "v2" }, { "created": "Tue, 6 Feb 2024 02:37:28 GMT", "version": "v3" } ]
2024-03-19
[ [ "Rvachev", "Marat M.", "" ] ]
A feature of the brains of intelligent animals is the ability to learn to respond to an ensemble of active neuronal inputs with a behaviorally appropriate ensemble of active neuronal outputs. Previously, a hypothesis was proposed on how this mechanism is implemented at the cellular level within the neocortical pyramidal neuron: the apical tuft or perisomatic inputs initiate "guess" neuron firings, while the basal dendrites identify input patterns based on excited synaptic clusters, with the cluster excitation strength adjusted based on reward feedback. This simple mechanism allows neurons to learn to classify their inputs in a surprisingly intelligent manner. Here, we revise and extend this hypothesis. We modify synaptic plasticity rules to align with behavioral time scale synaptic plasticity (BTSP) observed in hippocampal area CA1, making the framework more biophysically and behaviorally plausible. The neurons for the guess firings are selected in a voluntary manner via feedback connections to apical tufts in the neocortical layer 1, leading to dendritic Ca2+ spikes with burst firing, which are postulated to be neural correlates of attentional, aware processing. Once learned, the neuronal input classification is executed without voluntary or conscious control, enabling hierarchical incremental learning of classifications that is effective in our inherently classifiable world. In addition to voluntary burst firing, we propose that pyramidal neuron burst firing can also be involuntary, likewise initiated via apical tuft inputs, drawing attention towards important cues such as novelty and noxious stimuli. We classify the excitations of neocortical pyramidal neurons into four categories based on their excitation pathway: attentional versus automatic and voluntary/acquired versus involuntary. Additionally, we hypothesize that dendrites within pyramidal neuron minicolumn bundles are coupled via depolarization...
1811.03612
Hamdan Awan
Hamdan Awan, Raviraj S. Adve, Nigel Wallbridge, Carrol Plummer, and Andrew W. Eckford
Communication and Information Theory of Single Action Potential Signals in Plants
13 Pages, 15 Figures, Accepted for Publication in IEEE Transactions on NanoBioscience
null
10.1109/TNB.2018.2880924
null
q-bio.NC cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many plants, such as Mimosa pudica (the sensitive plant), employ electrochemical signals known as action potentials (APs) for rapid intercellular communication. In this paper, we consider a reaction diffusion model of individual AP signals to analyze APs from a communication and information theoretic perspective. We use concepts from molecular communication to explain the underlying process of information transfer in a plant for a single AP pulse that is shared with one or more receiver cells. We also use the chemical Langevin equation to accommodate the deterministic as well as stochastic component of the system. Finally we present an information theoretic analysis of single action potentials, obtaining achievable information rates for these signals. We show that, in general, the presence of an AP signal can increase the mutual information and information propagation speed among neighboring cells with receivers in different settings.
[ { "created": "Thu, 8 Nov 2018 18:55:38 GMT", "version": "v1" } ]
2019-04-05
[ [ "Awan", "Hamdan", "" ], [ "Adve", "Raviraj S.", "" ], [ "Wallbridge", "Nigel", "" ], [ "Plummer", "Carrol", "" ], [ "Eckford", "Andrew W.", "" ] ]
Many plants, such as Mimosa pudica (the sensitive plant), employ electrochemical signals known as action potentials (APs) for rapid intercellular communication. In this paper, we consider a reaction diffusion model of individual AP signals to analyze APs from a communication and information theoretic perspective. We use concepts from molecular communication to explain the underlying process of information transfer in a plant for a single AP pulse that is shared with one or more receiver cells. We also use the chemical Langevin equation to accommodate the deterministic as well as stochastic component of the system. Finally we present an information theoretic analysis of single action potentials, obtaining achievable information rates for these signals. We show that, in general, the presence of an AP signal can increase the mutual information and information propagation speed among neighboring cells with receivers in different settings.
q-bio/0404022
Michael Deem
Taison Tan, Leonard D. Bogarad, Michael W. Deem
Modulation of Base Specific Mutation and Recombination Rates Enables Functional Adaptation within the Context of the Genetic Code
30 pages; 6 figures; 6 tables; to appear in J. Mol. Evol
null
null
null
q-bio.GN q-bio.PE
null
The persistence of life requires populations to adapt at a rate commensurate with the dynamics of their environment. Successful populations that inhabit highly variable environments have evolved mechanisms to increase the likelihood of successful adaptation. We introduce a $64 \times 64$ matrix to quantify base-specific mutation potential, analyzing four different replicative systems, error-prone PCR, mouse antibodies, a nematode, and Drosophila. Mutational tendencies are correlated with the structural evolution of proteins. In systems under strong selective pressure, mutational biases are shown to favor the adaptive search of space, either by base mutation or by recombination. Such adaptability is discussed within the context of the genetic code at the levels of replication and codon usage.
[ { "created": "Tue, 20 Apr 2004 19:21:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tan", "Taison", "" ], [ "Bogarad", "Leonard D.", "" ], [ "Deem", "Michael W.", "" ] ]
The persistence of life requires populations to adapt at a rate commensurate with the dynamics of their environment. Successful populations that inhabit highly variable environments have evolved mechanisms to increase the likelihood of successful adaptation. We introduce a $64 \times 64$ matrix to quantify base-specific mutation potential, analyzing four different replicative systems, error-prone PCR, mouse antibodies, a nematode, and Drosophila. Mutational tendencies are correlated with the structural evolution of proteins. In systems under strong selective pressure, mutational biases are shown to favor the adaptive search of space, either by base mutation or by recombination. Such adaptability is discussed within the context of the genetic code at the levels of replication and codon usage.
0811.0514
Jean-Charles Boisson
Jean-Charles Boisson (LIFL, INRIA Lille - Nord Europe), Laetitia Jourdan (LIFL, INRIA Lille - Nord Europe, INRIA Futurs), El-Ghazali Talbi (LIFL, INRIA Lille - Nord Europe, INRIA Futurs), Dragos Horvath (UGSF)
Parallel multi-objective algorithms for the molecular docking problem
Computational Intelligence in Bioinformatics and Bioengineering (CIBCB08) (2008)
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular docking is an essential tool for drug design. It helps the scientist to rapidly know if two molecules, respectively called ligand and receptor, can be combined to obtain a stable complex. We propose a new multi-objective model combining an energy term and a surface term to obtain such complexes. The aim of our model is to provide complexes with low energy and low surface. This model has been validated with two multi-objective genetic algorithms on instances from the literature dedicated to docking benchmarking.
[ { "created": "Tue, 4 Nov 2008 14:22:31 GMT", "version": "v1" } ]
2008-11-05
[ [ "Boisson", "Jean-Charles", "", "LIFL, INRIA Lille - Nord Europe" ], [ "Jourdan", "Laetitia", "", "LIFL, INRIA Lille - Nord Europe, INRIA Futurs" ], [ "Talbi", "El-Ghazali", "", "LIFL, INRIA Lille - Nord Europe, INRIA Futurs" ], [ "Horvath", "Drag...
Molecular docking is an essential tool for drug design. It helps the scientist to rapidly know if two molecules, respectively called ligand and receptor, can be combined to obtain a stable complex. We propose a new multi-objective model combining an energy term and a surface term to obtain such complexes. The aim of our model is to provide complexes with low energy and low surface. This model has been validated with two multi-objective genetic algorithms on instances from the literature dedicated to docking benchmarking.
1506.06920
Vladimir Akulin
V. M. Akulin, F. Carlier, Stanislaw Solnik, and M. L. Latash
Neural Control of Redundant (Abundant) Systems as Algorithms Stabilizing Subspaces
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of stability of motor actions implemented by the central nervous system based on simple algorithms potentially reflecting physical (including physiological) processes within the body. A number of conceptually simple algorithms that solve motor tasks with a high probability of success may be based on feedback schemes that ensure stability of subspaces of neural variables associated with accomplishing those tasks. The task is formulated in terms of linear constraints imposed either on the human body mechanical variables or on neural variables; we discuss three reference frames relevant to these processes. We discuss underlying basic principles of such algorithms, their architecture, and efficiency, and compare the outcomes of implementation of such algorithms with the results of experiments performed on the human hand.
[ { "created": "Tue, 23 Jun 2015 09:25:20 GMT", "version": "v1" } ]
2015-06-24
[ [ "Akulin", "V. M.", "" ], [ "Carlier", "F.", "" ], [ "Solnik", "Stanislaw", "" ], [ "Latash", "M. L.", "" ] ]
We address the problem of stability of motor actions implemented by the central nervous system based on simple algorithms potentially reflecting physical (including physiological) processes within the body. A number of conceptually simple algorithms that solve motor tasks with a high probability of success may be based on feedback schemes that ensure stability of subspaces of neural variables associated with accomplishing those tasks. The task is formulated in terms of linear constraints imposed either on the human body mechanical variables or on neural variables; we discuss three reference frames relevant to these processes. We discuss underlying basic principles of such algorithms, their architecture, and efficiency, and compare the outcomes of implementation of such algorithms with the results of experiments performed on the human hand.
2308.16309
EPTCS
George A. Elder (Queen Mary University of London) and Conrad Bessant (Queen Mary University of London)
Inferring Compensatory Kinase Networks in Yeast using Prolog
In Proceedings ICLP 2023, arXiv:2308.14898
EPTCS 385, 2023, pp. 260-273
10.4204/EPTCS.385.26
null
q-bio.MN cs.SC
http://creativecommons.org/licenses/by/4.0/
Signalling pathways are conserved across different species, making yeast a model organism to study these via disruption of kinase activity. Yeast has 159 genes that encode protein kinases and phosphatases, and 136 of these have counterparts in humans. Therefore any insight in this model organism could potentially offer indications of mechanisms of action in the human kinome. The study utilises a Prolog-based approach, data from a yeast kinase deletion strain study and publicly available kinase-protein associations. Prolog, a programming language that is well-suited for symbolic reasoning, is used to reason over the data and infer compensatory kinase networks. This approach is based on the idea that when a kinase is knocked out, other kinases may compensate for this loss of activity. Background knowledge on kinases targeting proteins is used to guide the analysis. This knowledge is used to infer the potential compensatory interactions between kinases based on the changes in phosphorylation observed in the phosphoproteomics data from the yeast study. The results demonstrate the effectiveness of the Prolog-based approach in analysing complex cell signalling mechanisms in yeast. The inferred compensatory kinase networks provide new insights into the regulation of cell signalling in yeast and may aid in the identification of potential therapeutic targets for modulating signalling pathways in yeast and other organisms.
[ { "created": "Wed, 30 Aug 2023 20:29:41 GMT", "version": "v1" } ]
2023-09-01
[ [ "Elder", "George A.", "", "Queen Mary University of London" ], [ "Bessant", "Conrad", "", "Queen Mary University of London" ] ]
Signalling pathways are conserved across different species, making yeast a model organism to study these via disruption of kinase activity. Yeast has 159 genes that encode protein kinases and phosphatases, and 136 of these have counterparts in humans. Therefore any insight in this model organism could potentially offer indications of mechanisms of action in the human kinome. The study utilises a Prolog-based approach, data from a yeast kinase deletion strain study and publicly available kinase-protein associations. Prolog, a programming language that is well-suited for symbolic reasoning, is used to reason over the data and infer compensatory kinase networks. This approach is based on the idea that when a kinase is knocked out, other kinases may compensate for this loss of activity. Background knowledge on kinases targeting proteins is used to guide the analysis. This knowledge is used to infer the potential compensatory interactions between kinases based on the changes in phosphorylation observed in the phosphoproteomics data from the yeast study. The results demonstrate the effectiveness of the Prolog-based approach in analysing complex cell signalling mechanisms in yeast. The inferred compensatory kinase networks provide new insights into the regulation of cell signalling in yeast and may aid in the identification of potential therapeutic targets for modulating signalling pathways in yeast and other organisms.
1703.04263
Alexander Molochkov
Alexander Molochkov, Alexander Begun, Antti Niemi
Gauge symmetries and structure of proteins
8 pages, 5 figures. Prepared for the proceedings of the XII Quark Confinement and the Hadron Spectrum, 29 August to 3 September 2016, Thessaloniki, Greece
null
10.1051/epjconf/201713704004
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss the gauge field theory approach to protein structure, which provides a natural way to introduce collective degrees of freedom and nonlinear topological structures. The local symmetry of proteins and its breaking in the medium are considered, which allows the derivation of an Abelian Higgs model of the protein backbone, whose correct folding is defined by gauge symmetry breaking due to hydrophobic forces. Within this model, the structure of the protein backbone is defined by a superposition of one-dimensional topological solitons (kinks), which allows the three-dimensional structure of the protein backbone to be reproduced with a precision of up to 1 A and its dynamics to be predicted.
[ { "created": "Mon, 13 Mar 2017 06:04:42 GMT", "version": "v1" } ]
2017-04-05
[ [ "Molochkov", "Alexander", "" ], [ "Begun", "Alexander", "" ], [ "Niemi", "Antti", "" ] ]
We discuss the gauge field theory approach to protein structure, which provides a natural way to introduce collective degrees of freedom and nonlinear topological structures. The local symmetry of proteins and its breaking in the medium are considered, which allows the derivation of an Abelian Higgs model of the protein backbone, whose correct folding is defined by gauge symmetry breaking due to hydrophobic forces. Within this model, the structure of the protein backbone is defined by a superposition of one-dimensional topological solitons (kinks), which allows the three-dimensional structure of the protein backbone to be reproduced with a precision of up to 1 A and its dynamics to be predicted.
1007.4122
Tsvi Tlusty
Tsvi Tlusty
A model for the emergence of the genetic code as a transition in a noisy information channel
Keywords: genetic code, rate-distortion theory, biological information channels
Journal of Theoretical Biology Volume 249, Issue 2, 21 November 2007, Pages 331-342
10.1016/j.jtbi.2007.07.029
null
q-bio.QM cond-mat.stat-mech cs.IT math.IT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. Some argue that the specific form of the code with its twenty amino-acids might be a 'frozen accident' because of the overwhelming effects of any further change. Others see it as a consequence of primordial biochemical pathways and their evolution. Here we examine a scenario in which evolution drives the emergence of a genetic code by selecting for an amino-acid map that minimizes the impact of errors. We treat the stochastic mapping of codons to amino-acids as a noisy information channel with a natural fitness measure. Organisms compete by the fitness of their codes and, as a result, a genetic code emerges at a supercritical transition in the noisy channel, when the mapping of codons to amino-acids becomes nonrandom. At the phase transition, a small expansion is valid and the emergent code is governed by smooth modes of the Laplacian of errors. These modes are in turn governed by the topology of the error-graph, in which codons are connected if they are likely to be confused. This topology sets an upper bound - which is related to the classical map-coloring problem - on the number of possible amino-acids. The suggested scenario is generic and may describe a mechanism for the formation of other error-prone biological codes, such as the recognition of DNA sites by proteins in the transcription regulatory network.
[ { "created": "Fri, 23 Jul 2010 13:13:37 GMT", "version": "v1" } ]
2010-07-26
[ [ "Tlusty", "Tsvi", "" ] ]
The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. Some argue that the specific form of the code with its twenty amino-acids might be a 'frozen accident' because of the overwhelming effects of any further change. Others see it as a consequence of primordial biochemical pathways and their evolution. Here we examine a scenario in which evolution drives the emergence of a genetic code by selecting for an amino-acid map that minimizes the impact of errors. We treat the stochastic mapping of codons to amino-acids as a noisy information channel with a natural fitness measure. Organisms compete by the fitness of their codes and, as a result, a genetic code emerges at a supercritical transition in the noisy channel, when the mapping of codons to amino-acids becomes nonrandom. At the phase transition, a small expansion is valid and the emergent code is governed by smooth modes of the Laplacian of errors. These modes are in turn governed by the topology of the error-graph, in which codons are connected if they are likely to be confused. This topology sets an upper bound - which is related to the classical map-coloring problem - on the number of possible amino-acids. The suggested scenario is generic and may describe a mechanism for the formation of other error-prone biological codes, such as the recognition of DNA sites by proteins in the transcription regulatory network.
1409.1943
R.K. Brojen Singh
Md. Jahoor Alam, Shazia Kunvar and R.K. Brojen Singh
The co-existence of states in p53 dynamics driven by miRNA
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The regulating mechanism of miRNA on p53 dynamics in a p53-MDM2-miRNA model network incorporating reactive oxygen species (ROS) is studied. The study shows that miRNA drives p53 dynamics to various states, namely, stabilized states and oscillating states (damped and sustained oscillations). We found the co-existence of these states within a certain range of the concentration level of miRNA in the system. This co-existence in p53 dynamics is the signature of the system's survival at various states, normal, activated and apoptosis, driven by a constant concentration of miRNA.
[ { "created": "Mon, 1 Sep 2014 11:39:42 GMT", "version": "v1" } ]
2014-09-09
[ [ "Alam", "Md. Jahoor", "" ], [ "Kunvar", "Shazia", "" ], [ "Singh", "R. K. Brojen", "" ] ]
The regulating mechanism of miRNA on p53 dynamics in a p53-MDM2-miRNA model network incorporating reactive oxygen species (ROS) is studied. The study shows that miRNA drives p53 dynamics to various states, namely, stabilized states and oscillating states (damped and sustained oscillations). We found the co-existence of these states within a certain range of the concentration level of miRNA in the system. This co-existence in p53 dynamics is the signature of the system's survival at various states, normal, activated and apoptosis, driven by a constant concentration of miRNA.
1410.6752
Pouyan R. Fard
Pouyan R. Fard, Moritz Grosse-Wentrup
The Influence of Decoding Accuracy on Perceived Control: A Simulated BCI Study
null
null
null
null
q-bio.NC cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the relationship between the decoding accuracy of a brain-computer interface (BCI) and a subject's subjective feeling of control is important for determining a lower limit on decoding accuracy for a BCI that is to be deployed outside a laboratory environment. We investigated this relationship by systematically varying the level of control in a simulated BCI task. We find that a binary decoding accuracy of 65% is required for users to report more often than not that they are feeling in control of the system. Decoding accuracies above 75%, on the other hand, added little in terms of the level of perceived control. We further find that the probability of perceived control does not only depend on the actual decoding accuracy, but is also influenced by whether subjects successfully complete the given task in the allotted time frame.
[ { "created": "Fri, 24 Oct 2014 17:51:12 GMT", "version": "v1" } ]
2014-10-27
[ [ "Fard", "Pouyan R.", "" ], [ "Grosse-Wentrup", "Moritz", "" ] ]
Understanding the relationship between the decoding accuracy of a brain-computer interface (BCI) and a subject's subjective feeling of control is important for determining a lower limit on decoding accuracy for a BCI that is to be deployed outside a laboratory environment. We investigated this relationship by systematically varying the level of control in a simulated BCI task. We find that a binary decoding accuracy of 65% is required for users to report more often than not that they are feeling in control of the system. Decoding accuracies above 75%, on the other hand, added little in terms of the level of perceived control. We further find that the probability of perceived control does not only depend on the actual decoding accuracy, but is also influenced by whether subjects successfully complete the given task in the allotted time frame.
1603.00442
Baltazar Espinoza
Victor Moreno, Baltazar Espinoza, Derdei Bichara, Susan A. Holechek and Carlos Castillo-Chavez
Role of short-term dispersal on the dynamics of Zika virus
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In November 2015, El Salvador reported their first case of Zika virus (Zv) leading to an explosive outbreak that in just two months had over 6000 suspected cases. Many communities along with national agencies initiated the process to implement control measures that ranged from vector control and the use of repellents to the suggestion of avoiding pregnancies for two years, the latter one, in response to the growing number of microcephaly cases in Brazil. In our study, we explore the impact of short term mobility between two idealized interconnected communities where disparities and violence contribute to the Zv epidemic. Using a Lagrangian modeling approach in a two-patch setting, it is shown via simulations that short term mobility may be beneficial in the control of a Zv outbreak when risk is relatively low and patch disparities are not too extreme. However, when the reproductive number is too high, there seems to be no benefits. This paper is dedicated to the inauguration of the Centro de Modelamiento Matem\'{a}tico Carlos Castillo-Ch\'{a}vez at Universidad Francisco Gavidia in San Salvador, El Salvador.
[ { "created": "Tue, 1 Mar 2016 20:26:06 GMT", "version": "v1" }, { "created": "Fri, 11 Mar 2016 20:07:24 GMT", "version": "v2" }, { "created": "Wed, 16 Mar 2016 20:31:09 GMT", "version": "v3" } ]
2016-03-18
[ [ "Moreno", "Victor", "" ], [ "Espinoza", "Baltazar", "" ], [ "Bichara", "Derdei", "" ], [ "Holechek", "Susan A.", "" ], [ "Castillo-Chavez", "Carlos", "" ] ]
In November 2015, El Salvador reported their first case of Zika virus (Zv) leading to an explosive outbreak that in just two months had over 6000 suspected cases. Many communities along with national agencies initiated the process to implement control measures that ranged from vector control and the use of repellents to the suggestion of avoiding pregnancies for two years, the latter one, in response to the growing number of microcephaly cases in Brazil. In our study, we explore the impact of short term mobility between two idealized interconnected communities where disparities and violence contribute to the Zv epidemic. Using a Lagrangian modeling approach in a two-patch setting, it is shown via simulations that short term mobility may be beneficial in the control of a Zv outbreak when risk is relatively low and patch disparities are not too extreme. However, when the reproductive number is too high, there seems to be no benefits. This paper is dedicated to the inauguration of the Centro de Modelamiento Matem\'{a}tico Carlos Castillo-Ch\'{a}vez at Universidad Francisco Gavidia in San Salvador, El Salvador.
0708.2707
Peter Csermely
Shijun Wang, Mate S. Szalay, Changshui Zhang, Peter Csermely
Learning and innovative elements of strategy adoption rules expand cooperative network topologies
14 pages, 3 Figures + a Supplementary Material with 25 pages, 3 Tables, 12 Figures and 116 references
PLoS ONE 3, e1917 (2008)
10.1371/journal.pone.0001917
null
q-bio.MN cond-mat.dis-nn nlin.AO physics.bio-ph
null
Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation extensively varies with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable when applying the reinforcement learning strategy adoption rule, Q-learning, on a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that using the above model systems other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) to the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing, complex systems.
[ { "created": "Mon, 20 Aug 2007 16:56:46 GMT", "version": "v1" }, { "created": "Mon, 14 Apr 2008 12:32:08 GMT", "version": "v2" } ]
2012-03-01
[ [ "Wang", "Shijun", "" ], [ "Szalay", "Mate S.", "" ], [ "Zhang", "Changshui", "" ], [ "Csermely", "Peter", "" ] ]
Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation extensively varies with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable when applying the reinforcement learning strategy adoption rule, Q-learning, on a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that using the above model systems other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) to the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing, complex systems.
2001.05548
Chen Liu
Nanyan Zhu, Chen Liu, Zakary S. Singer, Tal Danino, Andrew F. Laine, Jia Guo
Segmentation with Residual Attention U-Net and an Edge-Enhancement Approach Preserves Cell Shape Features
7 pages, 4 figures, 1 table. Nanyan Zhu and Chen Liu share equal contribution and are listed as co-first authors
null
null
null
q-bio.QM cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to extrapolate gene expression dynamics in living single cells requires robust cell segmentation, and one of the challenges is the amorphous or irregularly shaped cell boundaries. To address this issue, we modified the U-Net architecture to segment cells in fluorescence widefield microscopy images and quantitatively evaluated its performance. We also proposed a novel loss function approach that emphasizes the segmentation accuracy on cell boundaries and encourages shape feature preservation. With a 97% sensitivity, 93% specificity, 91% Jaccard similarity, and 95% Dice coefficient, our proposed method called Residual Attention U-Net with edge-enhancement surpassed the state-of-the-art U-Net in segmentation performance as evaluated by the traditional metrics. More remarkably, the same proposed candidate also performed the best in terms of the preservation of valuable shape features, namely area, eccentricity, major axis length, solidity and orientation. These improvements on shape feature preservation can serve as useful assets for downstream cell tracking and quantification of changes in cell statistics or features over time.
[ { "created": "Wed, 15 Jan 2020 20:44:39 GMT", "version": "v1" } ]
2020-01-17
[ [ "Zhu", "Nanyan", "" ], [ "Liu", "Chen", "" ], [ "Singer", "Zakary S.", "" ], [ "Danino", "Tal", "" ], [ "Laine", "Andrew F.", "" ], [ "Guo", "Jia", "" ] ]
The ability to extrapolate gene expression dynamics in living single cells requires robust cell segmentation, and one of the challenges is the amorphous or irregularly shaped cell boundaries. To address this issue, we modified the U-Net architecture to segment cells in fluorescence widefield microscopy images and quantitatively evaluated its performance. We also proposed a novel loss function approach that emphasizes the segmentation accuracy on cell boundaries and encourages shape feature preservation. With a 97% sensitivity, 93% specificity, 91% Jaccard similarity, and 95% Dice coefficient, our proposed method called Residual Attention U-Net with edge-enhancement surpassed the state-of-the-art U-Net in segmentation performance as evaluated by the traditional metrics. More remarkably, the same proposed candidate also performed the best in terms of the preservation of valuable shape features, namely area, eccentricity, major axis length, solidity and orientation. These improvements on shape feature preservation can serve as useful assets for downstream cell tracking and quantification of changes in cell statistics or features over time.
2011.06962
Sophie Beraud-Dufour
Thierry Coppola (IPMC), Sophie B\'eraud-Dufour (IPMC), Patricia Lebrun (UNSA), Nicolas Blondeau (IPMC)
Bridging the Gap Between Diabetes and Stroke in Search of High Clinical Relevance Therapeutic Targets
NeuroMolecular Medicine, Humana Press, 2019
null
10.1007/s12017-019-08563-5
null
q-bio.TO q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diabetes affects more than 425 million people worldwide, a scale approaching pandemic proportion. Diabetes represents a major risk factor for stroke, and therefore is actively addressed for stroke prevention. However, how diabetes affects stroke severity has not yet been extensively considered, which is surprising given the evident but understudied common mechanistic features of both pathologies. The increase in the number of diabetic people, in the incidence of stroke in the presence of this specific risk factor, and the exacerbation of ischemic brain damage in diabetic conditions (at least in animal models) warrant the need to integrate this comorbidity in pre-clinical studies of brain ischemia to develop novel therapeutic approaches. Therefore, a better understanding of the commonalities involved in the course of both diseases would offer the promise of discovering novel neuroprotective pathways that would be more appropriate to clinical situations. In this article, we will review the relevant mechanisms that have been identified as common traits of both pathologies and that could be, to our knowledge, potential targets for both pathologies.
[ { "created": "Fri, 13 Nov 2020 15:19:22 GMT", "version": "v1" } ]
2020-11-16
[ [ "Coppola", "Thierry", "", "IPMC" ], [ "Béraud-Dufour", "Sophie", "", "IPMC" ], [ "Lebrun", "Patricia", "", "UNSA" ], [ "Blondeau", "Nicolas", "", "IPMC" ] ]
Diabetes affects more than 425 million people worldwide, a scale approaching pandemic proportion. Diabetes represents a major risk factor for stroke, and therefore is actively addressed for stroke prevention. However, how diabetes affects stroke severity has not yet been extensively considered, which is surprising given the evident but understudied common mechanistic features of both pathologies. The increase in the number of diabetic people, in the incidence of stroke in the presence of this specific risk factor, and the exacerbation of ischemic brain damage in diabetic conditions (at least in animal models) warrant the need to integrate this comorbidity in pre-clinical studies of brain ischemia to develop novel therapeutic approaches. Therefore, a better understanding of the commonalities involved in the course of both diseases would offer the promise of discovering novel neuroprotective pathways that would be more appropriate to clinical situations. In this article, we will review the relevant mechanisms that have been identified as common traits of both pathologies and that could be, to our knowledge, potential targets for both pathologies.
1510.05234
Fr\'ed\'eric Pro\"ia
Fr\'ed\'eric Pro\"ia, Alix Pernet, Tatiana Thouroude, Gilles Michel, J\'er\'emy Clotault
On the characterization of flowering curves using Gaussian mixture models
28 pages, 27 figures
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop a statistical methodology applied to the characterization of flowering curves using Gaussian mixture models. Our study relies on a set of rosebushes flowering data, and Gaussian mixture models are mainly used to quantify the reblooming properties of each one. In this regard, we also suggest our own selection criterion to take into account the lack of symmetry of most of the flowering curves. Three classes are created on the basis of a principal component analysis conducted on a set of reblooming indicators, and a subclassification is made using a longitudinal $k$--means algorithm which also highlights the role played by the precocity of the flowering. In this way, we obtain an overview of the correlations between the features we decided to retain on each curve. In particular, results suggest the lack of correlation between reblooming and flowering precocity. The pertinent indicators obtained in this study will be a first step towards the comprehension of the environmental and genetic control of these biological processes.
[ { "created": "Sun, 18 Oct 2015 12:40:18 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2016 19:16:17 GMT", "version": "v2" } ]
2016-04-19
[ [ "Proïa", "Frédéric", "" ], [ "Pernet", "Alix", "" ], [ "Thouroude", "Tatiana", "" ], [ "Michel", "Gilles", "" ], [ "Clotault", "Jérémy", "" ] ]
In this paper, we develop a statistical methodology for the characterization of flowering curves using Gaussian mixture models. Our study relies on a set of rosebush flowering data, and Gaussian mixture models are mainly used to quantify the reblooming properties of each one. In this regard, we also suggest our own selection criterion to take into account the lack of symmetry of most of the flowering curves. Three classes are created on the basis of a principal component analysis conducted on a set of reblooming indicators, and a subclassification is made using a longitudinal $k$-means algorithm, which also highlights the role played by the precocity of the flowering. In this way, we obtain an overview of the correlations between the features we decided to retain for each curve. In particular, the results suggest a lack of correlation between reblooming and flowering precocity. The pertinent indicators obtained in this study are a first step towards understanding the environmental and genetic control of these biological processes.
1712.06151
M. Reza Shaebani
Bjorn Becker, M. Reza Shaebani, Domenik Rammo, Tobias Bubel, Ludger Santen, Manfred J. Schmitt
Cargo binding promotes KDEL receptor clustering at the mammalian cell surface
11 pages, 5 figures
Sci. Rep. 6, 28940, 2016
10.1038/srep28940
null
q-bio.SC cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transmembrane receptor clustering is a ubiquitous phenomenon in pro- and eukaryotic cells to physically sense receptor/ligand interactions and subsequently translate an exogenous signal into a cellular response. Despite that receptor cluster formation has been described for a wide variety of receptors, ranging from chemotactic receptors in bacteria to growth factor and neurotransmitter receptors in mammalian cells, a mechanistic understanding of the underlying molecular processes is still puzzling. In an attempt to fill this gap we followed a combined experimental and theoretical approach by dissecting and modulating cargo binding, internalization and cellular response mediated by KDEL receptors (KDELRs) at the mammalian cell surface after interaction with a model cargo/ligand. Using a fluorescent variant of ricin toxin A chain as KDELR-ligand, eGFP-RTA (H/KDEL), we demonstrate that cargo binding induces dose-dependent receptor cluster formation at and subsequent internalization from the membrane which is associated and counteracted by anterograde and microtubule-assisted receptor transport to preferred docking sites at the plasma membrane. By means of analytical arguments and extensive numerical simulations we show that cargo-synchronized receptor transport from and to the membrane is causative for KDELR/cargo cluster formation at the mammalian cell surface.
[ { "created": "Sun, 17 Dec 2017 17:43:15 GMT", "version": "v1" } ]
2017-12-19
[ [ "Becker", "Bjorn", "" ], [ "Shaebani", "M. Reza", "" ], [ "Rammo", "Domenik", "" ], [ "Bubel", "Tobias", "" ], [ "Santen", "Ludger", "" ], [ "Schmitt", "Manfred J.", "" ] ]
Transmembrane receptor clustering is a ubiquitous phenomenon in pro- and eukaryotic cells that physically senses receptor/ligand interactions and subsequently translates an exogenous signal into a cellular response. Although receptor cluster formation has been described for a wide variety of receptors, ranging from chemotactic receptors in bacteria to growth factor and neurotransmitter receptors in mammalian cells, a mechanistic understanding of the underlying molecular processes is still lacking. In an attempt to fill this gap, we followed a combined experimental and theoretical approach by dissecting and modulating cargo binding, internalization and cellular response mediated by KDEL receptors (KDELRs) at the mammalian cell surface after interaction with a model cargo/ligand. Using a fluorescent variant of ricin toxin A chain as KDELR ligand, eGFP-RTA (H/KDEL), we demonstrate that cargo binding induces dose-dependent receptor cluster formation at, and subsequent internalization from, the membrane, which is associated with and counteracted by anterograde and microtubule-assisted receptor transport to preferred docking sites at the plasma membrane. By means of analytical arguments and extensive numerical simulations, we show that cargo-synchronized receptor transport from and to the membrane is causative for KDELR/cargo cluster formation at the mammalian cell surface.
1407.3895
Kun Gao Ph.D
Kun Gao and Jonathan Miller
Human-chimpanzee alignment: Ortholog Exponentials and Paralog Power Laws
Main text: 31 pages, 13 figures, 1 table; Supplementary materials: 9 pages, 9 figures, 1 table
null
10.1016/j.compbiolchem.2014.08.010
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomic subsequences conserved between closely related species such as human and chimpanzee exhibit an exponential length distribution, in contrast to the algebraic length distribution observed for sequences shared between distantly related genomes. We find that the former exponential can be further decomposed into an exponential component primarily composed of orthologous sequences, and a truncated algebraic component primarily composed of paralogous sequences.
[ { "created": "Tue, 15 Jul 2014 06:54:43 GMT", "version": "v1" }, { "created": "Thu, 28 Aug 2014 02:03:44 GMT", "version": "v2" } ]
2014-08-29
[ [ "Gao", "Kun", "" ], [ "Miller", "Jonathan", "" ] ]
Genomic subsequences conserved between closely related species such as human and chimpanzee exhibit an exponential length distribution, in contrast to the algebraic length distribution observed for sequences shared between distantly related genomes. We find that the former exponential can be further decomposed into an exponential component primarily composed of orthologous sequences, and a truncated algebraic component primarily composed of paralogous sequences.
2205.14544
Ran Rubin
Ran Rubin and Haim Sompolinsky
Temporal support vectors for spiking neuronal networks
null
null
null
null
q-bio.NC cond-mat.dis-nn stat.ML
http://creativecommons.org/licenses/by/4.0/
When neural circuits learn to perform a task, it is often the case that there are many sets of synaptic connections that are consistent with the task. However, only a small number of possible solutions are robust to noise in the input and are capable of generalizing their performance of the task to new inputs. Finding such good solutions is an important goal of learning systems in general and neuronal circuits in particular. For systems operating with static inputs and outputs, a well known approach to the problem is the large margin methods such as Support Vector Machines (SVM). By maximizing the distance of the data vectors from the decision surface, these solutions enjoy increased robustness to noise and enhanced generalization abilities. Furthermore, the use of the kernel method enables SVMs to perform classification tasks that require nonlinear decision surfaces. However, for dynamical systems with event based outputs, such as spiking neural networks and other continuous time threshold crossing systems, this optimality criterion is inapplicable due to the strong temporal correlations in their input and output. We introduce a novel extension of the static SVMs - The Temporal Support Vector Machine (T-SVM). The T-SVM finds a solution that maximizes a new construct - the dynamical margin. We show that T-SVM and its kernel extensions generate robust synaptic weight vectors in spiking neurons and enable their learning of tasks that require nonlinear spatial integration of synaptic inputs. We propose T-SVM with nonlinear kernels as a new model of the computational role of the nonlinearities and extensive morphologies of neuronal dendritic trees.
[ { "created": "Sat, 28 May 2022 23:47:15 GMT", "version": "v1" } ]
2022-05-31
[ [ "Rubin", "Ran", "" ], [ "Sompolinsky", "Haim", "" ] ]
When neural circuits learn to perform a task, there are often many sets of synaptic connections that are consistent with the task. However, only a small number of possible solutions are robust to noise in the input and are capable of generalizing their performance of the task to new inputs. Finding such good solutions is an important goal of learning systems in general and of neuronal circuits in particular. For systems operating with static inputs and outputs, a well-known approach to the problem is the class of large-margin methods such as Support Vector Machines (SVMs). By maximizing the distance of the data vectors from the decision surface, these solutions enjoy increased robustness to noise and enhanced generalization abilities. Furthermore, the use of the kernel method enables SVMs to perform classification tasks that require nonlinear decision surfaces. However, for dynamical systems with event-based outputs, such as spiking neural networks and other continuous-time threshold-crossing systems, this optimality criterion is inapplicable due to the strong temporal correlations in their input and output. We introduce a novel extension of the static SVM: the Temporal Support Vector Machine (T-SVM). The T-SVM finds a solution that maximizes a new construct, the dynamical margin. We show that the T-SVM and its kernel extensions generate robust synaptic weight vectors in spiking neurons and enable their learning of tasks that require nonlinear spatial integration of synaptic inputs. We propose the T-SVM with nonlinear kernels as a new model of the computational role of the nonlinearities and extensive morphologies of neuronal dendritic trees.
q-bio/0406041
Tini Garske
Tini Garske and Uwe Grimm
Maximum principle and mutation thresholds for four-letter sequence evolution
25 pages, 16 figures
JSTAT (2004) P07007
10.1088/1742-5468/2004/07/P07007
null
q-bio.PE
null
A four-state mutation-selection model for the evolution of populations of DNA-sequences is investigated with particular interest in the phenomenon of error thresholds. The mutation model considered is the Kimura 3ST mutation scheme, fitness functions, which determine the selection process, come from the permutation-invariant class. Error thresholds can be found for various fitness functions, the phase diagrams are more interesting than for equivalent two-state models. Results for (small) finite sequence lengths are compared with those for infinite sequence length, obtained via a maximum principle that is equivalent to the principle of minimal free energy in physics.
[ { "created": "Mon, 21 Jun 2004 15:04:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Garske", "Tini", "" ], [ "Grimm", "Uwe", "" ] ]
A four-state mutation-selection model for the evolution of populations of DNA sequences is investigated, with particular interest in the phenomenon of error thresholds. The mutation model considered is the Kimura 3ST mutation scheme; the fitness functions, which determine the selection process, come from the permutation-invariant class. Error thresholds can be found for various fitness functions, and the phase diagrams are more interesting than those of equivalent two-state models. Results for (small) finite sequence lengths are compared with those for infinite sequence length, obtained via a maximum principle that is equivalent to the principle of minimal free energy in physics.
1403.4178
Jan Karbowski
Jan Karbowski
Constancy and trade-offs in the neuroanatomical and metabolic design of the cerebral cortex
Review article on invariants in the brain of mammals
Front. Neural Circuits 8: 9 (2014)
10.3389/fncir.2014.00009
null
q-bio.NC q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mammalian brains span about 4 orders of magnitude in cortical volume and have to operate in different environments that require diverse behavioral skills. Despite these geometric and behavioral diversities, the examination of cerebral cortex across species reveals that it contains a substantial number of conserved characteristics that are associated with neuroanatomy and metabolism, i.e. with neuronal connectivity and function. Some of these cortical constants or invariants have been known for a long time but not sufficiently appreciated, and others were only recently discovered. The focus of this review is to present the cortical invariants and discuss their role in the efficient information processing. Global conservation in neuroanatomy and metabolism, as well as their correlated regional and developmental variability suggest that these two parallel systems are mutually coupled. It is argued that energetic constraint on cortical organization can be strong if cerebral blood supplied is either below or above a certain level, and it is rather soft otherwise. Moreover, because maximization or minimization of parameters associated with cortical connectivity, function and cost often leads to conflicts in design, it is argued that the architecture of the cerebral cortex is a result of structural and functional compromises.
[ { "created": "Mon, 17 Mar 2014 17:25:35 GMT", "version": "v1" } ]
2014-05-19
[ [ "Karbowski", "Jan", "" ] ]
Mammalian brains span about four orders of magnitude in cortical volume and have to operate in different environments that require diverse behavioral skills. Despite these geometric and behavioral diversities, the examination of the cerebral cortex across species reveals that it contains a substantial number of conserved characteristics that are associated with neuroanatomy and metabolism, i.e., with neuronal connectivity and function. Some of these cortical constants or invariants have been known for a long time but not sufficiently appreciated, and others were only recently discovered. The focus of this review is to present the cortical invariants and discuss their role in efficient information processing. Global conservation in neuroanatomy and metabolism, as well as their correlated regional and developmental variability, suggests that these two parallel systems are mutually coupled. It is argued that the energetic constraint on cortical organization can be strong if the cerebral blood supply is either below or above a certain level, and is rather soft otherwise. Moreover, because maximization or minimization of parameters associated with cortical connectivity, function and cost often leads to conflicts in design, it is argued that the architecture of the cerebral cortex is a result of structural and functional compromises.
1912.02901
Yufen Chen
Yufen Chen, Amy A Herrold, Zoran Martinovich, Anne J Blood, Nicole Vike, Alexa E Walter, Jaroslaw Harezlak, Peter H Seidenberg, Manish Bhomia, Barbara Knollmann-Ritschel, James L Reilly, Eric A Nauman, Thomas M Talavage, Linda Papa, Semyon Slobounov, Hans C Breiter (for the Concussion Neuroimaging Consortium)
Brain perfusion mediates the relationship between miRNA levels and postural control
32 pages, 2 figures, 5 tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcriptomics, regional cerebral blood flow (rCBF), and a spatial motor virtual reality task were integrated using mediation analysis in a novel demonstration of "imaging omics". Data collected in NCAA Division I football athletes cleared for play before in-season training showed significant relationships in a) elevated levels of miR-30d and miR-92a to elevated putamen rCBF, (b) elevated putamen rCBF to compromised balance scores, and (c) compromised balance scores to elevated miRNA levels. rCBF acted as a mediator variable (minimum 70% mediation, significant Sobel's test) between abnormal miRNA levels and compromised balance scores. Given the involvement of these miRNAs in inflammation and immune function, and that vascular perfusion is a component of the inflammatory response, these findings support a chronic inflammatory model of repetitive head acceleration events (HAEs). rCBF, a systems biology measure, was necessary for miRNA to affect behavior. These results suggest miRNA as a potential diagnostic biomarker for repetitive HAEs.
[ { "created": "Thu, 5 Dec 2019 22:16:54 GMT", "version": "v1" } ]
2019-12-09
[ [ "Chen", "Yufen", "", "for the Concussion Neuroimaging\n Consortium" ], [ "Herrold", "Amy A", "", "for the Concussion Neuroimaging\n Consortium" ], [ "Martinovich", "Zoran", "", "for the Concussion Neuroimaging\n Consortium" ], [ "Blood", "Anne...
Transcriptomics, regional cerebral blood flow (rCBF), and a spatial motor virtual reality task were integrated using mediation analysis in a novel demonstration of "imaging omics". Data collected in NCAA Division I football athletes cleared for play before in-season training showed significant relationships in (a) elevated levels of miR-30d and miR-92a to elevated putamen rCBF, (b) elevated putamen rCBF to compromised balance scores, and (c) compromised balance scores to elevated miRNA levels. rCBF acted as a mediator variable (minimum 70% mediation, significant Sobel's test) between abnormal miRNA levels and compromised balance scores. Given the involvement of these miRNAs in inflammation and immune function, and that vascular perfusion is a component of the inflammatory response, these findings support a chronic inflammatory model of repetitive head acceleration events (HAEs). rCBF, a systems biology measure, was necessary for miRNA to affect behavior. These results suggest miRNA as a potential diagnostic biomarker for repetitive HAEs.
1503.04992
Xiaoyan Ma
Xiaoyan Ma, Daphne Ezer, Carmen Navarro and Boris Adryan
Reliable scaling of Position Weight Matrices for binding strength comparisons between transcription factors
18 pages, 5 figures and another 6 supplementary figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scoring DNA sequences against Position Weight Matrices (PWMs) is a widely adopted method to identify putative transcription factor binding sites. While common bioinformatics tools produce scores that can reflect the binding strength between a specific transcription factor and the DNA, these scores are not directly comparable between different transcription factors. Here, we provide two different ways to find the scaling parameter $\lambda$ that allows us to infer binding energy from a PWM score. The first approach uses a PWM and background genomic sequence as input to estimate $\lambda$ for a specific transcription factor, which we applied to show that $\lambda$ distributions for different transcription factor families correspond with their DNA binding properties. Our second method can reliably convert $\lambda$ between different PWMs of the same transcription factor, which allows us to directly compare PWMs that were generated by different approaches. These two approaches provide consistent and computationally efficient ways to scale PWMs scores and estimate transcription factor binding sites strength.
[ { "created": "Tue, 17 Mar 2015 10:56:23 GMT", "version": "v1" } ]
2015-03-18
[ [ "Ma", "Xiaoyan", "" ], [ "Ezer", "Daphne", "" ], [ "Navarro", "Carmen", "" ], [ "Adryan", "Boris", "" ] ]
Scoring DNA sequences against Position Weight Matrices (PWMs) is a widely adopted method to identify putative transcription factor binding sites. While common bioinformatics tools produce scores that can reflect the binding strength between a specific transcription factor and the DNA, these scores are not directly comparable between different transcription factors. Here, we provide two different ways to find the scaling parameter $\lambda$ that allows us to infer binding energy from a PWM score. The first approach uses a PWM and background genomic sequence as input to estimate $\lambda$ for a specific transcription factor, which we applied to show that $\lambda$ distributions for different transcription factor families correspond to their DNA binding properties. Our second method can reliably convert $\lambda$ between different PWMs of the same transcription factor, which allows us to directly compare PWMs that were generated by different approaches. These two approaches provide consistent and computationally efficient ways to scale PWM scores and estimate transcription factor binding site strength.
2203.00999
Vikram Singh
Vikram Singh and Vikram Singh
DeepAutoPIN: An automorphism orbits based deep neural network for characterizing the organizational diversity of protein interactomes across the tree of life
29 pages, 4 figures, 1 algorithm, 2 supplementary files
null
null
null
q-bio.MN cs.AI q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The enormous diversity of life forms thriving in drastically different environmental milieus involves a complex interplay among constituent proteins interacting with each other. However, the organizational principles characterizing the evolution of protein interaction networks (PINs) across the tree of life are largely unknown. Here we study 4,738 PINs belonging to 16 phyla to discover phyla-specific architectural features and examine if there are some evolutionary constraints imposed on the networks' topologies. We utilized positional information of a network's nodes by normalizing the frequencies of automorphism orbits appearing in graphlets of sizes 2-5. We report that orbit usage profiles (OUPs) of networks belonging to the three domains of life are contrastingly different not only at the domain level but also at the scale of phyla. Integrating the information related to protein families, domains, subcellular location, gene ontology, and pathways, our results indicate that wiring patterns of PINs in different phyla are not randomly generated rather they are shaped by evolutionary constraints imposed on them. There exist subtle but substantial variations in the wiring patterns of PINs that enable OUPs to differentiate among different superfamilies. A deep neural network was trained on differentially expressed orbits resulting in a prediction accuracy of 85%.
[ { "created": "Wed, 2 Mar 2022 10:10:20 GMT", "version": "v1" }, { "created": "Mon, 29 Jan 2024 05:34:37 GMT", "version": "v2" } ]
2024-01-30
[ [ "Singh", "Vikram", "" ], [ "Singh", "Vikram", "" ] ]
The enormous diversity of life forms thriving in drastically different environmental milieus involves a complex interplay among constituent proteins interacting with each other. However, the organizational principles characterizing the evolution of protein interaction networks (PINs) across the tree of life are largely unknown. Here we study 4,738 PINs belonging to 16 phyla to discover phyla-specific architectural features and examine whether evolutionary constraints are imposed on the networks' topologies. We utilized the positional information of a network's nodes by normalizing the frequencies of automorphism orbits appearing in graphlets of sizes 2-5. We report that the orbit usage profiles (OUPs) of networks belonging to the three domains of life are contrastingly different, not only at the domain level but also at the scale of phyla. Integrating information related to protein families, domains, subcellular location, gene ontology, and pathways, our results indicate that the wiring patterns of PINs in different phyla are not randomly generated; rather, they are shaped by evolutionary constraints imposed on them. There exist subtle but substantial variations in the wiring patterns of PINs that enable OUPs to differentiate among different superfamilies. A deep neural network was trained on differentially expressed orbits, resulting in a prediction accuracy of 85%.
1108.4286
Alexandre Kabla
Alexandre J Kabla
Collective Cell Migration: Leadership, Invasion and Segregation
null
null
null
null
q-bio.CB physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A number of biological processes, such as embryo development, cancer metastasis or wound healing, rely on cells moving in concert. The mechanisms leading to the emergence of coordinated motion remain however largely unexplored. Although biomolecular signalling is known to be involved in most occurrences of collective migration, the role of physical and mechanical interactions has only been recently investigated. In this paper, a versatile framework for cell motility is implemented in-silico in order to study the minimal requirements for the coordination of a group of epithelial cells. We find that cell motility and cell-cell mechanical interactions are sufficient to generate a broad array of behaviours commonly observed in vitro and in vivo. Cell streaming, sheet migration and susceptibility to leader cells are examples of behaviours spontaneously emerging from these simple assumptions, which might explain why collective effects are so ubiquitous in nature. This analysis provides also new insights into cancer metastasis and cell sorting, suggesting in particular that collective invasion might result from an emerging coordination in a system where single cells are mechanically unable to invade.
[ { "created": "Mon, 22 Aug 2011 12:14:23 GMT", "version": "v1" } ]
2011-08-23
[ [ "Kabla", "Alexandre J", "" ] ]
A number of biological processes, such as embryo development, cancer metastasis or wound healing, rely on cells moving in concert. The mechanisms leading to the emergence of coordinated motion remain, however, largely unexplored. Although biomolecular signalling is known to be involved in most occurrences of collective migration, the role of physical and mechanical interactions has only recently been investigated. In this paper, a versatile framework for cell motility is implemented in silico in order to study the minimal requirements for the coordination of a group of epithelial cells. We find that cell motility and cell-cell mechanical interactions are sufficient to generate a broad array of behaviours commonly observed in vitro and in vivo. Cell streaming, sheet migration and susceptibility to leader cells are examples of behaviours spontaneously emerging from these simple assumptions, which might explain why collective effects are so ubiquitous in nature. This analysis also provides new insights into cancer metastasis and cell sorting, suggesting in particular that collective invasion might result from an emerging coordination in a system where single cells are mechanically unable to invade.
2305.19667
Christof Bertram
Christof A. Bertram, Taryn A. Donovan, Alexander Bartel
Systematic Review of Methods and Prognostic Value of Mitotic Activity. Part 2: Canine Tumors
null
Veterinary Pathology. 2024;0(0)
10.1177/03009858241239565
null
q-bio.SC q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
One of the most relevant prognostication tests for tumors is cellular proliferation, which is most commonly measured by the mitotic activity in routine tumor sections. The goal of this systematic review is to scholarly analyze the methods and prognostic relevance of histologically measuring mitotic activity in canine tumors. A total of 137 articles that correlated the mitotic activity in canine tumors with patient outcome were identified through a systematic (PubMed and Scopus) and manual (Google Scholar) literature search and eligibility screening process. These studies determined the mitotic count (MC, number of mitotic figures per tumor area) in 126 instances, presumably the mitotic count (method not specified) in 6 instances and the mitotic index (MI, proportion of mitotic figures per tumor cells) in 5 instances. A particularly high risk of bias was identified in the available details of the MC methods and statistical analysis, which often did not quantify the prognostic discriminative ability of the MC and only reported p-values. A significant association of the MC with survival was found in 72/109 (66%) studies. However, survival was evaluated by at least three studies in only 7 tumor types/groups, of which a prognostic relevance is apparent for mast cell tumors of the skin, cutaneous melanoma and soft tissue sarcoma of the skin. None of the studies on the MI found a prognostic relevance. Further studies on the MC and MI with standardized methods are needed to prove the prognostic benefit of this test for further tumor types.
[ { "created": "Wed, 31 May 2023 09:07:04 GMT", "version": "v1" } ]
2024-04-17
[ [ "Bertram", "Christof A.", "" ], [ "Donovan", "Taryn A.", "" ], [ "Bartel", "Alexander", "" ] ]
One of the most relevant prognostication tests for tumors is cellular proliferation, which is most commonly measured by the mitotic activity in routine tumor sections. The goal of this systematic review is to critically analyze the methods and prognostic relevance of histologically measuring mitotic activity in canine tumors. A total of 137 articles that correlated the mitotic activity in canine tumors with patient outcome were identified through a systematic (PubMed and Scopus) and manual (Google Scholar) literature search and eligibility screening process. These studies determined the mitotic count (MC, number of mitotic figures per tumor area) in 126 instances, presumably the mitotic count (method not specified) in 6 instances, and the mitotic index (MI, proportion of mitotic figures per tumor cells) in 5 instances. A particularly high risk of bias was identified in the available details of the MC methods and statistical analysis, which often did not quantify the prognostic discriminative ability of the MC and only reported p-values. A significant association of the MC with survival was found in 72/109 (66%) studies. However, survival was evaluated by at least three studies in only 7 tumor types/groups, of which a prognostic relevance is apparent for mast cell tumors of the skin, cutaneous melanoma and soft tissue sarcoma of the skin. None of the studies on the MI found a prognostic relevance. Further studies on the MC and MI with standardized methods are needed to prove the prognostic benefit of this test for further tumor types.
1903.04512
Zhanshan (Sam) Ma
Zhanshan (Sam) Ma, Lianwei Li, Ya-Ping Zhang
Individual-Level SNP Diversity and Similarity Profiles
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classic concepts of genetic (gene) diversity (heterozygosity) such as Nei (1973: PNAS) and Nei and Li (1979: PNAS) nucleotide diversity were defined within the context of populations. Although variations are often measured in population context, the basic carriers of variation are individuals. Hence, measuring variations such as SNP of individual against a reference genome, which has been ignored currently, is certainly of its own right. Indeed, similar practice has been a tradition in ecology, where the basic framework of diversity measure is individual community sample. We propose to use Renyi-entropy-derived Hill numbers to define SNP (single nucleotide polymorphism) diversity (including alpha-, beta-, and gamma-diversities) and similarity profiles. Hill numbers are derived from Renyi entropy, of which Shannon entropy is a special case and which have found widely applications including measuring the quantum information entanglement, wealth distribution in economics and ecological diversity. The newly proposed SNP diversity not only complements the existing genetic diversity concepts by offering individual-level metrics, but also offers building blocks for comparative genetic analysis at higher levels. The profile concept also helps to resolve a dilemma in measuring diversity: the choice from various diversity indexes, because diversity profile unifies some of the most commonly used indexes (as special cases) with different diversity orders (along the rareness-commonness spectrum of gene mutations). Finally, the profiles can be estimated with rarefaction approach, which may help to relieve some effect of insufficient sequencing coverage.
[ { "created": "Mon, 11 Mar 2019 18:01:14 GMT", "version": "v1" } ]
2019-03-13
[ [ "Zhanshan", "", "", "Sam" ], [ "Ma", "", "" ], [ "Li", "Lianwei", "" ], [ "Zhang", "Ya-Ping", "" ] ]
Classic concepts of genetic (gene) diversity (heterozygosity), such as Nei's (1973: PNAS) and Nei and Li's (1979: PNAS) nucleotide diversity, were defined within the context of populations. Although variations are often measured in a population context, the basic carriers of variation are individuals. Hence, measuring variations such as the SNPs of an individual against a reference genome, which has so far been ignored, is certainly worthwhile in its own right. Indeed, a similar practice has been a tradition in ecology, where the basic unit of diversity measurement is the individual community sample. We propose to use Renyi-entropy-derived Hill numbers to define SNP (single nucleotide polymorphism) diversity (including alpha-, beta-, and gamma-diversities) and similarity profiles. Hill numbers are derived from Renyi entropy, of which Shannon entropy is a special case, and have found wide applications including measuring quantum information entanglement, wealth distribution in economics, and ecological diversity. The newly proposed SNP diversity not only complements the existing genetic diversity concepts by offering individual-level metrics, but also offers building blocks for comparative genetic analysis at higher levels. The profile concept also helps to resolve a dilemma in measuring diversity: the choice among various diversity indexes, because the diversity profile unifies some of the most commonly used indexes (as special cases) with different diversity orders (along the rareness-commonness spectrum of gene mutations). Finally, the profiles can be estimated with a rarefaction approach, which may help to relieve some of the effects of insufficient sequencing coverage.
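Since the record above is built on Hill numbers, a minimal sketch may clarify the quantity it proposes to use for SNP diversity profiles. The formula D_q = (sum_i p_i^q)^(1/(1-q)), with the q -> 1 limit equal to the exponential of Shannon entropy, is the standard definition of Hill numbers; the function name and the example frequencies below are illustrative only, not from the paper.

```python
import math

def hill_number(proportions, q):
    """Hill number (effective number of types) of order q for an
    abundance/frequency vector; q = 0 gives richness, q -> 1 gives
    exp(Shannon entropy), q = 2 gives the inverse Simpson index."""
    p = [x for x in proportions if x > 0]
    total = sum(p)
    p = [x / total for x in p]  # normalize defensively
    if abs(q - 1.0) < 1e-9:
        # limit q -> 1: exponential of Shannon entropy
        return math.exp(-sum(x * math.log(x) for x in p))
    return sum(x ** q for x in p) ** (1.0 / (1.0 - q))

# A diversity profile evaluates Hill numbers across a range of orders q
# (the rareness-commonness spectrum mentioned in the abstract).
freqs = [0.5, 0.3, 0.2]
profile = {q: hill_number(freqs, q) for q in (0, 1, 2)}
```

Sweeping q traces one profile curve per sample; the abstract's alpha-, beta-, and gamma-diversities would then be built from such per-sample and pooled profiles.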
1712.08265
Junzhe Zhao
Junzhe Zhao, Benjamin A. Hall
Computational Modelling of Aquaporin Co-regulation in Cancer
Data is used without full credit and explicit agreement from another author and fellow experimentalists
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A computational model of aquaporin regulation in cancer cells has been constructed as a Qualitative Network in the software BioModelAnalyzer (BMA). The model connects some important aquaporins expressed in human cancer to common phenotypes via a number of fundamental, dysregulated signalling pathways. Based on over 60 publications, this model can not only reproduce the results reported in a discrete, qualitative manner, but also reconcile the seemingly incompatible phenotype with research consensus by suggesting molecular mechanisms accountable for it. Novel predictions have also been made by mimicking real-life experiments in the model.
[ { "created": "Fri, 22 Dec 2017 00:49:26 GMT", "version": "v1" }, { "created": "Thu, 4 Jan 2018 17:04:11 GMT", "version": "v2" } ]
2018-01-08
[ [ "Zhao", "Junzhe", "" ], [ "Hall", "Benjamin A.", "" ] ]
A computational model of aquaporin regulation in cancer cells has been constructed as a Qualitative Network in the software BioModelAnalyzer (BMA). The model connects some important aquaporins expressed in human cancer to common phenotypes via a number of fundamental, dysregulated signalling pathways. Based on over 60 publications, this model can not only reproduce the results reported in a discrete, qualitative manner, but also reconcile the seemingly incompatible phenotype with research consensus by suggesting molecular mechanisms accountable for it. Novel predictions have also been made by mimicking real-life experiments in the model.
1704.06548
Jason T. L. Wang
Yasser Abduallah, Turki Turki, Kevin Byron, Zongxuan Du, Miguel Cervantes-Cervantes, Jason T. L. Wang
MapReduce Algorithms for Inferring Gene Regulatory Networks from Time-Series Microarray Data Using an Information-Theoretic Approach
19 pages, 5 figures
BioMed Research International, 2017
null
null
q-bio.MN cs.CE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections, are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understanding the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising, as reported in the literature. Here we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy.
[ { "created": "Wed, 15 Mar 2017 04:42:35 GMT", "version": "v1" } ]
2017-04-24
[ [ "Abduallah", "Yasser", "" ], [ "Turki", "Turki", "" ], [ "Byron", "Kevin", "" ], [ "Du", "Zongxuan", "" ], [ "Cervantes-Cervantes", "Miguel", "" ], [ "Wang", "Jason T. L.", "" ] ]
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections, are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understanding the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising, as reported in the literature. Here we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy.
1905.02024
Ilenna Jones
Ilenna Simone Jones and Konrad Paul Kording
Quantifying the role of neurons for behavior is a mediation question
4 pages, 2 figures
Behav Brain Sci 42 (2019) e233
10.1017/S0140525X19001444
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Many systems neuroscientists want to understand neurons in terms of mediation; we want to understand how neurons are involved in the causal chain from stimulus to behavior. Unfortunately, most tools are inappropriate for that while our language takes mediation for granted. Here we discuss the contrast between our conceptual drive towards mediation and the difficulty of obtaining meaningful evidence.
[ { "created": "Mon, 6 May 2019 13:15:30 GMT", "version": "v1" } ]
2020-09-04
[ [ "Jones", "Ilenna Simone", "" ], [ "Kording", "Konrad Paul", "" ] ]
Many systems neuroscientists want to understand neurons in terms of mediation; we want to understand how neurons are involved in the causal chain from stimulus to behavior. Unfortunately, most tools are inappropriate for that while our language takes mediation for granted. Here we discuss the contrast between our conceptual drive towards mediation and the difficulty of obtaining meaningful evidence.
2305.16634
Zachary Wu
Kadina E. Johnston, Clara Fannjiang, Bruce J. Wittmann, Brian L. Hie, Kevin K. Yang, Zachary Wu
Machine Learning for Protein Engineering
Initial book chapter submission on February 28, 2022, to be published by Springer Nature
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Directed evolution of proteins has been the most effective method for protein engineering. However, a new paradigm is emerging, fusing the library generation and screening approaches of traditional directed evolution with computation through the training of machine learning models on protein sequence fitness data. This chapter highlights successful applications of machine learning to protein engineering and directed evolution, organized by the improvements that have been made with respect to each step of the directed evolution cycle. Additionally, we provide an outlook for the future based on the current direction of the field, namely in the development of calibrated models and in incorporating other modalities, such as protein structure.
[ { "created": "Fri, 26 May 2023 05:19:17 GMT", "version": "v1" } ]
2023-05-29
[ [ "Johnston", "Kadina E.", "" ], [ "Fannjiang", "Clara", "" ], [ "Wittmann", "Bruce J.", "" ], [ "Hie", "Brian L.", "" ], [ "Yang", "Kevin K.", "" ], [ "Wu", "Zachary", "" ] ]
Directed evolution of proteins has been the most effective method for protein engineering. However, a new paradigm is emerging, fusing the library generation and screening approaches of traditional directed evolution with computation through the training of machine learning models on protein sequence fitness data. This chapter highlights successful applications of machine learning to protein engineering and directed evolution, organized by the improvements that have been made with respect to each step of the directed evolution cycle. Additionally, we provide an outlook for the future based on the current direction of the field, namely in the development of calibrated models and in incorporating other modalities, such as protein structure.
1305.0113
Duncan Gillespie
Duncan O. S. Gillespie, Meredith V. Trotter and Shripad D. Tuljapurkar
Divergence in age-patterns of mortality change drives international divergence in lifespan inequality
Updated version of Working Paper number 127 of the Morrison Institute for Population and Resource Studies, Stanford University. In press, Demography, expected July 2014
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past six decades, lifespan inequality has varied greatly within and among countries even while life expectancy has continued to increase. How and why does mortality change generate this diversity? We derive a precise link between changes in age-specific mortality and lifespan inequality, measured as the variance of age at death. Key to this relationship is a young-old threshold age, below and above which mortality decline respectively decreases and increases lifespan inequality. First, we show that shifts in the threshold's location modified the correlation between changes in life expectancy and lifespan inequality over the last two centuries. Second, we analyze the post Second World War trajectories of lifespan inequality in a set of developed countries, Japan, Canada and the United States (US), where thresholds centered on retirement age. Our method reveals how divergence in the age-pattern of mortality change drives international divergence in lifespan inequality. Most strikingly, early in the 1980s, mortality increases in young US males led lifespan inequality to remain high in the US, while in Canada the decline of inequality continued. In general, our wider international comparisons show that mortality change varied most at young working ages after the Second World War, particularly for males. We conclude that if mortality continues to stagnate at young ages, yet declines steadily at old ages, increases in lifespan inequality will become a common feature of future demographic change.
[ { "created": "Wed, 1 May 2013 07:46:10 GMT", "version": "v1" }, { "created": "Thu, 21 Nov 2013 08:20:26 GMT", "version": "v2" } ]
2013-11-22
[ [ "Gillespie", "Duncan O. S.", "" ], [ "Trotter", "Meredith V.", "" ], [ "Tuljapurkar", "Shripad D.", "" ] ]
In the past six decades, lifespan inequality has varied greatly within and among countries even while life expectancy has continued to increase. How and why does mortality change generate this diversity? We derive a precise link between changes in age-specific mortality and lifespan inequality, measured as the variance of age at death. Key to this relationship is a young-old threshold age, below and above which mortality decline respectively decreases and increases lifespan inequality. First, we show that shifts in the threshold's location modified the correlation between changes in life expectancy and lifespan inequality over the last two centuries. Second, we analyze the post Second World War trajectories of lifespan inequality in a set of developed countries, Japan, Canada and the United States (US), where thresholds centered on retirement age. Our method reveals how divergence in the age-pattern of mortality change drives international divergence in lifespan inequality. Most strikingly, early in the 1980s, mortality increases in young US males led lifespan inequality to remain high in the US, while in Canada the decline of inequality continued. In general, our wider international comparisons show that mortality change varied most at young working ages after the Second World War, particularly for males. We conclude that if mortality continues to stagnate at young ages, yet declines steadily at old ages, increases in lifespan inequality will become a common feature of future demographic change.
1406.2893
Jose Vilar
Jose M. G. Vilar and Leonor Saiz
Suppression and enhancement of transcriptional noise by DNA looping
13 pages, 6 figures, supplementary information
Phys. Rev. E 89, 062703 (2014)
10.1103/PhysRevE.89.062703
null
q-bio.SC cond-mat.stat-mech physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA looping has been observed to enhance and suppress transcriptional noise but it is uncertain which of these two opposite effects is to be expected for given conditions. Here, we derive analytical expressions for the main quantifiers of transcriptional noise in terms of the molecular parameters and elucidate the role of DNA looping. Our results rationalize paradoxical experimental observations and provide the first quantitative explanation of landmark individual-cell measurements at the single molecule level on the classical lac operon genetic system [Choi et al., Science 322, 442-446 (2008)].
[ { "created": "Wed, 11 Jun 2014 12:59:40 GMT", "version": "v1" } ]
2014-06-12
[ [ "Vilar", "Jose M. G.", "" ], [ "Saiz", "Leonor", "" ] ]
DNA looping has been observed to enhance and suppress transcriptional noise but it is uncertain which of these two opposite effects is to be expected for given conditions. Here, we derive analytical expressions for the main quantifiers of transcriptional noise in terms of the molecular parameters and elucidate the role of DNA looping. Our results rationalize paradoxical experimental observations and provide the first quantitative explanation of landmark individual-cell measurements at the single molecule level on the classical lac operon genetic system [Choi et al., Science 322, 442-446 (2008)].
1108.4950
Michele Bellingeri
Simone Vincenzi and Michele Bellingeri
Consequences of catastrophic disturbances on population persistence and adaptations
22 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The intensification and increased frequency of weather extremes is emerging as one of the most important aspects of climate change. We use Monte Carlo simulation to understand and predict the consequences of variations in trends (i.e., directional change) and stochasticity (i.e., increase in variance) of climate variables and consequent selection pressure by using simple models of population dynamics. Higher variance of climate variables increases the probability of weather extremes and consequent catastrophic disturbances. Parameters of the model are selection pressure, mutation, directional and stochastic variation of the environment. We follow the population dynamics and the distribution of a trait that describes the adaptation of the individual to the optimum phenotype defined by the environmental conditions. The survival chances of a population depend quite strongly on the selection pressure and decrease with increasing variance of the climate variable. In general, the system is able to track the directional component of the optimum phenotype. Intermediate levels of mutation generally increase the probability of tracking the changing optimum and thus decrease the risk of extinction of a population. With high mutation, the higher probability of maladaptation decreases the survival chances of the populations, even with high variability of the optimum phenotype.
[ { "created": "Wed, 24 Aug 2011 20:49:29 GMT", "version": "v1" } ]
2011-08-26
[ [ "Vincenzi", "Simone", "" ], [ "Bellingeri", "Michele", "" ] ]
The intensification and increased frequency of weather extremes is emerging as one of the most important aspects of climate change. We use Monte Carlo simulation to understand and predict the consequences of variations in trends (i.e., directional change) and stochasticity (i.e., increase in variance) of climate variables and consequent selection pressure by using simple models of population dynamics. Higher variance of climate variables increases the probability of weather extremes and consequent catastrophic disturbances. Parameters of the model are selection pressure, mutation, directional and stochastic variation of the environment. We follow the population dynamics and the distribution of a trait that describes the adaptation of the individual to the optimum phenotype defined by the environmental conditions. The survival chances of a population depend quite strongly on the selection pressure and decrease with increasing variance of the climate variable. In general, the system is able to track the directional component of the optimum phenotype. Intermediate levels of mutation generally increase the probability of tracking the changing optimum and thus decrease the risk of extinction of a population. With high mutation, the higher probability of maladaptation decreases the survival chances of the populations, even with high variability of the optimum phenotype.
2304.10383
Jean-Baptiste Gramain
Jean-Baptiste Gramain
A discrete model for the growth and spread of the Scottish populations of red squirrels (Sciurus vulgaris) and grey squirrels (Sciurus carolinensis)
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, a model, discrete in space and time, is developed to describe the growth and spread of the Scottish populations of red squirrels (Sciurus vulgaris) and grey squirrels (Sciurus carolinensis). The initial state for the model is designed using a large dataset of records of sightings of individuals of both species reported by members of the public. Choices of parameters involved in the model and their values are informed by the analysis of this dataset for the period 2011-2016, and model predictions are compared to records for the years 2006-2019.
[ { "created": "Thu, 20 Apr 2023 15:21:18 GMT", "version": "v1" } ]
2023-04-21
[ [ "Gramain", "Jean-Baptiste", "" ] ]
In this article, a model, discrete in space and time, is developed to describe the growth and spread of the Scottish populations of red squirrels (Sciurus vulgaris) and grey squirrels (Sciurus carolinensis). The initial state for the model is designed using a large dataset of records of sightings of individuals of both species reported by members of the public. Choices of parameters involved in the model and their values are informed by the analysis of this dataset for the period 2011-2016, and model predictions are compared to records for the years 2006-2019.
1304.5090
Margareta Segerst{\aa}hl
Margareta Segerst{\aa}hl
Formal Model of Living Organisms
Submitted to the 12th European Conference on Artificial Life (Ecal 2013), manuscript revised before final extended submission deadline
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A modeling formalism is proposed for the description and study of living and life-like systems. It provides an abstract conceptual model framework for real life and the evolution of biological organisms. It is proposed that this model formalism provides a novel system view and immediately applicable conceptual tools for understanding complex life and evolution. The modeling principle is very generic, suggesting that it can be directly applied also to the study of engineered and artificial systems.
[ { "created": "Thu, 18 Apr 2013 11:29:14 GMT", "version": "v1" }, { "created": "Thu, 13 Jun 2013 11:41:20 GMT", "version": "v2" } ]
2013-06-14
[ [ "Segerståhl", "Margareta", "" ] ]
A modeling formalism is proposed for the description and study of living and life-like systems. It provides an abstract conceptual model framework for real life and the evolution of biological organisms. It is proposed that this model formalism provides a novel system view and immediately applicable conceptual tools for understanding complex life and evolution. The modeling principle is very generic, suggesting that it can be directly applied also to the study of engineered and artificial systems.
1207.2816
Naoki Masuda Dr.
Naoki Masuda and Hiroshi Kori
Formation of feedforward networks and frequency synchrony by spike-timing-dependent plasticity
9 figures
Journal of Computational Neuroscience, 22, 327-345 (2007)
10.1007/s10827-007-0022-1
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spike-timing-dependent plasticity (STDP) with asymmetric learning windows is commonly found in the brain and useful for a variety of spike-based computations such as input filtering and associative memory. A natural consequence of STDP is establishment of causality in the sense that a neuron learns to fire with a lag after specific presynaptic neurons have fired. The effect of STDP on synchrony is elusive because spike synchrony implies unitary spike events of different neurons rather than a causal delayed relationship between neurons. We explore how synchrony can be facilitated by STDP in oscillator networks with a pacemaker. We show that STDP with asymmetric learning windows leads to self-organization of feedforward networks starting from the pacemaker. As a result, STDP drastically facilitates frequency synchrony. Even though differences in spike times are lessened as a result of synaptic plasticity, the finite time lag remains so that perfect spike synchrony is not realized. In contrast to traditional mechanisms of large-scale synchrony based on mutual interaction of coupled neurons, the route to synchrony discovered here is enslavement of downstream neurons by upstream ones. Facilitation of such feedforward synchrony does not occur for STDP with symmetric learning windows.
[ { "created": "Thu, 12 Jul 2012 00:30:41 GMT", "version": "v1" } ]
2012-07-13
[ [ "Masuda", "Naoki", "" ], [ "Kori", "Hiroshi", "" ] ]
Spike-timing-dependent plasticity (STDP) with asymmetric learning windows is commonly found in the brain and useful for a variety of spike-based computations such as input filtering and associative memory. A natural consequence of STDP is establishment of causality in the sense that a neuron learns to fire with a lag after specific presynaptic neurons have fired. The effect of STDP on synchrony is elusive because spike synchrony implies unitary spike events of different neurons rather than a causal delayed relationship between neurons. We explore how synchrony can be facilitated by STDP in oscillator networks with a pacemaker. We show that STDP with asymmetric learning windows leads to self-organization of feedforward networks starting from the pacemaker. As a result, STDP drastically facilitates frequency synchrony. Even though differences in spike times are lessened as a result of synaptic plasticity, the finite time lag remains so that perfect spike synchrony is not realized. In contrast to traditional mechanisms of large-scale synchrony based on mutual interaction of coupled neurons, the route to synchrony discovered here is enslavement of downstream neurons by upstream ones. Facilitation of such feedforward synchrony does not occur for STDP with symmetric learning windows.
1602.00681
Yuri A. Dabaghian
Andrey Babichev and Yuri Dabaghian
Persistent memories in transient networks
8 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial awareness in mammals is based on an internalized representation of the environment, encoded by large networks of spiking neurons. While such representations can last for a long time, the underlying neuronal network is transient: neuronal cells die every day, synaptic connections appear and disappear, the networks constantly change their architecture due to various forms of synaptic and structural plasticity. How can a network with a dynamic architecture encode a stable map of space? We address this question using a physiological model of a "flickering" neuronal network and demonstrate that it can maintain a robust topological representation of space.
[ { "created": "Sun, 31 Jan 2016 08:25:30 GMT", "version": "v1" } ]
2016-02-03
[ [ "Babichev", "Andrey", "" ], [ "Dabaghian", "Yuri", "" ] ]
Spatial awareness in mammals is based on an internalized representation of the environment, encoded by large networks of spiking neurons. While such representations can last for a long time, the underlying neuronal network is transient: neuronal cells die every day, synaptic connections appear and disappear, the networks constantly change their architecture due to various forms of synaptic and structural plasticity. How can a network with a dynamic architecture encode a stable map of space? We address this question using a physiological model of a "flickering" neuronal network and demonstrate that it can maintain a robust topological representation of space.
1606.08313
Pedro Mediano
Pedro A.M. Mediano, Juan Carlos Farah and Murray Shanahan
Integrated Information and Metastability in Systems of Coupled Oscillators
5 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been shown that sets of oscillators in a modular network can exhibit a rich variety of metastable chimera states, in which synchronisation and desynchronisation coexist. Independently, under the guise of integrated information theory, researchers have attempted to quantify the extent to which a complex dynamical system presents a balance of integrated and segregated activity. In this paper we bring these two areas of research together by showing that the system of oscillators in question exhibits a critical peak of integrated information that coincides with peaks in other measures such as metastability and coalition entropy.
[ { "created": "Mon, 27 Jun 2016 15:15:35 GMT", "version": "v1" } ]
2016-06-28
[ [ "Mediano", "Pedro A. M.", "" ], [ "Farah", "Juan Carlos", "" ], [ "Shanahan", "Murray", "" ] ]
It has been shown that sets of oscillators in a modular network can exhibit a rich variety of metastable chimera states, in which synchronisation and desynchronisation coexist. Independently, under the guise of integrated information theory, researchers have attempted to quantify the extent to which a complex dynamical system presents a balance of integrated and segregated activity. In this paper we bring these two areas of research together by showing that the system of oscillators in question exhibits a critical peak of integrated information that coincides with peaks in other measures such as metastability and coalition entropy.
2111.11152
Smith Gupta
Smith Gupta
Clustering based method for finding spikes in insect neurons
null
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Spikes can be easily detected in most intracellular recordings as sharp peaks. However, in some experimental preparations, because of unipolar morphology or other characteristics of the recorded neurons, the sizes of the spikes recorded from the soma can be much smaller. The experimental settings and the quality of the recording can also affect the observed amplitudes of the spikes. Whole-cell patch-clamp recordings from the somata of projection neurons of the antennal lobe in Drosophila or mosquitoes can show spikes with amplitudes as small as 2 mV. Moreover, the observed spikes often ride on relatively large depolarizations, which makes it difficult for the standard thresholding-based approaches to distinguish them from noise or sharp EPSPs present in the signal. For spike detection in such neuronal recordings, we propose a clustering-based algorithm that separates peaks corresponding to action potentials from those corresponding to noise. Candidate peaks, including many noise peaks, are first selected according to their sharpness, and then a feature vector is extracted for each peak. The 3-dimensional feature vector contains the absolute value of the peak voltage, the height of the spike, and the magnitude of the second derivative minima attained during the spike. In most recordings, this 3D space reveals two natural clusters, separating the noise peaks from the true action potentials. Some parameters of the algorithm can be optionally altered by the user to improve detection, which comes in handy in the few recordings where the default parameters do not work well. In summary, the algorithm facilitates accurate spike detection to enable the interpretation and analysis of patch-clamp data from neuronal recordings in invertebrates. The algorithm is implemented as a freely available open-source tool.
[ { "created": "Mon, 22 Nov 2021 12:30:25 GMT", "version": "v1" } ]
2021-11-23
[ [ "Gupta", "Smith", "" ] ]
Spikes can be easily detected in most intracellular recordings as sharp peaks. However, in some experimental preparations, because of unipolar morphology or other characteristics of the recorded neurons, the sizes of the spikes recorded from the soma can be much smaller. The experimental settings and the quality of the recording can also affect the observed amplitudes of the spikes. Whole-cell patch-clamp recordings from the somata of projection neurons of the antennal lobe in Drosophila or mosquitoes can show spikes with amplitudes as small as 2 mV. Moreover, the observed spikes often ride on relatively large depolarizations, which makes it difficult for the standard thresholding-based approaches to distinguish them from noise or sharp EPSPs present in the signal. For spike detection in such neuronal recordings, we propose a clustering-based algorithm that separates peaks corresponding to action potentials from those corresponding to noise. Candidate peaks, including many noise peaks, are first selected according to their sharpness, and then a feature vector is extracted for each peak. The 3-dimensional feature vector contains the absolute value of the peak voltage, the height of the spike, and the magnitude of the second derivative minima attained during the spike. In most recordings, this 3D space reveals two natural clusters, separating the noise peaks from the true action potentials. Some parameters of the algorithm can be optionally altered by the user to improve detection, which comes in handy in the few recordings where the default parameters do not work well. In summary, the algorithm facilitates accurate spike detection to enable the interpretation and analysis of patch-clamp data from neuronal recordings in invertebrates. The algorithm is implemented as a freely available open-source tool.
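The abstract above describes a concrete pipeline (sharpness-based candidate selection, a 3D feature vector, two-cluster separation). The paper's actual tool is not reproduced here; the following is a rough sketch of that kind of pipeline on a synthetic trace, where the sharpness threshold, window size, function names, and the k-means initialization are all my own assumptions, not the authors' parameters.

```python
def candidate_peaks(v, sharpness_thresh):
    """Indices of local maxima whose discrete second derivative is sharply negative."""
    peaks = []
    for i in range(1, len(v) - 1):
        d2 = v[i - 1] - 2 * v[i] + v[i + 1]
        if v[i] > v[i - 1] and v[i] >= v[i + 1] and d2 < -sharpness_thresh:
            peaks.append(i)
    return peaks

def features(v, i, w=3):
    """3D feature vector: |peak voltage|, local height, |second-derivative minimum|."""
    lo, hi = max(0, i - w), min(len(v), i + w + 1)
    base = min(v[lo:hi])
    d2min = min(v[j - 1] - 2 * v[j] + v[j + 1]
                for j in range(max(1, lo), min(len(v) - 1, hi)))
    return [abs(v[i]), v[i] - base, abs(d2min)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_means(points, iters=20):
    """Minimal k-means with k=2, seeded at the extremes of the height feature."""
    c = [list(min(points, key=lambda p: p[1])), list(max(points, key=lambda p: p[1]))]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist2(p, c[0]) <= dist2(p, c[1]) else 1 for p in points]
        for k in (0, 1):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                c[k] = [sum(x) / len(members) for x in zip(*members)]
    return labels

# Synthetic trace: three large spikes riding among small noise bumps.
trace = [0.0] * 300
for i in (50, 150, 250):        # true spikes
    trace[i - 1], trace[i], trace[i + 1] = 1.0, 3.0, 1.0
for i in (30, 110, 200, 270):   # small noise bumps
    trace[i - 1], trace[i], trace[i + 1] = 0.2, 0.5, 0.2

idx = candidate_peaks(trace, 0.3)
feats = [features(trace, i) for i in idx]
labels = two_means(feats)
# Call the cluster with the larger mean height "spikes".
means = [[f[1] for f, l in zip(feats, labels) if l == k] for k in (0, 1)]
spike_label = 0 if sum(means[0]) / max(1, len(means[0])) > sum(means[1]) / max(1, len(means[1])) else 1
spikes = [i for i, l in zip(idx, labels) if l == spike_label]
```

On this toy trace the threshold stage deliberately admits all seven bumps, and the clustering stage then separates the three tall, sharp events from the four shallow ones, mirroring the two natural clusters the abstract describes.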
2407.13196
Romuald A. Janik
Dante R. Chialvo and Romuald A. Janik
Statistical thermodynamics of the human brain activity, the Hagedorn temperature and the Zipf law
5+4 pages
null
null
null
q-bio.NC cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well established that the brain spontaneously traverses through a very large number of states. Nevertheless, despite its relevance to understanding brain function, a formal description of this phenomenon is still lacking. To this end, we introduce a machine learning based method allowing for the determination of the probabilities of all possible states at a given coarse-graining, from which all the thermodynamics can be derived. This is a challenge not unique to the brain, since similar problems are at the heart of the statistical mechanics of complex systems. This paper uncovers a linear scaling of the entropies and energies of the brain states, a behaviour first conjectured by Hagedorn to be typical at the limiting temperature in which ordinary matter disintegrates into quark matter. Equivalently, this establishes the existence of a Zipf law scaling underlying the appearance of a wide range of brain states. Based on our estimation of the density of states for large scale functional magnetic resonance imaging (fMRI) human brain recordings, we observe that the brain operates asymptotically at the Hagedorn temperature. The presented approach is not only relevant to brain function but should be applicable for a wide variety of complex systems.
[ { "created": "Thu, 18 Jul 2024 06:17:22 GMT", "version": "v1" } ]
2024-07-19
[ [ "Chialvo", "Dante R.", "" ], [ "Janik", "Romuald A.", "" ] ]
It is well established that the brain spontaneously traverses through a very large number of states. Nevertheless, despite its relevance to understanding brain function, a formal description of this phenomenon is still lacking. To this end, we introduce a machine learning based method allowing for the determination of the probabilities of all possible states at a given coarse-graining, from which all the thermodynamics can be derived. This is a challenge not unique to the brain, since similar problems are at the heart of the statistical mechanics of complex systems. This paper uncovers a linear scaling of the entropies and energies of the brain states, a behaviour first conjectured by Hagedorn to be typical at the limiting temperature in which ordinary matter disintegrates into quark matter. Equivalently, this establishes the existence of a Zipf law scaling underlying the appearance of a wide range of brain states. Based on our estimation of the density of states for large scale functional magnetic resonance imaging (fMRI) human brain recordings, we observe that the brain operates asymptotically at the Hagedorn temperature. The presented approach is not only relevant to brain function but should be applicable for a wide variety of complex systems.
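The equivalence stated above between a linear entropy-energy scaling and a Zipf law can be checked in a few lines. For Zipf-distributed state probabilities $p_r \propto 1/r$, the state "energy" $E_r = -\log p_r$ and the microcanonical entropy $S_r = \log r$ differ only by a constant, so $S$ grows linearly in $E$ with slope 1, i.e. a Hagedorn temperature of 1 in these units. This is a toy numerical check, not the authors' fMRI analysis:

```python
import numpy as np

# Zipf-distributed state probabilities p_r proportional to 1/r, ranks r = 1..M
M = 10000
r = np.arange(1, M + 1)
p = (1.0 / r) / np.sum(1.0 / r)

# Statistical-mechanics reading: energy E_r = -log p_r, and the entropy at
# rank r is S_r = log r (log of the number of states up to that rank).
E = -np.log(p)
S = np.log(r)

# Fit S = E / T_H + const; a Zipf exponent of 1 gives slope 1 (T_H = 1).
slope, intercept = np.polyfit(E, S, 1)
```

Since $E_r = \log r + \log Z$ with $Z = \sum_r 1/r$, the relation $S = E - \log Z$ is exactly linear and the fitted slope equals 1 up to floating-point error.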
2007.15205
Ines Hipolito
Ines Hipolito, Maxwell Ramstead, Axel Constant, Karl Friston
Cognition coming about: self-organisation and free-energy
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wright and Bourke's compelling article rightly points out that existing models of embryogenesis fail to explain the mechanisms and functional significance of the dynamic connections among neurons. We pursue their account of Dynamic Logic by appealing to the Markov blanket formalism that underwrites the Free Energy Principle. We submit that this allows one to model embryogenesis as self-organisation in a dynamical system that minimises free-energy. The ensuing formalism may be extended to also explain the autonomous emergence of cognition, specifically in the brain, as a dynamic self-assembling process.
[ { "created": "Thu, 30 Jul 2020 03:09:34 GMT", "version": "v1" } ]
2020-07-31
[ [ "Hipolito", "Ines", "" ], [ "Ramstead", "Maxwell", "" ], [ "Constant", "Axel", "" ], [ "Friston", "Karl", "" ] ]
Wright and Bourke's compelling article rightly points out that existing models of embryogenesis fail to explain the mechanisms and functional significance of the dynamic connections among neurons. We pursue their account of Dynamic Logic by appealing to the Markov blanket formalism that underwrites the Free Energy Principle. We submit that this allows one to model embryogenesis as self-organisation in a dynamical system that minimises free-energy. The ensuing formalism may be extended to also explain the autonomous emergence of cognition, specifically in the brain, as a dynamic self-assembling process.
2004.07750
Gaurav Goswami
Gaurav Goswami, Jayanti Prasad and Mansi Dhuria
Extracting the effective contact rate of COVID-19 pandemic
9 pages, 5 figures, 1 table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the absence of any available vaccines or drugs, prevention of the spread of the Coronavirus Disease 2019 (COVID-19) pandemic is being achieved by putting many mitigation measures in place. It is indispensable to have robust and reliable ways of evaluating the effectiveness of these measures. In this work, we assume that, at a very coarse-grained level of description, the overall effect of all the mitigation measures is that we can still describe the spread of the pandemic using the most basic Susceptible-Exposed-Infectious-Removed ($SEIR$) model but with an "effective contact rate" ($\beta$) which is time-dependent. We then use the time series data of the number of infected individuals in the population to extract the instantaneous effective contact rate which is the result of various social interventions put in place. This approach has the potential to be significantly useful for evaluating the impact of mitigation measures on the spread of COVID-19 in the near future.
[ { "created": "Thu, 16 Apr 2020 16:33:01 GMT", "version": "v1" } ]
2020-04-17
[ [ "Goswami", "Gaurav", "" ], [ "Prasad", "Jayanti", "" ], [ "Dhuria", "Mansi", "" ] ]
In the absence of any available vaccines or drugs, prevention of the spread of the Coronavirus Disease 2019 (COVID-19) pandemic is being achieved by putting many mitigation measures in place. It is indispensable to have robust and reliable ways of evaluating the effectiveness of these measures. In this work, we assume that, at a very coarse-grained level of description, the overall effect of all the mitigation measures is that we can still describe the spread of the pandemic using the most basic Susceptible-Exposed-Infectious-Removed ($SEIR$) model but with an "effective contact rate" ($\beta$) which is time-dependent. We then use the time series data of the number of infected individuals in the population to extract the instantaneous effective contact rate which is the result of various social interventions put in place. This approach has the potential to be significantly useful for evaluating the impact of mitigation measures on the spread of COVID-19 in the near future.
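The inversion behind the "effective contact rate" idea can be illustrated on a discrete SEIR model: since $dS/dt = -\beta(t)\, S I / N$, the trajectory itself determines $\beta(t) = -N\,(dS/dt)/(S I)$. A minimal sketch follows (forward-Euler discretization; the parameter values below are illustrative, not the paper's):

```python
import numpy as np

def simulate_seir(beta, sigma, gamma, N, E0, I0, steps):
    """Forward-Euler SEIR with a time-dependent contact rate beta[t]."""
    S, E, I, R = N - E0 - I0, E0, I0, 0.0
    traj = [(S, E, I, R)]
    for t in range(steps):
        new_exp = beta[t] * S * I / N          # newly exposed this step
        S, E, I, R = (S - new_exp,
                      E + new_exp - sigma * E,  # incubation at rate sigma
                      I + sigma * E - gamma * I,  # removal at rate gamma
                      R + gamma * I)
        traj.append((S, E, I, R))
    return np.array(traj)

def effective_beta(traj, N):
    """Invert dS/dt = -beta S I / N on the discrete trajectory."""
    S, I = traj[:-1, 0], traj[:-1, 2]
    dS = traj[1:, 0] - traj[:-1, 0]
    return -N * dS / (S * I)
```

Generating data with a known decaying $\beta(t)$ and inverting the susceptible compartment recovers it exactly for this discretization; with real case counts, $S$ must itself be reconstructed from the infection time series, which is where the paper's fitting machinery comes in.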
1610.03668
Binoy V V
V. V. Binoy and P. S. Prasanth
Diet variation in climbing perch populations inhabiting eight different types of ecosystems
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present study revealed that populations of climbing perch (Anabas testudineus) inhabiting river, backwater, shallow water channel, ponds with and without vegetation cover, marsh, sewage canal and aquaculture tank varied significantly in the number of food items consumed. Chironomus larvae, organic debris and filamentous algae were the common ingredients of the menu of this species across the focal ecosystems, whereas the sewage canal population was found surviving solely on insect larvae and organic debris.
[ { "created": "Wed, 12 Oct 2016 11:03:12 GMT", "version": "v1" } ]
2016-10-13
[ [ "Binoy", "V. V.", "" ], [ "Prasanth", "P. S.", "" ] ]
The present study revealed that populations of climbing perch (Anabas testudineus) inhabiting river, backwater, shallow water channel, ponds with and without vegetation cover, marsh, sewage canal and aquaculture tank varied significantly in the number of food items consumed. Chironomus larvae, organic debris and filamentous algae were the common ingredients of the menu of this species across the focal ecosystems, whereas the sewage canal population was found surviving solely on insect larvae and organic debris.
1506.04450
Genki Ichinose
Genki Ichinose, Masaya Saito, Hiroki Sayama and Hugues Bersini
Transitions between homophilic and heterophilic modes of cooperation
16 pages, 7 figures
Journal of Artificial Societies and Social Simulation 18, 3, 2015
10.18564/jasss.2932
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperation is ubiquitous in biological and social systems. Previous studies revealed that a preference toward similar appearance promotes cooperation, a phenomenon called tag-mediated cooperation or communitarian cooperation. This effect is enhanced when a spatial structure is incorporated, because space allows agents sharing an identical tag to regroup to form locally cooperative clusters. In spatially distributed settings, one can also consider migration of organisms, which has a potential to further promote evolution of cooperation by facilitating spatial clustering. However, it has not yet been considered in spatial tag-mediated cooperation models. Here we show, using computer simulations of a spatial model of evolutionary games with organismal migration, that tag-based segregation and homophilic cooperation arise for a wide range of parameters. In the meantime, our results also show another evolutionarily stable outcome, where a high level of heterophilic cooperation is maintained in spatially well-mixed patterns. We found that these two different forms of tag-mediated cooperation appear alternately as the parameter for temptation to defect is increased.
[ { "created": "Mon, 15 Jun 2015 00:03:35 GMT", "version": "v1" }, { "created": "Fri, 20 Nov 2015 11:52:03 GMT", "version": "v2" } ]
2017-04-06
[ [ "Ichinose", "Genki", "" ], [ "Saito", "Masaya", "" ], [ "Sayama", "Hiroki", "" ], [ "Bersini", "Hugues", "" ] ]
Cooperation is ubiquitous in biological and social systems. Previous studies revealed that a preference toward similar appearance promotes cooperation, a phenomenon called tag-mediated cooperation or communitarian cooperation. This effect is enhanced when a spatial structure is incorporated, because space allows agents sharing an identical tag to regroup to form locally cooperative clusters. In spatially distributed settings, one can also consider migration of organisms, which has a potential to further promote evolution of cooperation by facilitating spatial clustering. However, it has not yet been considered in spatial tag-mediated cooperation models. Here we show, using computer simulations of a spatial model of evolutionary games with organismal migration, that tag-based segregation and homophilic cooperation arise for a wide range of parameters. In the meantime, our results also show another evolutionarily stable outcome, where a high level of heterophilic cooperation is maintained in spatially well-mixed patterns. We found that these two different forms of tag-mediated cooperation appear alternately as the parameter for temptation to defect is increased.
2004.14802
Kasturi Saha
Madhur Parashar, Kasturi Saha, Sharba Bandyopadhyay
Axon Hillock Currents Allow Single-Neuron-Resolution 3-Dimensional Functional Neural Imaging Using Diamond Quantum Defect-Based Vector Magnetometry
null
null
null
null
q-bio.NC cond-mat.mes-hall physics.app-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Magnetic field sensing, with its recent advances, is emerging as a viable alternative for measuring the functional activity of single neurons in the brain by sensing action potential associated magnetic fields (APMFs). Measurement of the APMFs of large axons of worms has been possible due to their size. In the mammalian brain, axon sizes, their numbers and routes restrict the use of such functional imaging methods. With a segmented model of mammalian pyramidal neurons, we show that the APMF of intra-axonal currents in the axon hillock is two orders of magnitude larger than at other neuronal locations. Expected 2-dimensional vector magnetic field maps of naturalistic spiking activity of a volume of neurons via widefield diamond-nitrogen-vacancy-center magnetometry (DNVM) were simulated. A dictionary-based matching pursuit type algorithm applied to the data using the axon hillock's APMF signature allowed spatiotemporal reconstruction of APs in the volume of brain tissue at single-cell resolution. Enhancement of APMF signals coupled with DNVM advances thus can potentially replace current functional brain mapping techniques.
[ { "created": "Wed, 29 Apr 2020 03:49:41 GMT", "version": "v1" } ]
2020-05-01
[ [ "Parashar", "Madhur", "" ], [ "Saha", "Kasturi", "" ], [ "Bandyopadhyay", "Sharba", "" ] ]
Magnetic field sensing, with its recent advances, is emerging as a viable alternative for measuring the functional activity of single neurons in the brain by sensing action potential associated magnetic fields (APMFs). Measurement of the APMFs of large axons of worms has been possible due to their size. In the mammalian brain, axon sizes, their numbers and routes restrict the use of such functional imaging methods. With a segmented model of mammalian pyramidal neurons, we show that the APMF of intra-axonal currents in the axon hillock is two orders of magnitude larger than at other neuronal locations. Expected 2-dimensional vector magnetic field maps of naturalistic spiking activity of a volume of neurons via widefield diamond-nitrogen-vacancy-center magnetometry (DNVM) were simulated. A dictionary-based matching pursuit type algorithm applied to the data using the axon hillock's APMF signature allowed spatiotemporal reconstruction of APs in the volume of brain tissue at single-cell resolution. Enhancement of APMF signals coupled with DNVM advances thus can potentially replace current functional brain mapping techniques.
1007.1442
Haiyan Wang
Haiyan Wang
Spreading speeds and traveling waves for a model of epidermal wound healing
null
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we shall establish the spreading speed and existence of traveling waves for a non-cooperative system arising from epidermal wound healing and characterize the spreading speed as the slowest speed of a family of non-constant traveling wave solutions. Our results on the spreading speed and traveling waves can also be applied to a large class of non-cooperative reaction-diffusion systems.
[ { "created": "Thu, 8 Jul 2010 19:19:49 GMT", "version": "v1" } ]
2010-07-09
[ [ "Wang", "Haiyan", "" ] ]
In this paper, we shall establish the spreading speed and existence of traveling waves for a non-cooperative system arising from epidermal wound healing and characterize the spreading speed as the slowest speed of a family of non-constant traveling wave solutions. Our results on the spreading speed and traveling waves can also be applied to a large class of non-cooperative reaction-diffusion systems.
1005.4335
Hsiu-Hau Lin
Yen-Chih Lin, Tzay-Ming Hong, Hsiu-Hau Lin
Discreteness of populations enervates biodiversity in evolution
15 pages, 4 figures and 1 table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biodiversity widely observed in ecological systems is attributed to the dynamical balance among competing species. The time-varying populations of the interacting species are often captured rather well by a set of deterministic replicator equations in evolutionary game theory. However, intrinsic fluctuations arising from the discreteness of populations lead to stochastic deviations from the smooth evolution trajectories. The role of these fluctuations is shown to be critical in causing extinction and deteriorating the biodiversity of the ecosystem. We use the children's rock-paper-scissors game to demonstrate how the intrinsic fluctuations arise from the discrete populations and why the biodiversity of the ecosystem decays exponentially, regardless of the detailed parameters of the competing mechanism and the initial distributions. The dissipative trend in biodiversity can be analogized to the gradual erosion of the kinetic energy of a moving particle due to air drag or fluid viscosity. The fluctuation-dissipation theorem in statistical physics seals the fate of these originally conserved quantities. This concept in physics can be generalized to scrutinize the errors that might be incurred in ecological, biological, and quantitative economic modeling for which the ingredients are all discrete in number.
[ { "created": "Mon, 24 May 2010 14:28:12 GMT", "version": "v1" } ]
2010-05-25
[ [ "Lin", "Yen-Chih", "" ], [ "Hong", "Tzay-Ming", "" ], [ "Lin", "Hsiu-Hau", "" ] ]
Biodiversity widely observed in ecological systems is attributed to the dynamical balance among competing species. The time-varying populations of the interacting species are often captured rather well by a set of deterministic replicator equations in evolutionary game theory. However, intrinsic fluctuations arising from the discreteness of populations lead to stochastic deviations from the smooth evolution trajectories. The role of these fluctuations is shown to be critical in causing extinction and deteriorating the biodiversity of the ecosystem. We use the children's rock-paper-scissors game to demonstrate how the intrinsic fluctuations arise from the discrete populations and why the biodiversity of the ecosystem decays exponentially, regardless of the detailed parameters of the competing mechanism and the initial distributions. The dissipative trend in biodiversity can be analogized to the gradual erosion of the kinetic energy of a moving particle due to air drag or fluid viscosity. The fluctuation-dissipation theorem in statistical physics seals the fate of these originally conserved quantities. This concept in physics can be generalized to scrutinize the errors that might be incurred in ecological, biological, and quantitative economic modeling for which the ingredients are all discrete in number.
1901.08030
Massimiliano Bonomi
Thomas L\"ohr, Carlo Camilloni, Massimiliano Bonomi, Michele Vendruscolo
A practical guide to the simultaneous determination of protein structure and dynamics using metainference
49 pages, 9 figures
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate protein structural ensembles can be determined with metainference, a Bayesian inference method that integrates experimental information with prior knowledge of the system and deals with all sources of uncertainty and errors as well as with system heterogeneity. Furthermore, metainference can be implemented using the metadynamics approach, which enables the computational study of complex biological systems requiring extensive conformational sampling. In this chapter, we provide a step-by-step guide to perform and analyse metadynamic metainference simulations using the ISDB module of the open-source PLUMED library, as well as a series of practical tips to avoid common mistakes. Specifically, we will guide the reader in the process of learning how to model the structural ensemble of a small disordered peptide by combining state-of-the-art molecular mechanics force fields with nuclear magnetic resonance data, including chemical shifts, scalar couplings and residual dipolar couplings.
[ { "created": "Wed, 23 Jan 2019 18:04:31 GMT", "version": "v1" } ]
2019-01-24
[ [ "Löhr", "Thomas", "" ], [ "Camilloni", "Carlo", "" ], [ "Bonomi", "Massimiliano", "" ], [ "Vendruscolo", "Michele", "" ] ]
Accurate protein structural ensembles can be determined with metainference, a Bayesian inference method that integrates experimental information with prior knowledge of the system and deals with all sources of uncertainty and errors as well as with system heterogeneity. Furthermore, metainference can be implemented using the metadynamics approach, which enables the computational study of complex biological systems requiring extensive conformational sampling. In this chapter, we provide a step-by-step guide to perform and analyse metadynamic metainference simulations using the ISDB module of the open-source PLUMED library, as well as a series of practical tips to avoid common mistakes. Specifically, we will guide the reader in the process of learning how to model the structural ensemble of a small disordered peptide by combining state-of-the-art molecular mechanics force fields with nuclear magnetic resonance data, including chemical shifts, scalar couplings and residual dipolar couplings.
2304.09813
Bastian Pietras
Bastian Pietras
Pulse shape and voltage-dependent synchronization in spiking neuron networks
65 pages, 11 figures
null
null
null
q-bio.NC nlin.AO physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the $\theta$-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses is contradictory and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse-coupling in networks of QIF and $\theta$-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage-coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism in neural networks, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission.
[ { "created": "Wed, 19 Apr 2023 16:58:53 GMT", "version": "v1" }, { "created": "Fri, 19 Jan 2024 19:06:35 GMT", "version": "v2" } ]
2024-01-23
[ [ "Pietras", "Bastian", "" ] ]
Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the $\theta$-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses is contradictory and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse-coupling in networks of QIF and $\theta$-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage-coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism in neural networks, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission.
q-bio/0401009
Hideo Hasegawa
Hideo Hasegawa (Tokyo Gakugei Univ.)
Graded persisting activity of heterogeneous neuron ensembles subject to white noises
16 pages, 5 figures; revised Figs. 5 and 6
null
null
null
q-bio.NC
null
The effects of distractions such as noise and parameter heterogeneity on the firing activity of ensemble neurons have been studied; each neuron is described by the extended Morris-Lecar model, which shows graded persisting firings with the aid of an included ${\rm Ca}^{2+}$-dependent cation current. Although the sustained activity of {\it single} neurons is rather robust, in the sense that the activity is realized even in the presence of the distractions, the graded frequency of sustained firings is vulnerable to them. It has been shown, however, that the graded persisting activity of {\it ensemble} neurons becomes much more robust to the distractions through the pooling (ensemble) effect. When coupling is introduced, the synchronization of firings in ensemble neurons is enhanced, which is beneficial to the firings of target neurons.
[ { "created": "Wed, 7 Jan 2004 21:19:29 GMT", "version": "v1" }, { "created": "Mon, 12 Jan 2004 01:12:36 GMT", "version": "v2" }, { "created": "Wed, 14 Jan 2004 06:06:32 GMT", "version": "v3" }, { "created": "Thu, 22 Jan 2004 00:58:05 GMT", "version": "v4" } ]
2007-05-23
[ [ "Hasegawa", "Hideo", "", "Tokyo Gakugei Univ." ] ]
The effects of distractions such as noise and parameter heterogeneity on the firing activity of ensemble neurons have been studied; each neuron is described by the extended Morris-Lecar model, which shows graded persisting firings with the aid of an included ${\rm Ca}^{2+}$-dependent cation current. Although the sustained activity of {\it single} neurons is rather robust, in the sense that the activity is realized even in the presence of the distractions, the graded frequency of sustained firings is vulnerable to them. It has been shown, however, that the graded persisting activity of {\it ensemble} neurons becomes much more robust to the distractions through the pooling (ensemble) effect. When coupling is introduced, the synchronization of firings in ensemble neurons is enhanced, which is beneficial to the firings of target neurons.
1508.04453
Rinaldo Schinazi
Rinaldo B. Schinazi
Testing randomness for cancer risk
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are numerous stochastic models of cancer risk for a given tissue. Many rely on the following two hypotheses. 1. There is a fixed probability that a given cell division will eventually lead to a cancerous cell. 2. Cell divisions turn nefarious or not independently of each other. We show that recent data on cancer risk and the number of stem cell divisions are consistent with hypotheses 1 and 2.
[ { "created": "Mon, 17 Aug 2015 19:26:49 GMT", "version": "v1" } ]
2015-08-20
[ [ "Schinazi", "Rinaldo B.", "" ] ]
There are numerous stochastic models of cancer risk for a given tissue. Many rely on the following two hypotheses. 1. There is a fixed probability that a given cell division will eventually lead to a cancerous cell. 2. Cell divisions turn nefarious or not independently of each other. We show that recent data on cancer risk and the number of stem cell divisions are consistent with hypotheses 1 and 2.
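Hypotheses 1 and 2 above together imply a simple binomial risk model: if each of $d$ divisions independently turns cancerous with a fixed probability $p$, the lifetime risk is $1 - (1-p)^d$, which is approximately $p\,d$ when $p\,d$ is small. A one-function sketch, with illustrative values rather than the paper's data:

```python
def lifetime_risk(d, p):
    """Hypotheses 1 & 2 above: each of d divisions independently leads to a
    cancerous cell with fixed probability p, so risk = 1 - (1 - p)**d."""
    return 1.0 - (1.0 - p) ** d
```

For example, with $d = 10^9$ divisions and $p = 10^{-11}$ per division, the risk is about $1 - e^{-0.01} \approx 0.995\%$, close to the linear approximation $p\,d = 1\%$; tissues with more divisions carry proportionally more risk under this model.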
1602.00282
Ananthu James
Ananthu James
Role of epistasis on the fixation probability of a non-mutator in an adapted asexual population
null
Journal of Theoretical Biology 407 (2016) 225-237
10.1016/j.jtbi.2016.07.006
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mutation rate of a well-adapted population is prone to reduction so as to carry a lower mutational load. We aim to understand the role of epistatic interactions between fitness-affecting mutations in this process. Using a multitype branching process, the fixation probability of a single non-mutator emerging in a large asexual mutator population is analytically calculated here. The mutator population undergoes deleterious mutations at a constant rate, but one much higher than that of the non-mutator. We find that antagonistic epistasis lowers the chances of mutation rate reduction, while synergistic epistasis enhances them. Below a critical value of epistasis, the fixation probability behaves non-monotonically with variation in the mutation rate of the background population. Moreover, the variation of this critical value of the epistasis parameter with the strength of the mutator is discussed in the Appendix. For synergistic epistasis, when selection is varied, the fixation probability reduces overall, with damped oscillations.
[ { "created": "Sun, 31 Jan 2016 17:04:17 GMT", "version": "v1" }, { "created": "Mon, 19 Sep 2016 07:09:31 GMT", "version": "v2" } ]
2016-09-20
[ [ "James", "Ananthu", "" ] ]
The mutation rate of a well-adapted population is prone to reduction so as to carry a lower mutational load. We aim to understand the role of epistatic interactions between fitness-affecting mutations in this process. Using a multitype branching process, the fixation probability of a single non-mutator emerging in a large asexual mutator population is analytically calculated here. The mutator population undergoes deleterious mutations at a constant rate, but one much higher than that of the non-mutator. We find that antagonistic epistasis lowers the chances of mutation rate reduction, while synergistic epistasis enhances them. Below a critical value of epistasis, the fixation probability behaves non-monotonically with variation in the mutation rate of the background population. Moreover, the variation of this critical value of the epistasis parameter with the strength of the mutator is discussed in the Appendix. For synergistic epistasis, when selection is varied, the fixation probability reduces overall, with damped oscillations.
2401.18006
Jonathan Kim
Jonathan W. Kim and Ahmed Alaa and Danilo Bernardo
EEG-GPT: Exploring Capabilities of Large Language Models for EEG Classification and Interpretation
null
null
null
null
q-bio.QM cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
In conventional machine learning (ML) approaches applied to electroencephalography (EEG), the focus is often limited, isolating specific brain activities occurring across disparate temporal scales (from transient spikes in milliseconds to seizures lasting minutes) and spatial scales (from localized high-frequency oscillations to global sleep activity). This siloed approach limits the development of EEG ML models that exhibit multi-scale electrophysiological understanding and classification capabilities. Moreover, typical ML EEG approaches utilize black-box approaches, limiting their interpretability and trustworthiness in clinical contexts. Thus, we propose EEG-GPT, a unifying approach to EEG classification that leverages advances in large language models (LLM). EEG-GPT achieves excellent performance comparable to current state-of-the-art deep learning methods in classifying normal from abnormal EEG in a few-shot learning paradigm utilizing only 2% of training data. Furthermore, it offers the distinct advantages of providing intermediate reasoning steps and coordinating specialist EEG tools across multiple scales in its operation, offering transparent and interpretable step-by-step verification, thereby promoting trustworthiness in clinical contexts.
[ { "created": "Wed, 31 Jan 2024 17:08:34 GMT", "version": "v1" }, { "created": "Sat, 3 Feb 2024 23:32:08 GMT", "version": "v2" } ]
2024-02-06
[ [ "Kim", "Jonathan W.", "" ], [ "Alaa", "Ahmed", "" ], [ "Bernardo", "Danilo", "" ] ]
In conventional machine learning (ML) approaches applied to electroencephalography (EEG), there is often a limited focus, isolating specific brain activities occurring across disparate temporal scales (from transient spikes in milliseconds to seizures lasting minutes) and spatial scales (from localized high-frequency oscillations to global sleep activity). This siloed approach limits the development of EEG ML models that exhibit multi-scale electrophysiological understanding and classification capabilities. Moreover, typical ML EEG approaches utilize black-box methods, limiting their interpretability and trustworthiness in clinical contexts. Thus, we propose EEG-GPT, a unifying approach to EEG classification that leverages advances in large language models (LLM). EEG-GPT achieves excellent performance comparable to current state-of-the-art deep learning methods in classifying normal from abnormal EEG in a few-shot learning paradigm utilizing only 2% of training data. Furthermore, it offers the distinct advantages of providing intermediate reasoning steps and coordinating specialist EEG tools across multiple scales in its operation, offering transparent and interpretable step-by-step verification, thereby promoting trustworthiness in clinical contexts.
q-bio/0310039
Myoung Won Cho
Myoung Won Cho, Seunghwan Kim
Different ocular dominance map formation by influence of orientation columns in visual cortices
4 pages, 4 figure
null
10.1103/PhysRevLett.94.068701
null
q-bio.NC
null
In animal experiments, the observed orientation preference (OP) and ocular dominance (OD) columns in the visual cortex of the brain show various pattern types. Here, we show that the different visual map formations in various species are due to the crossover behavior in anisotropic systems composed of orientational and scalar components such as easy-plane Heisenberg models. We predict the transition boundary between different pattern types with the anisotropy as a main bifurcation parameter, which is consistent with experimental observations.
[ { "created": "Fri, 31 Oct 2003 06:48:23 GMT", "version": "v1" }, { "created": "Mon, 17 Nov 2003 08:02:21 GMT", "version": "v2" }, { "created": "Mon, 12 Jul 2004 09:01:09 GMT", "version": "v3" }, { "created": "Wed, 17 Nov 2004 12:41:24 GMT", "version": "v4" } ]
2013-05-29
[ [ "Cho", "Myoung Won", "" ], [ "Kim", "Seunghwan", "" ] ]
In animal experiments, the observed orientation preference (OP) and ocular dominance (OD) columns in the visual cortex of the brain show various pattern types. Here, we show that the different visual map formations in various species are due to the crossover behavior in anisotropic systems composed of orientational and scalar components such as easy-plane Heisenberg models. We predict the transition boundary between different pattern types with the anisotropy as a main bifurcation parameter, which is consistent with experimental observations.
2310.14621
Haiping Huang
Zhanghan Lin and Haiping Huang
Spiking mode-based neural networks
30 pages, 10 figures, submitted to PRE
Phys. Rev. E 110, 024306 (2024)
10.1103/PhysRevE.110.024306
null
q-bio.NC cond-mat.dis-nn cs.AI cs.NE
http://creativecommons.org/licenses/by/4.0/
Spiking neural networks play an important role in brain-like neuromorphic computations and in studying working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is that updating all weights is quite expensive. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, prohibiting us from a transparent understanding of circuit mechanisms. Therefore, in this work, we address these challenges by proposing a spiking mode-based training protocol, where the recurrent weight matrix is explained as a Hopfield-like multiplication of three matrices: input modes, output modes and a score matrix. The first advantage is that the weight is interpreted by input and output modes and their associated scores characterizing the importance of each decomposition term. The number of modes is thus adjustable, allowing more degrees of freedom for modeling the experimental data. This significantly reduces the training cost because of significantly reduced space complexity for learning. Training spiking networks is thus carried out in the mode-score space. The second advantage is that one can project the high dimensional neural activity (filtered spike train) in the state space onto the mode space which is typically of a low dimension, e.g., a few modes are sufficient to capture the shape of the underlying neural manifolds. We successfully apply our framework in two computational tasks -- digit classification and selective sensory integration tasks. Our method accelerates the training of spiking neural networks by a Hopfield-like decomposition, and moreover this training leads to low-dimensional attractor structures of high-dimensional neural dynamics.
[ { "created": "Mon, 23 Oct 2023 06:54:17 GMT", "version": "v1" }, { "created": "Mon, 3 Jun 2024 07:27:04 GMT", "version": "v2" }, { "created": "Thu, 18 Jul 2024 06:49:07 GMT", "version": "v3" } ]
2024-08-15
[ [ "Lin", "Zhanghan", "" ], [ "Huang", "Haiping", "" ] ]
Spiking neural networks play an important role in brain-like neuromorphic computations and in studying working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is that updating all weights is quite expensive. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, prohibiting us from a transparent understanding of circuit mechanisms. Therefore, in this work, we address these challenges by proposing a spiking mode-based training protocol, where the recurrent weight matrix is explained as a Hopfield-like multiplication of three matrices: input modes, output modes and a score matrix. The first advantage is that the weight is interpreted by input and output modes and their associated scores characterizing the importance of each decomposition term. The number of modes is thus adjustable, allowing more degrees of freedom for modeling the experimental data. This significantly reduces the training cost because of significantly reduced space complexity for learning. Training spiking networks is thus carried out in the mode-score space. The second advantage is that one can project the high dimensional neural activity (filtered spike train) in the state space onto the mode space which is typically of a low dimension, e.g., a few modes are sufficient to capture the shape of the underlying neural manifolds. We successfully apply our framework in two computational tasks -- digit classification and selective sensory integration tasks. Our method accelerates the training of spiking neural networks by a Hopfield-like decomposition, and moreover this training leads to low-dimensional attractor structures of high-dimensional neural dynamics.
1512.04574
Johannes Buch
Johannes L{\o}rup Buch
Localisation of antifreeze proteins in Rhagium mordax using immunofluorescence
Masters thesis project report. Contains detailed laboratory protocols in appendix. Accepted December 2011
null
null
null
q-bio.TO q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Larvae of the blackspotted pliers support beetle, Rhagium mordax, express antifreeze proteins in their haemolymph during temperate climate winter. It is believed that they also express antifreeze proteins in their cuticle as a means of preventing inoculative freezing. Larvae of Rhagium mordax were collected during winter (March) and summer (May) of 2011. Larvae were fixated, embedded in paraffin wax, sectioned on a microtome, incubated with custom-made anti-AFP antibodies and visualised on a fluorescence microscope. The larvae of both winter and summer showed AFP activity in their cuticle, gut lumen and -epithelium. Due to the long synthesis process of AFPs, the larvae contain them all year round. The distribution of these AFPs changes during summer, possibly relocating to vesicles in the cuticle and gut lumen/epithelium.
[ { "created": "Fri, 6 Nov 2015 20:04:47 GMT", "version": "v1" } ]
2015-12-16
[ [ "Buch", "Johannes Lørup", "" ] ]
Larvae of the blackspotted pliers support beetle, Rhagium mordax, express antifreeze proteins in their haemolymph during temperate climate winter. It is believed that they also express antifreeze proteins in their cuticle as a means of preventing inoculative freezing. Larvae of Rhagium mordax were collected during winter (March) and summer (May) of 2011. Larvae were fixated, embedded in paraffin wax, sectioned on a microtome, incubated with custom-made anti-AFP antibodies and visualised on a fluorescence microscope. The larvae of both winter and summer showed AFP activity in their cuticle, gut lumen and -epithelium. Due to the long synthesis process of AFPs, the larvae contain them all year round. The distribution of these AFPs changes during summer, possibly relocating to vesicles in the cuticle and gut lumen/epithelium.
2112.03859
Amy Kinsley
Amy C. Kinsley, Robert G. Haight, Nicholas Snellgrove, Petra Muellner, Ulrich Muellner, Meg Duhr, Nicholas B. D. Phelps
AIS Explorer: Prioritization for watercraft inspections-A decision-support tool for aquatic invasive species management
24 pages, 5 figures, 1 table
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Invasions of aquatic invasive species have imposed significant economic and ecological damage to global aquatic ecosystems. Once an invasive population has established in a new habitat, eradication can be financially and logistically impossible, motivating management strategies to rely heavily upon prevention measures aimed at reducing the introduction and spread. To be productive, on-the-ground management of aquatic invasive species requires effective decision-making surrounding the allocation of limited resources. Watercraft inspections play an important role in managing aquatic invasive species by preventing the overland transport of invasive species between waterbodies and providing education to boaters. In this study, we developed and tested an interactive web-based decision-support tool, AIS Explorer: Prioritization for Watercraft Inspections, to guide AIS managers in developing efficient watercraft inspection plans. The decision-support tool is informed by a novel network model that maximized the number of inspected watercraft that move from AIS-infested to uninfested waterbodies, within and outside of counties in Minnesota, USA. It was iteratively built with stakeholder feedback, including consultations with county managers, beta-testing of the web-based application, and workshops to educate and train end-users. The co-development and implementation of data-driven decision support tools demonstrate how interdisciplinary methods can be used to connect science and management to support decision-making. The AIS Explorer: Prioritization for Watercraft Inspections application makes optimized research outputs accessible in multiple dynamic forms that maintain pace with the identification of new infestations and local needs. In addition, the decision support tool has supported improved and closer communication between AIS managers and researchers on this topic.
[ { "created": "Tue, 7 Dec 2021 17:52:01 GMT", "version": "v1" } ]
2021-12-08
[ [ "Kinsley", "Amy C.", "" ], [ "Haight", "Robert G.", "" ], [ "Snellgrove", "Nicholas", "" ], [ "Muellner", "Petra", "" ], [ "Muellner", "Ulrich", "" ], [ "Duhr", "Meg", "" ], [ "Phelps", "Nicholas B. D.", ""...
Invasions of aquatic invasive species have imposed significant economic and ecological damage to global aquatic ecosystems. Once an invasive population has established in a new habitat, eradication can be financially and logistically impossible, motivating management strategies to rely heavily upon prevention measures aimed at reducing the introduction and spread. To be productive, on-the-ground management of aquatic invasive species requires effective decision-making surrounding the allocation of limited resources. Watercraft inspections play an important role in managing aquatic invasive species by preventing the overland transport of invasive species between waterbodies and providing education to boaters. In this study, we developed and tested an interactive web-based decision-support tool, AIS Explorer: Prioritization for Watercraft Inspections, to guide AIS managers in developing efficient watercraft inspection plans. The decision-support tool is informed by a novel network model that maximized the number of inspected watercraft that move from AIS-infested to uninfested waterbodies, within and outside of counties in Minnesota, USA. It was iteratively built with stakeholder feedback, including consultations with county managers, beta-testing of the web-based application, and workshops to educate and train end-users. The co-development and implementation of data-driven decision support tools demonstrate how interdisciplinary methods can be used to connect science and management to support decision-making. The AIS Explorer: Prioritization for Watercraft Inspections application makes optimized research outputs accessible in multiple dynamic forms that maintain pace with the identification of new infestations and local needs. In addition, the decision support tool has supported improved and closer communication between AIS managers and researchers on this topic.
2008.03167
Asma Azizi
Asma Azizi, Zhuolin Qu, Bryan Lewis, James Mac Hyman
Generating a Heterosexual Bipartite Network Embedded in Social Network
null
null
10.1007/s41109-020-00348-1
null
q-bio.QM cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe how to generate a heterosexual network with a prescribed joint-degree distribution that is embedded in a prescribed large-scale social contact network. The structure of a sexual network plays an important role in how sexually transmitted infections (STIs) spread. Generating an ensemble of networks that mimics the real world is crucial to evaluating robust mitigation strategies for controlling STIs. Most of the current algorithms to generate sexual networks only use sexual activity data, such as the number of partners per month, to generate the sexual network. Real-world sexual networks also depend on biased mixing based on age, location, and social and work activities. We describe an approach to use a broad range of social activity data to generate possible heterosexual networks. We start with a large-scale simulation of thousands of people in a city as they go through their daily activities, including work, school, shopping, and activities at home. We extract a social network from these activities where the nodes are the people and the edges indicate a social interaction, such as working in the same location. This social network captures the correlations between people of different ages, living in different locations, their economic status, and other demographic factors. We use the social contact network to define a bipartite heterosexual network that is embedded within an extended social network. The resulting sexual network captures the biased mixing inherent in the social network, and models based on this pairing of networks can be used to investigate novel intervention strategies based on the social contacts of infected people. We illustrate the approach in a model for the spread of Chlamydia in the heterosexual network representing the young sexually active community in New Orleans.
[ { "created": "Wed, 5 Aug 2020 19:30:24 GMT", "version": "v1" } ]
2021-04-19
[ [ "Azizi", "Asma", "" ], [ "Qu", "Zhuolin", "" ], [ "Lewis", "Bryan", "" ], [ "Mac Hyman", "James", "" ] ]
We describe how to generate a heterosexual network with a prescribed joint-degree distribution that is embedded in a prescribed large-scale social contact network. The structure of a sexual network plays an important role in how sexually transmitted infections (STIs) spread. Generating an ensemble of networks that mimics the real world is crucial to evaluating robust mitigation strategies for controlling STIs. Most of the current algorithms to generate sexual networks only use sexual activity data, such as the number of partners per month, to generate the sexual network. Real-world sexual networks also depend on biased mixing based on age, location, and social and work activities. We describe an approach to use a broad range of social activity data to generate possible heterosexual networks. We start with a large-scale simulation of thousands of people in a city as they go through their daily activities, including work, school, shopping, and activities at home. We extract a social network from these activities where the nodes are the people and the edges indicate a social interaction, such as working in the same location. This social network captures the correlations between people of different ages, living in different locations, their economic status, and other demographic factors. We use the social contact network to define a bipartite heterosexual network that is embedded within an extended social network. The resulting sexual network captures the biased mixing inherent in the social network, and models based on this pairing of networks can be used to investigate novel intervention strategies based on the social contacts of infected people. We illustrate the approach in a model for the spread of Chlamydia in the heterosexual network representing the young sexually active community in New Orleans.
2205.12341
Thomas Gregor
Fernando W. Rossine, Gabriel Vercelli, Corina E. Tarnita, and Thomas Gregor
Structured foraging of soil predators unveils functional responses to bacterial defenses
null
null
10.1073/pnas.2210995119
null
q-bio.PE physics.bio-ph q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Predators and their foraging strategies often determine ecosystem structure and function. Yet, the role of protozoan predators in microbial soil ecosystems remains elusive despite the importance of these ecosystems to global biogeochemical cycles. In particular, amoebae -- the most abundant soil protozoan predators of bacteria -- remineralize soil nutrients and shape the bacterial community. However, their foraging strategies and their role as microbial ecosystem engineers remain unknown. Here we present a multi-scale approach, connecting microscopic single-cell analysis and macroscopic whole ecosystem dynamics, to expose a phylogenetically widespread foraging strategy, in which an amoeba population spontaneously partitions between cells with fast, polarized movement and cells with slow, unpolarized movement. Such differentiated motion gives rise to efficient colony expansion and consumption of the bacterial substrate. From these insights we construct a theoretical model that predicts how disturbances to amoeba growth rate and movement disrupt their predation efficiency. These disturbances correspond to distinct classes of bacterial defenses, which allows us to experimentally validate our predictions. All considered, our characterization of amoeba foraging identifies amoeba mobility, and not amoeba growth, as the core determinant of predation efficiency and a key target for bacterial defense systems.
[ { "created": "Tue, 24 May 2022 19:38:14 GMT", "version": "v1" } ]
2023-01-11
[ [ "Rossine", "Fernando W.", "" ], [ "Vercelli", "Gabriel", "" ], [ "Tarnita", "Corina E.", "" ], [ "Gregor", "Thomas", "" ] ]
Predators and their foraging strategies often determine ecosystem structure and function. Yet, the role of protozoan predators in microbial soil ecosystems remains elusive despite the importance of these ecosystems to global biogeochemical cycles. In particular, amoebae -- the most abundant soil protozoan predators of bacteria -- remineralize soil nutrients and shape the bacterial community. However, their foraging strategies and their role as microbial ecosystem engineers remain unknown. Here we present a multi-scale approach, connecting microscopic single-cell analysis and macroscopic whole ecosystem dynamics, to expose a phylogenetically widespread foraging strategy, in which an amoeba population spontaneously partitions between cells with fast, polarized movement and cells with slow, unpolarized movement. Such differentiated motion gives rise to efficient colony expansion and consumption of the bacterial substrate. From these insights we construct a theoretical model that predicts how disturbances to amoeba growth rate and movement disrupt their predation efficiency. These disturbances correspond to distinct classes of bacterial defenses, which allows us to experimentally validate our predictions. All considered, our characterization of amoeba foraging identifies amoeba mobility, and not amoeba growth, as the core determinant of predation efficiency and a key target for bacterial defense systems.
0712.4397
William Bialek
William Bialek and Rama Ranganathan
Rediscovering the power of pairwise interactions
null
null
null
null
q-bio.QM
null
Two recent streams of work suggest that pairwise interactions may be sufficient to capture the complexity of biological systems ranging from protein structure to networks of neurons. In one approach, possible amino acid sequences in a family of proteins are generated by Monte Carlo annealing of a "Hamiltonian" that forces pairwise correlations among amino acid substitutions to be close to the observed correlations. In the other approach, the observed correlations among pairs of neurons are used to construct a maximum entropy model for the states of the network as a whole. We show that, in certain limits, these two approaches are mathematically equivalent, and we comment on open problems suggested by this framework.
[ { "created": "Fri, 28 Dec 2007 19:53:09 GMT", "version": "v1" } ]
2007-12-31
[ [ "Bialek", "William", "" ], [ "Ranganathan", "Rama", "" ] ]
Two recent streams of work suggest that pairwise interactions may be sufficient to capture the complexity of biological systems ranging from protein structure to networks of neurons. In one approach, possible amino acid sequences in a family of proteins are generated by Monte Carlo annealing of a "Hamiltonian" that forces pairwise correlations among amino acid substitutions to be close to the observed correlations. In the other approach, the observed correlations among pairs of neurons are used to construct a maximum entropy model for the states of the network as a whole. We show that, in certain limits, these two approaches are mathematically equivalent, and we comment on open problems suggested by this framework.
q-bio/0612038
Dmitri Parkhomchuk
Dmitri V. Parkhomchuk
Information Theory of Genomes
12 pages, 7 figures, added some comments
null
null
null
q-bio.GN q-bio.PE
null
The relation of genome sizes to organism complexity is still described rather equivocally. Neither the number of genes (G-value), nor the total amount of DNA (C-value) correlates consistently with phenotype complexity. Using information theory considerations we developed a model that allows a quantitative estimate for the amount of functional information in a genomic sequence. This model easily answers the long-standing question of why GC content is increased in functional regions. The model allows a consistent estimate of genome complexities, resolving the major discrepancies of G- and C-values. For related organisms with similarly complex phenotypes, this estimate provides biological insights into the complexities of their niches. This theoretical framework suggests that biological information can rapidly evolve on demand from the environment, mainly in non-coding genomic sequence, and explains the role of duplications in the evolution of biological information. Knowing the approximate amount of functionality in a genomic sequence is useful for many applications such as phylogenetics analyses, in-silico functional elements discovery or prioritising targets for genotyping and sequencing.
[ { "created": "Tue, 19 Dec 2006 14:31:13 GMT", "version": "v1" }, { "created": "Fri, 12 Jan 2007 16:34:44 GMT", "version": "v2" } ]
2007-05-23
[ [ "Parkhomchuk", "Dmitri V.", "" ] ]
The relation of genome sizes to organism complexity is still described rather equivocally. Neither the number of genes (G-value), nor the total amount of DNA (C-value) correlates consistently with phenotype complexity. Using information theory considerations we developed a model that allows a quantitative estimate for the amount of functional information in a genomic sequence. This model easily answers the long-standing question of why GC content is increased in functional regions. The model allows a consistent estimate of genome complexities, resolving the major discrepancies of G- and C-values. For related organisms with similarly complex phenotypes, this estimate provides biological insights into the complexities of their niches. This theoretical framework suggests that biological information can rapidly evolve on demand from the environment, mainly in non-coding genomic sequence, and explains the role of duplications in the evolution of biological information. Knowing the approximate amount of functionality in a genomic sequence is useful for many applications such as phylogenetics analyses, in-silico functional elements discovery or prioritising targets for genotyping and sequencing.
2303.12526
Jiahang Li
Jiahang Li, Steffen Waldherr, Wolfram Weckwerth
COVRECON: Combining Genome-scale Metabolic Network Reconstruction and Data-driven Inverse Modeling to Reveal Changes in Metabolic Interaction Networks
none
https://academic.oup.com/bioinformatics/article/39/7/btad397/7218933 2023
10.1093/bioinformatics/btad397
null
q-bio.MN math.OC
http://creativecommons.org/licenses/by/4.0/
One central goal of systems biology is to infer biochemical regulations from large-scale OMICS data. Many aspects of cellular physiology and organism phenotypes could be understood as a result of the metabolic interaction network dynamics. Previously, we have derived a mathematical method addressing this problem using metabolomics data for the inverse calculation of a biochemical Jacobian network. However, these algorithms for this inference are limited by two issues: they rely on structural network information that needs to be assembled manually, and they are numerically unstable due to ill-conditioned regression problems, which makes them inadequate for dealing with large-scale metabolic networks. In this work, we present a novel regression-loss based inverse Jacobian algorithm and related workflow COVRECON. It consists of two parts: (a) Sim-Network and (b) inverse differential Jacobian evaluation. Sim-Network automatically generates an organism-specific enzyme and reaction dataset from the Bigg and KEGG databases, which is then used to reconstruct the Jacobian's structure for a specific metabolomics dataset. Instead of directly solving a regression problem, the new inverse differential Jacobian part is based on a more robust approach and rates the biochemical interactions according to their relevance from large-scale metabolomics data. This approach is illustrated by in silico stochastic analysis with different-sized metabolic networks from the BioModels database. The advantages of COVRECON are that 1) it automatically reconstructs a data-driven superpathway metabolic interaction model; 2) more general network structures can be considered; 3) the new inverse algorithms improve stability, decrease computation time, and extend to large-scale models.
[ { "created": "Tue, 21 Mar 2023 16:41:39 GMT", "version": "v1" } ]
2023-09-19
[ [ "Li", "Jiahang", "" ], [ "Waldherr", "Steffen", "" ], [ "Weckwerth", "Wolfram", "" ] ]
One central goal of systems biology is to infer biochemical regulations from large-scale OMICS data. Many aspects of cellular physiology and organism phenotypes could be understood as a result of the metabolic interaction network dynamics. Previously, we have derived a mathematical method addressing this problem using metabolomics data for the inverse calculation of a biochemical Jacobian network. However, these algorithms for this inference are limited by two issues: they rely on structural network information that needs to be assembled manually, and they are numerically unstable due to ill-conditioned regression problems, which makes them inadequate for dealing with large-scale metabolic networks. In this work, we present a novel regression-loss based inverse Jacobian algorithm and related workflow COVRECON. It consists of two parts: (a) Sim-Network and (b) inverse differential Jacobian evaluation. Sim-Network automatically generates an organism-specific enzyme and reaction dataset from the Bigg and KEGG databases, which is then used to reconstruct the Jacobian's structure for a specific metabolomics dataset. Instead of directly solving a regression problem, the new inverse differential Jacobian part is based on a more robust approach and rates the biochemical interactions according to their relevance from large-scale metabolomics data. This approach is illustrated by in silico stochastic analysis with different-sized metabolic networks from the BioModels database. The advantages of COVRECON are that 1) it automatically reconstructs a data-driven superpathway metabolic interaction model; 2) more general network structures can be considered; 3) the new inverse algorithms improve stability, decrease computation time, and extend to large-scale models.
1808.07429
Max Souza
Fabio A. C. C. Chalub and Max O. Souza
Fitness potentials and qualitative properties of the Wright-Fisher dynamics
null
null
10.1016/j.jtbi.2018.08.021
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a mechanistic formalism for the study of evolutionary dynamics models based on the diffusion approximation described by the Kimura Equation. In this formalism, the central component is the fitness potential, from which we obtain an expression for the amount of work necessary for a given type to reach fixation. In particular, within this interpretation, we develop a graphical analysis, similar to the one used in classical mechanics, providing the basic tool for a simple heuristic that describes both the short and long term dynamics. As a by-product, we provide a new definition of an evolutionary stable state in finite populations that includes the case of mixed populations. We finish by showing that our theory -- rigorous for two-type evolution without mutations -- is also consistent with the multi-type case, and with the inclusion of rare mutations.
[ { "created": "Wed, 22 Aug 2018 16:33:08 GMT", "version": "v1" } ]
2018-08-23
[ [ "Chalub", "Fabio A. C. C.", "" ], [ "Souza", "Max O.", "" ] ]
We present a mechanistic formalism for the study of evolutionary dynamics models based on the diffusion approximation described by the Kimura Equation. In this formalism, the central component is the fitness potential, from which we obtain an expression for the amount of work necessary for a given type to reach fixation. In particular, within this interpretation, we develop a graphical analysis, similar to the one used in classical mechanics, providing the basic tool for a simple heuristic that describes both the short and long term dynamics. As a by-product, we provide a new definition of an evolutionary stable state in finite populations that includes the case of mixed populations. We finish by showing that our theory -- rigorous for two-type evolution without mutations -- is also consistent with the multi-type case, and with the inclusion of rare mutations.
2401.14191
Daniel Schindler
Adan A. Ramirez Rojas, Cedric K. Brinkmann, Daniel Schindler
Validation of Golden Gate assemblies using highly multiplexed Nanopore amplicon sequencing
25 pages, 3 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Golden Gate cloning has revolutionized synthetic biology. Its concept of modular, highly characterized libraries of parts that can be combined into higher order assemblies allows engineering principles to be applied to biological systems. The basic parts, typically stored in level 0 plasmids, are sequence validated by the method of choice and can be combined into higher order assemblies on demand. Higher order assemblies are typically transcriptional units, and multiple transcriptional units can be assembled into multi-gene constructs. Higher order Golden Gate assembly based on defined and validated parts usually does not introduce sequence changes. Therefore, simple validation of the assemblies, e.g. by colony PCR or restriction digest pattern analysis, is sufficient. However, in many experimental setups, researchers do not use defined parts, but rather part libraries, resulting in assemblies of high combinatorial complexity where sequencing again becomes mandatory. Here we present a detailed protocol for a highly multiplexed dual barcode amplicon sequencing approach on the Nanopore sequencing platform for in-house sequence validation. The workflow, called DuBA.flow, is a start-to-finish procedure that provides all necessary steps from a single colony to the final easy-to-interpret sequencing report.
[ { "created": "Thu, 25 Jan 2024 14:01:23 GMT", "version": "v1" } ]
2024-01-26
[ [ "Rojas", "Adan A. Ramirez", "" ], [ "Brinkmann", "Cedric K.", "" ], [ "Schindler", "Daniel", "" ] ]
Golden Gate cloning has revolutionized synthetic biology. Its concept of modular, highly characterized libraries of parts that can be combined into higher order assemblies allows engineering principles to be applied to biological systems. The basic parts, typically stored in level 0 plasmids, are sequence validated by the method of choice and can be combined into higher order assemblies on demand. Higher order assemblies are typically transcriptional units, and multiple transcriptional units can be assembled into multi-gene constructs. Higher order Golden Gate assembly based on defined and validated parts usually does not introduce sequence changes. Therefore, simple validation of the assemblies, e.g. by colony PCR or restriction digest pattern analysis, is sufficient. However, in many experimental setups, researchers do not use defined parts, but rather part libraries, resulting in assemblies of high combinatorial complexity where sequencing again becomes mandatory. Here we present a detailed protocol for a highly multiplexed dual barcode amplicon sequencing approach on the Nanopore sequencing platform for in-house sequence validation. The workflow, called DuBA.flow, is a start-to-finish procedure that provides all necessary steps from a single colony to the final easy-to-interpret sequencing report.
1409.3274
Mike Steel Prof.
Robert W. Scotland and Mike Steel
Circumstances in which parsimony but not compatibility will be provably misleading
37 pages, 2 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic methods typically rely on an appropriate model of how data evolved in order to infer an accurate phylogenetic tree. For molecular data, standard statistical methods have provided an effective strategy for extracting phylogenetic information from aligned sequence data when each site (character) is subject to a common process. However, for other types of data (e.g. morphological data), characters can be too ambiguous, homoplastic or saturated to develop models that are effective at capturing the underlying process of change. To address this, we examine the properties of a classic but neglected method for inferring splits in an underlying tree, namely, maximum compatibility. By adopting a simple and extreme model in which each character either fits perfectly on some tree, or is entirely random (but it is not known which class any character belongs to) we are able to derive exact and explicit formulae regarding the performance of maximum compatibility. We show that this method is able to identify a set of non-trivial homoplasy-free characters, when the number $n$ of taxa is large, even when the number of random characters is large. By contrast, we show that a method that makes more uniform use of all the data --- maximum parsimony --- can provably estimate trees in which {\em none} of the original homoplasy-free characters support splits.
[ { "created": "Wed, 10 Sep 2014 22:59:21 GMT", "version": "v1" }, { "created": "Tue, 20 Jan 2015 00:22:55 GMT", "version": "v2" } ]
2015-01-21
[ [ "Scotland", "Robert W.", "" ], [ "Steel", "Mike", "" ] ]
Phylogenetic methods typically rely on an appropriate model of how data evolved in order to infer an accurate phylogenetic tree. For molecular data, standard statistical methods have provided an effective strategy for extracting phylogenetic information from aligned sequence data when each site (character) is subject to a common process. However, for other types of data (e.g. morphological data), characters can be too ambiguous, homoplastic or saturated to develop models that are effective at capturing the underlying process of change. To address this, we examine the properties of a classic but neglected method for inferring splits in an underlying tree, namely, maximum compatibility. By adopting a simple and extreme model in which each character either fits perfectly on some tree, or is entirely random (but it is not known which class any character belongs to) we are able to derive exact and explicit formulae regarding the performance of maximum compatibility. We show that this method is able to identify a set of non-trivial homoplasy-free characters, when the number $n$ of taxa is large, even when the number of random characters is large. By contrast, we show that a method that makes more uniform use of all the data --- maximum parsimony --- can provably estimate trees in which {\em none} of the original homoplasy-free characters support splits.
1708.00909
Joshua Glaser
Joshua I. Glaser, Ari S. Benjamin, Raeed H. Chowdhury, Matthew G. Perich, Lee E. Miller, Konrad P. Kording
Machine learning for neural decoding
null
null
null
null
q-bio.NC cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Modern machine learning tools, which are versatile and easy to use, have the potential to significantly improve decoding performance. This tutorial describes how to effectively apply these algorithms for typical decoding problems. We provide descriptions, best practices, and code for applying common machine learning methods, including neural networks and gradient boosting. We also provide detailed comparisons of the performance of various methods at the task of decoding spiking activity in motor cortex, somatosensory cortex, and hippocampus. Modern methods, particularly neural networks and ensembles, significantly outperform traditional approaches, such as Wiener and Kalman filters. Improving the performance of neural decoding algorithms allows neuroscientists to better understand the information contained in a neural population and can help advance engineering applications such as brain machine interfaces.
[ { "created": "Wed, 2 Aug 2017 19:53:22 GMT", "version": "v1" }, { "created": "Fri, 4 May 2018 16:58:31 GMT", "version": "v2" }, { "created": "Fri, 20 Sep 2019 02:46:47 GMT", "version": "v3" }, { "created": "Fri, 3 Jul 2020 15:25:31 GMT", "version": "v4" } ]
2020-07-06
[ [ "Glaser", "Joshua I.", "" ], [ "Benjamin", "Ari S.", "" ], [ "Chowdhury", "Raeed H.", "" ], [ "Perich", "Matthew G.", "" ], [ "Miller", "Lee E.", "" ], [ "Kording", "Konrad P.", "" ] ]
Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Modern machine learning tools, which are versatile and easy to use, have the potential to significantly improve decoding performance. This tutorial describes how to effectively apply these algorithms for typical decoding problems. We provide descriptions, best practices, and code for applying common machine learning methods, including neural networks and gradient boosting. We also provide detailed comparisons of the performance of various methods at the task of decoding spiking activity in motor cortex, somatosensory cortex, and hippocampus. Modern methods, particularly neural networks and ensembles, significantly outperform traditional approaches, such as Wiener and Kalman filters. Improving the performance of neural decoding algorithms allows neuroscientists to better understand the information contained in a neural population and can help advance engineering applications such as brain machine interfaces.
2211.02935
Zheng Kou
Junjie Li, Jietong Zhao, Yanqing Su, Jiahao Shen, Yaohua Liu, Xinyue Fan, Zheng Kou
Efficient Cavity Searching for Gene Network of Influenza A Virus
work in progress
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High order structures (cavities and cliques) of the gene network of influenza A virus reveal tight associations among viruses during evolution and are key signals that indicate viral cross-species infection and cause pandemics. As indicators for sensing the dynamic changes of viral genes, these higher order structures have been the focus of attention in the field of virology. However, the size of the viral gene network is usually huge, and searching for these structures in the networks introduces unacceptable delays. To mitigate this issue, in this paper, we propose a simple-yet-effective model named HyperSearch based on deep learning to search cavities in a computable complex network for influenza virus genetics. Extensive experiments conducted on a public influenza virus dataset demonstrate the effectiveness of HyperSearch over other advanced deep-learning methods without any elaborate model crafting. Moreover, HyperSearch can finish the search in minutes while 0-1 programming takes days. Since the proposed method is simple and easy to transfer to other complex networks, HyperSearch has the potential to facilitate the monitoring of dynamic changes in viral genes and help humans keep up with the pace of virus mutations.
[ { "created": "Sat, 5 Nov 2022 16:24:55 GMT", "version": "v1" } ]
2022-11-08
[ [ "Li", "Junjie", "" ], [ "Zhao", "Jietong", "" ], [ "Su", "Yanqing", "" ], [ "Shen", "Jiahao", "" ], [ "Liu", "Yaohua", "" ], [ "Fan", "Xinyue", "" ], [ "Kou", "Zheng", "" ] ]
High order structures (cavities and cliques) of the gene network of influenza A virus reveal tight associations among viruses during evolution and are key signals that indicate viral cross-species infection and cause pandemics. As indicators for sensing the dynamic changes of viral genes, these higher order structures have been the focus of attention in the field of virology. However, the size of the viral gene network is usually huge, and searching for these structures in the networks introduces unacceptable delays. To mitigate this issue, in this paper, we propose a simple-yet-effective model named HyperSearch based on deep learning to search cavities in a computable complex network for influenza virus genetics. Extensive experiments conducted on a public influenza virus dataset demonstrate the effectiveness of HyperSearch over other advanced deep-learning methods without any elaborate model crafting. Moreover, HyperSearch can finish the search in minutes while 0-1 programming takes days. Since the proposed method is simple and easy to transfer to other complex networks, HyperSearch has the potential to facilitate the monitoring of dynamic changes in viral genes and help humans keep up with the pace of virus mutations.
1707.00772
Timoth\'ee Proix Ph.D.
Timoth\'ee Proix, Viktor K. Jirsa, Fabrice Bartolomei, Maxime Guye, Wilson Truccolo
Predicting the spatiotemporal diversity of seizure propagation and termination in human focal epilepsy
10 pages + 9 pages Supporting Information (SI), 7 figures, 1 SI table, 7 SI figures
null
10.1038/s41467-018-02973-y
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have shown that seizures can spread and terminate across brain areas via a rich diversity of spatiotemporal patterns. In particular, while the location of the seizure onset area is usually invariant across seizures in the same patient, the source of traveling (2-3 Hz) spike-and-wave discharges (SWDs) during seizures can either move with the slower propagating ictal wavefront or remain stationary at the seizure onset area. In addition, although most focal seizures terminate quasi-synchronously across brain areas, some evolve into distinct ictal clusters and terminate asynchronously. To provide a unifying perspective on the observed diversity of spatiotemporal dynamics for seizure spread and termination, we introduce here the Epileptor neural field model. Two mechanisms play an essential role. First, while the slow ictal wavefront propagates as a front in excitable neural media, the faster SWD propagation results from coupled-oscillator dynamics. Second, multiple time scales interact during seizure spread, allowing for low-voltage fast-activity (>10 Hz) to hamper seizure spread and for SWD propagation to affect the way a seizure terminates. These dynamics, together with variations in short and long-range connectivity strength, play a central role in seizure spread, maintenance and termination. We demonstrate how Epileptor field models incorporating the above mechanisms predict the previously reported diversity in seizure spread patterns. Furthermore, we confirm the predictions for synchronous or asynchronous (clustered) seizure termination in human seizures recorded via stereotactic EEG. Our new insights into seizure spatiotemporal dynamics may also contribute to the development of new closed-loop neuromodulation therapies for focal epilepsy.
[ { "created": "Mon, 3 Jul 2017 22:08:12 GMT", "version": "v1" } ]
2018-05-09
[ [ "Proix", "Timothée", "" ], [ "Jirsa", "Viktor K.", "" ], [ "Bartolomei", "Fabrice", "" ], [ "Guye", "Maxime", "" ], [ "Truccolo", "Wilson", "" ] ]
Recent studies have shown that seizures can spread and terminate across brain areas via a rich diversity of spatiotemporal patterns. In particular, while the location of the seizure onset area is usually invariant across seizures in the same patient, the source of traveling (2-3 Hz) spike-and-wave discharges (SWDs) during seizures can either move with the slower propagating ictal wavefront or remain stationary at the seizure onset area. In addition, although most focal seizures terminate quasi-synchronously across brain areas, some evolve into distinct ictal clusters and terminate asynchronously. To provide a unifying perspective on the observed diversity of spatiotemporal dynamics for seizure spread and termination, we introduce here the Epileptor neural field model. Two mechanisms play an essential role. First, while the slow ictal wavefront propagates as a front in excitable neural media, the faster SWD propagation results from coupled-oscillator dynamics. Second, multiple time scales interact during seizure spread, allowing for low-voltage fast-activity (>10 Hz) to hamper seizure spread and for SWD propagation to affect the way a seizure terminates. These dynamics, together with variations in short and long-range connectivity strength, play a central role in seizure spread, maintenance and termination. We demonstrate how Epileptor field models incorporating the above mechanisms predict the previously reported diversity in seizure spread patterns. Furthermore, we confirm the predictions for synchronous or asynchronous (clustered) seizure termination in human seizures recorded via stereotactic EEG. Our new insights into seizure spatiotemporal dynamics may also contribute to the development of new closed-loop neuromodulation therapies for focal epilepsy.
1903.02147
Tamal Batabyal
Tamal Batabyal, Barry Condron, Scott T. Acton
NeuroPath2Path: Classification and elastic morphing between neuronal arbors using path-wise similarity
Submitted to Neuroinformatics
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The shape and connectivity of a neuron determine its function. Modern imaging methods have proven successful at extracting such information. However, in order to analyze this type of data, neuronal morphology needs to be encoded using a graph-theoretic method. This encoding enables the use of high throughput informatic methods to extract and infer brain function. The application of graph-theoretic methods to neuronal morphological representation comes with certain difficulties. Here we report a novel, effective method to accomplish this task. The morphology of a neuron, which consists of its overall size, global shape, local branch patterns, and cell-specific biophysical properties, can vary significantly with the cell's identity, location, as well as developmental and physiological state. Various algorithms have been developed to customize shape-based statistical and graph-related features for quantitative analysis of neuromorphology, followed by the classification of neuron cell types using the features. Unlike the classical feature extraction based methods from imaged or 3D reconstructed neurons, we propose a model based on the rooted path decomposition from the soma to the dendrites of a neuron and extract morphological features on each path. We hypothesize that measuring the distance between two neurons can be realized by minimizing the cost of continuously morphing the set of all rooted paths of one neuron to another. To validate this claim, we first establish the correspondence of paths between two neurons using a modified Munkres algorithm. Next, an elastic deformation framework that employs the square root velocity function is established to perform the continuous morphing, which, in addition, provides an effective visualization tool. We experimentally show the efficacy of NeuroPath2Path (NeuroP2P) over the state of the art.
[ { "created": "Wed, 6 Mar 2019 02:52:59 GMT", "version": "v1" } ]
2019-03-07
[ [ "Batabyal", "Tamal", "" ], [ "Condron", "Barry", "" ], [ "Acton", "Scott T.", "" ] ]
The shape and connectivity of a neuron determine its function. Modern imaging methods have proven successful at extracting such information. However, in order to analyze this type of data, neuronal morphology needs to be encoded using a graph-theoretic method. This encoding enables the use of high throughput informatic methods to extract and infer brain function. The application of graph-theoretic methods to neuronal morphological representation comes with certain difficulties. Here we report a novel, effective method to accomplish this task. The morphology of a neuron, which consists of its overall size, global shape, local branch patterns, and cell-specific biophysical properties, can vary significantly with the cell's identity, location, as well as developmental and physiological state. Various algorithms have been developed to customize shape-based statistical and graph-related features for quantitative analysis of neuromorphology, followed by the classification of neuron cell types using the features. Unlike the classical feature extraction based methods from imaged or 3D reconstructed neurons, we propose a model based on the rooted path decomposition from the soma to the dendrites of a neuron and extract morphological features on each path. We hypothesize that measuring the distance between two neurons can be realized by minimizing the cost of continuously morphing the set of all rooted paths of one neuron to another. To validate this claim, we first establish the correspondence of paths between two neurons using a modified Munkres algorithm. Next, an elastic deformation framework that employs the square root velocity function is established to perform the continuous morphing, which, in addition, provides an effective visualization tool. We experimentally show the efficacy of NeuroPath2Path (NeuroP2P) over the state of the art.
1104.1946
Wolfgang Keil
Wolfgang Keil and Fred Wolf
Coverage, Continuity and Visual Cortical Architecture
100 pages, including an Appendix, 21 + 7 figures
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primary visual cortex of many mammals contains a continuous representation of visual space, with a roughly repetitive aperiodic map of orientation preferences superimposed. It was recently found that orientation preference maps (OPMs) obey statistical laws which are apparently invariant among species widely separated in eutherian evolution. Here, we examine whether one of the most prominent models for the optimization of cortical maps, the elastic net (EN) model, can reproduce this common design. The EN model generates representations which optimally trade off stimulus space coverage and map continuity. While this model has been used in numerous studies, no analytical results about the precise layout of the predicted OPMs have been obtained so far. We present a mathematical approach to analytically calculate the cortical representations predicted by the EN model for the joint mapping of stimulus position and orientation. We find that in all previously studied regimes, predicted OPM layouts are perfectly periodic. An unbiased search through the EN parameter space identifies a novel regime of aperiodic OPMs with pinwheel densities lower than found in experiments. In an extreme limit, aperiodic OPMs quantitatively resembling experimental observations emerge. Stabilization of these layouts results from strong nonlocal interactions rather than from a coverage-continuity compromise. Our results demonstrate that optimization models for stimulus representations dominated by nonlocal suppressive interactions are in principle capable of correctly predicting the common OPM design. They question whether visual cortical feature representations can be explained by a coverage-continuity compromise.
[ { "created": "Mon, 11 Apr 2011 13:38:22 GMT", "version": "v1" }, { "created": "Sun, 4 Dec 2011 13:17:50 GMT", "version": "v2" } ]
2015-03-19
[ [ "Keil", "Wolfgang", "" ], [ "Wolf", "Fred", "" ] ]
The primary visual cortex of many mammals contains a continuous representation of visual space, with a roughly repetitive aperiodic map of orientation preferences superimposed. It was recently found that orientation preference maps (OPMs) obey statistical laws which are apparently invariant among species widely separated in eutherian evolution. Here, we examine whether one of the most prominent models for the optimization of cortical maps, the elastic net (EN) model, can reproduce this common design. The EN model generates representations which optimally trade off stimulus space coverage and map continuity. While this model has been used in numerous studies, no analytical results about the precise layout of the predicted OPMs have been obtained so far. We present a mathematical approach to analytically calculate the cortical representations predicted by the EN model for the joint mapping of stimulus position and orientation. We find that in all previously studied regimes, predicted OPM layouts are perfectly periodic. An unbiased search through the EN parameter space identifies a novel regime of aperiodic OPMs with pinwheel densities lower than found in experiments. In an extreme limit, aperiodic OPMs quantitatively resembling experimental observations emerge. Stabilization of these layouts results from strong nonlocal interactions rather than from a coverage-continuity compromise. Our results demonstrate that optimization models for stimulus representations dominated by nonlocal suppressive interactions are in principle capable of correctly predicting the common OPM design. They question whether visual cortical feature representations can be explained by a coverage-continuity compromise.
2011.06694
Mohamad Hamieh
Hamieh Mohamad, Doumit Mary, Toufaily Joumana, Hamieh Tayssir
Study of genetic mutations and dynamic spread of SARS-CoV-2 pandemic and prediction of its evolution according to the SIR model
13 pages, 3 figures
null
null
null
q-bio.PE
http://creativecommons.org/publicdomain/zero/1.0/
In this work, we aim to show that the dynamic behavior of the cumulative case count of the SARS-CoV-2 pandemic can provide information on the overall behavior of the spread over time. The cumulative data can be synthesized in an empirical form obtained from a Susceptible-Infected-Recovered (SIR) model previously studied on a Euclidean network. From the study we carried out, we can conclude that the SIR model on the Euclidean network can reproduce data from several countries, with some deviation in precision, for given parameter values. This gives an insight into the different agents that influence the behavior of SARS-CoV-2, especially during the virus mutation period. We thus try to analyze the effect of genetic mutations in different countries, and how a specific mutation can make the virus more contagious.
[ { "created": "Fri, 13 Nov 2020 00:03:49 GMT", "version": "v1" } ]
2020-11-16
[ [ "Mohamad", "Hamieh", "" ], [ "Mary", "Doumit", "" ], [ "Joumana", "Toufaily", "" ], [ "Tayssir", "Hamieh", "" ] ]
In this work, we aim to show that the dynamic behavior of the cumulative case count of the SARS-CoV-2 pandemic can provide information on the overall behavior of the spread over time. The cumulative data can be synthesized in an empirical form obtained from a Susceptible-Infected-Recovered (SIR) model previously studied on a Euclidean network. From the study we carried out, we can conclude that the SIR model on the Euclidean network can reproduce data from several countries, with some deviation in precision, for given parameter values. This gives an insight into the different agents that influence the behavior of SARS-CoV-2, especially during the virus mutation period. We thus try to analyze the effect of genetic mutations in different countries, and how a specific mutation can make the virus more contagious.
2206.06131
Ran Liu
Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer
Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers
accepted by NeurIPS 2022
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by/4.0/
Complex time-varying systems are often studied by abstracting away from the dynamics of individual components to build a model of the population-level dynamics from the start. However, when building a population-level description, it can be easy to lose sight of each individual and how they contribute to the larger picture. In this paper, we present a novel transformer architecture for learning from time-varying data that builds descriptions of both the individual as well as the collective population dynamics. Rather than combining all of our data into our model at the onset, we develop a separable architecture that operates on individual time-series first before passing them forward; this induces a permutation-invariance property and can be used to transfer across systems of different size and order. After demonstrating that our model can be applied to successfully recover complex interactions and dynamics in many-body systems, we apply our approach to populations of neurons in the nervous system. On neural activity datasets, we show that our model not only yields robust decoding performance, but also provides impressive performance in transfer across recordings of different animals without any neuron-level correspondence. By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.
[ { "created": "Fri, 10 Jun 2022 07:14:57 GMT", "version": "v1" }, { "created": "Thu, 20 Oct 2022 17:49:08 GMT", "version": "v2" } ]
2022-10-21
[ [ "Liu", "Ran", "" ], [ "Azabou", "Mehdi", "" ], [ "Dabagia", "Max", "" ], [ "Xiao", "Jingyun", "" ], [ "Dyer", "Eva L.", "" ] ]
Complex time-varying systems are often studied by abstracting away from the dynamics of individual components to build a model of the population-level dynamics from the start. However, when building a population-level description, it can be easy to lose sight of each individual and how they contribute to the larger picture. In this paper, we present a novel transformer architecture for learning from time-varying data that builds descriptions of both the individual as well as the collective population dynamics. Rather than combining all of our data into our model at the onset, we develop a separable architecture that operates on individual time-series first before passing them forward; this induces a permutation-invariance property and can be used to transfer across systems of different size and order. After demonstrating that our model can be applied to successfully recover complex interactions and dynamics in many-body systems, we apply our approach to populations of neurons in the nervous system. On neural activity datasets, we show that our model not only yields robust decoding performance, but also provides impressive performance in transfer across recordings of different animals without any neuron-level correspondence. By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.
0906.3023
Nelson Fernandes
N. M. Fernandes, B. D. L. Pinto, L. O. B. Almeida, J. F. W. Slaets, R. K\"oberle
Recording from two neurons: second order stimulus reconstruction from spike trains and population coding
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the reconstruction of visual stimuli from spike trains, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. If the reconstructed stimulus is to be represented by a Volterra series and correlations between spikes are to be taken into account, first order expansions are insufficient and we have to go to second order, at least. In this case higher order correlation functions have to be manipulated, whose size may become prohibitively large. We therefore develop a Gaussian-like representation for fourth order correlation functions, which works exceedingly well in the case of the fly. The reconstructions using this Gaussian-like representation are very similar to the reconstructions using the experimental correlation functions. The overall contribution to rotational stimulus reconstruction of the second order kernels - measured by a chi-squared averaged over the whole experiment - is only about 8% of the first order contribution. Yet if we introduce an instant-dependent chi-square to measure the contribution of second order kernels at special events, we observe an up to 100% improvement. As may be expected, for translational stimuli the reconstructions are rather poor. The Gaussian-like representation could be a valuable aid in population coding with large number of neurons.
[ { "created": "Tue, 16 Jun 2009 20:44:27 GMT", "version": "v1" }, { "created": "Mon, 28 Sep 2009 12:20:41 GMT", "version": "v2" } ]
2009-09-28
[ [ "Fernandes", "N. M.", "" ], [ "Pinto", "B. D. L.", "" ], [ "Almeida", "L. O. B.", "" ], [ "Slaets", "J. F. W.", "" ], [ "Köberle", "R.", "" ] ]
We study the reconstruction of visual stimuli from spike trains, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. If the reconstructed stimulus is to be represented by a Volterra series and correlations between spikes are to be taken into account, first order expansions are insufficient and we have to go to second order, at least. In this case higher order correlation functions have to be manipulated, whose size may become prohibitively large. We therefore develop a Gaussian-like representation for fourth order correlation functions, which works exceedingly well in the case of the fly. The reconstructions using this Gaussian-like representation are very similar to the reconstructions using the experimental correlation functions. The overall contribution to rotational stimulus reconstruction of the second order kernels - measured by a chi-squared averaged over the whole experiment - is only about 8% of the first order contribution. Yet if we introduce an instant-dependent chi-square to measure the contribution of second order kernels at special events, we observe an up to 100% improvement. As may be expected, for translational stimuli the reconstructions are rather poor. The Gaussian-like representation could be a valuable aid in population coding with large number of neurons.
2210.01956
Mateusz Wilinski
Mateusz Wilinski, Lauren Castro, Jeffrey Keithley, Carrie Manore, Josefina Campos, Ethan Romero-Severson, Daryl Domman, Andrey Y. Lokhov
Congruity of genomic and epidemiological data in modeling of local cholera outbreaks
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cholera continues to be a global health threat. Understanding how cholera spreads between locations is fundamental to the rational, evidence-based design of intervention and control efforts. Traditionally, cholera transmission models have utilized cholera case count data. More recently, whole genome sequence data has qualitatively described cholera transmission. Integrating these data streams may provide much more accurate models of cholera spread; however, no systematic analyses have so far been performed to compare traditional case-count models to the phylodynamic models from genomic data for cholera transmission. Here, we use high-fidelity case count and whole genome sequencing data from the 1991-1998 cholera epidemic in Argentina to directly compare the epidemiological model parameters estimated from these two data sources. We find that phylodynamic methods applied to cholera genomics data provide comparable estimates that are in line with established methods. Our methodology represents a critical step in building a framework for integrating case-count and genomic data sources for cholera epidemiology and other bacterial pathogens.
[ { "created": "Tue, 4 Oct 2022 23:13:53 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2023 21:21:18 GMT", "version": "v2" } ]
2023-04-03
[ [ "Wilinski", "Mateusz", "" ], [ "Castro", "Lauren", "" ], [ "Keithley", "Jeffrey", "" ], [ "Manore", "Carrie", "" ], [ "Campos", "Josefina", "" ], [ "Romero-Severson", "Ethan", "" ], [ "Domman", "Daryl", "" ...
Cholera continues to be a global health threat. Understanding how cholera spreads between locations is fundamental to the rational, evidence-based design of intervention and control efforts. Traditionally, cholera transmission models have utilized cholera case count data. More recently, whole genome sequence data has qualitatively described cholera transmission. Integrating these data streams may provide much more accurate models of cholera spread; however, no systematic analyses have so far been performed to compare traditional case-count models to the phylodynamic models from genomic data for cholera transmission. Here, we use high-fidelity case count and whole genome sequencing data from the 1991-1998 cholera epidemic in Argentina to directly compare the epidemiological model parameters estimated from these two data sources. We find that phylodynamic methods applied to cholera genomics data provide comparable estimates that are in line with established methods. Our methodology represents a critical step in building a framework for integrating case-count and genomic data sources for cholera epidemiology and other bacterial pathogens.
2403.11867
Lars Reining
Lars C. Reining, Thomas S. A. Wallis
A psychophysical evaluation of techniques for Mooney image generation
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Mooney images can contribute to our understanding of the processes involved in visual perception, because they allow a dissociation between image content and image understanding. Mooney images are generated by first smoothing and subsequently thresholding an image. In most previous studies this was performed manually, using subjective criteria for generation. This manual process could eventually be avoided by using automatic generation techniques. The field of computer image processing offers numerous techniques for image thresholding, but these are only rarely used to create Mooney images. Furthermore, there is little research on the perceptual effects of smoothing and thresholding. Therefore, in this study we investigated how the choice of thresholding technique and the amount of smoothing affect the interpretability of Mooney images for human participants. We generated Mooney images using four different thresholding techniques and, in a second experiment, parametrically varied the level of smoothing. Participants identified the concepts shown in Mooney images and rated their interpretability. Although the techniques generate physically-different Mooney images, identification performance and subjective ratings were similar across the different techniques. This indicates that finding the perfect threshold in the process of generating Mooney images is not critical for Mooney image interpretability, at least for globally-applied thresholds. The degree of smoothing applied before thresholding, on the other hand, requires more tuning depending on the noise of the original image and the desired interpretability of the resulting Mooney image. Future work in automatic Mooney image generation should pursue local thresholding techniques, where different thresholds are applied to image regions depending on the local image content.
[ { "created": "Mon, 18 Mar 2024 15:20:57 GMT", "version": "v1" } ]
2024-03-19
[ [ "Reining", "Lars C.", "" ], [ "Wallis", "Thomas S. A.", "" ] ]
Mooney images can contribute to our understanding of the processes involved in visual perception, because they allow a dissociation between image content and image understanding. Mooney images are generated by first smoothing and subsequently thresholding an image. In most previous studies this was performed manually, using subjective criteria for generation. This manual process could eventually be avoided by using automatic generation techniques. The field of computer image processing offers numerous techniques for image thresholding, but these are only rarely used to create Mooney images. Furthermore, there is little research on the perceptual effects of smoothing and thresholding. Therefore, in this study we investigated how the choice of thresholding technique and the amount of smoothing affect the interpretability of Mooney images for human participants. We generated Mooney images using four different thresholding techniques and, in a second experiment, parametrically varied the level of smoothing. Participants identified the concepts shown in Mooney images and rated their interpretability. Although the techniques generate physically-different Mooney images, identification performance and subjective ratings were similar across the different techniques. This indicates that finding the perfect threshold in the process of generating Mooney images is not critical for Mooney image interpretability, at least for globally-applied thresholds. The degree of smoothing applied before thresholding, on the other hand, requires more tuning depending on the noise of the original image and the desired interpretability of the resulting Mooney image. Future work in automatic Mooney image generation should pursue local thresholding techniques, where different thresholds are applied to image regions depending on the local image content.
2302.07582
Malin Luking
Malin Luking, David van der Spoel, Johan Elf and Gareth A. Tribello
Can molecular dynamics be used to simulate biomolecular recognition?
null
null
10.1063/5.0146899
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are many problems in biochemistry that are difficult to study experimentally. Simulation methods are appealing due to direct availability of atomic coordinates as a function of time. However, direct molecular simulations are challenged by the size of systems and the time scales needed to describe relevant motions. In theory, enhanced sampling algorithms can help to overcome some of the limitations of molecular simulations. Here, we discuss a problem in biochemistry that offers a significant challenge for enhanced sampling methods and that could, therefore, serve as a benchmark for comparing approaches that use machine learning to find suitable collective variables. In particular, we study the transitions LacI undergoes upon moving between being non-specifically and specifically bound to DNA. It is found that many degrees of freedom change during this transition and that the transition does not occur reversibly in simulations if only a subset of these degrees of freedom are biased. We also explain why this problem is so important to biologists and the transformative impact that a simulation of it would have on the understanding of DNA regulation.
[ { "created": "Wed, 15 Feb 2023 10:47:19 GMT", "version": "v1" }, { "created": "Mon, 20 Feb 2023 14:27:33 GMT", "version": "v2" }, { "created": "Mon, 17 Apr 2023 11:57:30 GMT", "version": "v3" } ]
2023-05-24
[ [ "Luking", "Malin", "" ], [ "van der Spoel", "David", "" ], [ "Elf", "Johan", "" ], [ "Tribello", "Gareth A.", "" ] ]
There are many problems in biochemistry that are difficult to study experimentally. Simulation methods are appealing due to direct availability of atomic coordinates as a function of time. However, direct molecular simulations are challenged by the size of systems and the time scales needed to describe relevant motions. In theory, enhanced sampling algorithms can help to overcome some of the limitations of molecular simulations. Here, we discuss a problem in biochemistry that offers a significant challenge for enhanced sampling methods and that could, therefore, serve as a benchmark for comparing approaches that use machine learning to find suitable collective variables. In particular, we study the transitions LacI undergoes upon moving between being non-specifically and specifically bound to DNA. It is found that many degrees of freedom change during this transition and that the transition does not occur reversibly in simulations if only a subset of these degrees of freedom are biased. We also explain why this problem is so important to biologists and the transformative impact that a simulation of it would have on the understanding of DNA regulation.
1406.5104
Venkateshan Kannan
Venkateshan Kannan, Jesper Tegn\`er
On the Theory and Algorithm for rigorous discretization in applications of Information Theory
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We identify fundamental issues with discretization when estimating information-theoretic quantities in the analysis of data. These difficulties are theoretical in nature and arise with discrete datasets, carrying significant implications for the corresponding claims and results. Here we describe the origins of the methodological problems, and provide a clear illustration of their impact with the example of biological network reconstruction. We propose an algorithm (shared information metric) that corrects for the biases; the resulting improved performance of the algorithm demonstrates the need to take due consideration of this issue in different contexts.
[ { "created": "Thu, 19 Jun 2014 16:37:54 GMT", "version": "v1" }, { "created": "Mon, 23 Jun 2014 10:47:36 GMT", "version": "v2" } ]
2014-06-24
[ [ "Kannan", "Venkateshan", "" ], [ "Tegnèr", "Jesper", "" ] ]
We identify fundamental issues with discretization when estimating information-theoretic quantities in the analysis of data. These difficulties are theoretical in nature and arise with discrete datasets, carrying significant implications for the corresponding claims and results. Here we describe the origins of the methodological problems, and provide a clear illustration of their impact with the example of biological network reconstruction. We propose an algorithm (shared information metric) that corrects for the biases; the resulting improved performance of the algorithm demonstrates the need to take due consideration of this issue in different contexts.
2209.13527
Naveed Ahmed Azam Dr.
Jianshen Zhu, Naveed Ahmed Azam, Shengjuan Cao, Ryota Ido, Kazuya Haraguchi, Liang Zhao, Hiroshi Nagamochi and Tatsuya Akutsu
Molecular Design Based on Integer Programming and Quadratic Descriptors in a Two-layered Model
arXiv admin note: substantial text overlap with arXiv:2108.10266, arXiv:2107.02381, arXiv:2109.02628
null
null
null
q-bio.BM cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
A novel framework has recently been proposed for designing the molecular structure of chemical compounds with a desired chemical property, where design of novel drugs is an important topic in bioinformatics and chemo-informatics. The framework infers a desired chemical graph by solving a mixed integer linear program (MILP) that simulates the computation process of a feature function defined by a two-layered model on chemical graphs and a prediction function constructed by a machine learning method. A set of graph theoretical descriptors in the feature function plays a key role to derive a compact formulation of such an MILP. To improve the learning performance of prediction functions in the framework while maintaining the compactness of the MILP, this paper utilizes the product of two of those descriptors as a new descriptor and then designs a method of reducing the number of descriptors. The results of our computational experiments suggest that the proposed method improved the learning performance for many chemical properties and can infer a chemical structure with up to 50 non-hydrogen atoms.
[ { "created": "Tue, 13 Sep 2022 08:27:25 GMT", "version": "v1" } ]
2022-09-28
[ [ "Zhu", "Jianshen", "" ], [ "Azam", "Naveed Ahmed", "" ], [ "Cao", "Shengjuan", "" ], [ "Ido", "Ryota", "" ], [ "Haraguchi", "Kazuya", "" ], [ "Zhao", "Liang", "" ], [ "Nagamochi", "Hiroshi", "" ], [ ...
A novel framework has recently been proposed for designing the molecular structure of chemical compounds with a desired chemical property, where design of novel drugs is an important topic in bioinformatics and chemo-informatics. The framework infers a desired chemical graph by solving a mixed integer linear program (MILP) that simulates the computation process of a feature function defined by a two-layered model on chemical graphs and a prediction function constructed by a machine learning method. A set of graph theoretical descriptors in the feature function plays a key role to derive a compact formulation of such an MILP. To improve the learning performance of prediction functions in the framework while maintaining the compactness of the MILP, this paper utilizes the product of two of those descriptors as a new descriptor and then designs a method of reducing the number of descriptors. The results of our computational experiments suggest that the proposed method improved the learning performance for many chemical properties and can infer a chemical structure with up to 50 non-hydrogen atoms.
2302.06573
Agnieszka Szcz\k{e}sna
Andrzej Polanski, Mateusz Kania, Jaros{\l}aw Gil, Wojciech {\L}abaj, Ewa Lach, Agnieszka Szcz\k{e}sna
Propagation of weakly advantageous mutations in cancer cell population
Removed Figure 6, corrections
null
null
null
q-bio.PE cs.NA math.NA
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research into somatic mutations in cancer cell DNA and their role in tumour growth and progression between successive stages is crucial for improving our understanding of cancer evolution. Mathematical and computer modelling can provide valuable insights into the scenarios of cancer growth, the roles of somatic mutations, and the types and strengths of evolutionary forces they introduce. Previous studies have developed mathematical models of cancer evolution, incorporating driver and passenger somatic mutations. Driver mutations were assumed to have a strong advantageous effect on the growth of the cancer cell population, while passenger mutations were considered fully neutral or mildly deleterious. However, according to several studies, passenger mutations may have a weakly advantageous effect on tumour growth. In this paper, we develop models of cancer evolution with somatic mutations that introduce a weakly advantageous force to the evolution of cancer cells. The models used in this study can be classified into two categories: deterministic and stochastic. Deterministic models are based on systems of differential equations that balance the average number of cells and mutations during evolution. To verify the results of our deterministic modelling, we use a stochastic model based on the Gillespie algorithm. We compare the predictions of our modelling with some observational data on cancer evolution.
[ { "created": "Mon, 13 Feb 2023 18:14:10 GMT", "version": "v1" }, { "created": "Wed, 2 Aug 2023 20:08:42 GMT", "version": "v2" }, { "created": "Fri, 4 Aug 2023 07:35:46 GMT", "version": "v3" }, { "created": "Fri, 23 Feb 2024 18:12:47 GMT", "version": "v4" } ]
2024-02-26
[ [ "Polanski", "Andrzej", "" ], [ "Kania", "Mateusz", "" ], [ "Gil", "Jarosław", "" ], [ "Łabaj", "Wojciech", "" ], [ "Lach", "Ewa", "" ], [ "Szczęsna", "Agnieszka", "" ] ]
Research into somatic mutations in cancer cell DNA and their role in tumour growth and progression between successive stages is crucial for improving our understanding of cancer evolution. Mathematical and computer modelling can provide valuable insights into the scenarios of cancer growth, the roles of somatic mutations, and the types and strengths of evolutionary forces they introduce. Previous studies have developed mathematical models of cancer evolution, incorporating driver and passenger somatic mutations. Driver mutations were assumed to have a strong advantageous effect on the growth of the cancer cell population, while passenger mutations were considered fully neutral or mildly deleterious. However, according to several studies, passenger mutations may have a weakly advantageous effect on tumour growth. In this paper, we develop models of cancer evolution with somatic mutations that introduce a weakly advantageous force to the evolution of cancer cells. The models used in this study can be classified into two categories: deterministic and stochastic. Deterministic models are based on systems of differential equations that balance the average number of cells and mutations during evolution. To verify the results of our deterministic modelling, we use a stochastic model based on the Gillespie algorithm. We compare the predictions of our modelling with some observational data on cancer evolution.
1304.5674
Alain Destexhe
Claude Bedard and Alain Destexhe
Generalized cable theory for neurons in complex and heterogeneous media
Phys Rev E, in press, 2013 (11 figures)
Physical Review E 88: 022709, 2013
10.1103/PhysRevE.88.022709
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/3.0/
Cable theory has been developed over the last decades, usually assuming that the extracellular space around membranes is a perfect resistor. However, extracellular media may display more complex electrical properties due to various phenomena, such as polarization, ionic diffusion or capacitive effects, but their impact on cable properties is not known. In this paper, we generalize cable theory for membranes embedded in arbitrarily complex extracellular media. We outline the generalized cable equations, then consider specific cases. The simplest case is a resistive medium, in which case the equations recover the traditional cable equations. We show that for more complex media, for example in the presence of ionic diffusion, the impact on cable properties such as voltage attenuation can be significant. We illustrate this numerically, always comparing the generalized cable to the traditional cable. We conclude that the nature of intracellular and extracellular media may have a strong influence on cable filtering as well as on the passive integrative properties of neurons.
[ { "created": "Sat, 20 Apr 2013 22:37:43 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2013 21:30:09 GMT", "version": "v2" }, { "created": "Fri, 26 Jul 2013 21:28:03 GMT", "version": "v3" } ]
2013-09-19
[ [ "Bedard", "Claude", "" ], [ "Destexhe", "Alain", "" ] ]
Cable theory has been developed over the last decades, usually assuming that the extracellular space around membranes is a perfect resistor. However, extracellular media may display more complex electrical properties due to various phenomena, such as polarization, ionic diffusion or capacitive effects, but their impact on cable properties is not known. In this paper, we generalize cable theory for membranes embedded in arbitrarily complex extracellular media. We outline the generalized cable equations, then consider specific cases. The simplest case is a resistive medium, in which case the equations recover the traditional cable equations. We show that for more complex media, for example in the presence of ionic diffusion, the impact on cable properties such as voltage attenuation can be significant. We illustrate this numerically, always comparing the generalized cable to the traditional cable. We conclude that the nature of intracellular and extracellular media may have a strong influence on cable filtering as well as on the passive integrative properties of neurons.
2010.05411
Attayeb Mohsen
Attayeb Mohsen, Lokesh P. Tripathi, Kenji Mizuguchi
Deep Learning Prediction of Adverse Drug Reactions Using Open TG-GATEs and FAERS Databases
null
null
10.3389/fddsv.2021.768792
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advancements in Artificial intelligence (AI) and the accumulation of health-related big data, it has become increasingly feasible and commonplace to leverage machine learning technologies to analyze clinical and omics metadata to assess the possibility of adverse drug reactions or events (ADRs) in the course of drug discovery. Here, we have described a novel approach that combined drug-induced gene expression profiles from Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation Systems) and ADR occurrence information from the FAERS (FDA [Food and Drug Administration] Adverse Events Reporting System) database to predict the likelihood of ADRs. We generated a total of 14 models using Deep Neural Networks (DNN) to predict different ADRs; in the validation tests, our models achieved a mean accuracy of 85.71%, indicating that our approach successfully and consistently predicted ADRs for a wide range of drugs. As an example, we have described the ADR model in the context of Duodenal ulcer. We believe that our models will help predict the likelihood of ADRs while testing novel pharmaceutical compounds, and will be useful for researchers in drug discovery.
[ { "created": "Mon, 12 Oct 2020 02:42:12 GMT", "version": "v1" } ]
2022-04-14
[ [ "Mohsen", "Attayeb", "" ], [ "Tripathi", "Lokesh P.", "" ], [ "Mizuguchi", "Kenji", "" ] ]
With the advancements in Artificial intelligence (AI) and the accumulation of health-related big data, it has become increasingly feasible and commonplace to leverage machine learning technologies to analyze clinical and omics metadata to assess the possibility of adverse drug reactions or events (ADRs) in the course of drug discovery. Here, we have described a novel approach that combined drug-induced gene expression profiles from Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation Systems) and ADR occurrence information from the FAERS (FDA [Food and Drug Administration] Adverse Events Reporting System) database to predict the likelihood of ADRs. We generated a total of 14 models using Deep Neural Networks (DNN) to predict different ADRs; in the validation tests, our models achieved a mean accuracy of 85.71%, indicating that our approach successfully and consistently predicted ADRs for a wide range of drugs. As an example, we have described the ADR model in the context of Duodenal ulcer. We believe that our models will help predict the likelihood of ADRs while testing novel pharmaceutical compounds, and will be useful for researchers in drug discovery.
1409.7827
Yue Wang
Ye Tian, Yi Fu, Guoqiang Yu, Bai Zhang, and Yue Wang
A Statistical Approach to Identifying Significant Transgenerational Methylation Changes
4 pages, 4 figures
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epigenetic aberrations have profound effects on phenotypic output. Genome-wide methylation alterations are inheritable, passing down the aberrations through multiple generations. We developed a statistical method, Genome-wide Identification of Significant Methylation Alteration (GISAIM), to study significant transgenerational methylation changes. GISAIM finds the significant methylation aberrations that are inherited through multiple generations. In a concrete biological study, we investigated whether exposing pregnant rats (F0) to a high fat (HF) diet throughout pregnancy or an ethinyl estradiol (EE2)-supplemented diet during gestation days 14-20 affects carcinogen-induced mammary cancer risk in daughters (F1), granddaughters (F2) and great-granddaughters (F3). Mammary tumorigenesis was higher in daughters and granddaughters of HF rat dams, and in daughters, granddaughters and great-granddaughters of EE2 rat dams. Outcross experiments showed that increased mammary cancer risk was transmitted to HF granddaughters equally through the female or male germlines, but was only transmitted to EE2 granddaughters through the female germline. The transgenerational effect on mammary cancer risk was associated with increased expression of DNA methyltransferases, and, across all three EE2 generations, hypo- or hypermethylation of the same 375 gene promoter regions in their mammary glands. Our study shows that maternal dietary estrogenic exposures during pregnancy can increase breast cancer risk in multiple generations of offspring, and the increase in risk may be inherited through non-genetic means, possibly involving DNA methylation.
[ { "created": "Sat, 27 Sep 2014 18:35:34 GMT", "version": "v1" } ]
2014-09-30
[ [ "Tian", "Ye", "" ], [ "Fu", "Yi", "" ], [ "Yu", "Guoqiang", "" ], [ "Zhang", "Bai", "" ], [ "Wang", "Yue", "" ] ]
Epigenetic aberrations have profound effects on phenotypic output. Genome-wide methylation alterations are inheritable, passing down the aberrations through multiple generations. We developed a statistical method, Genome-wide Identification of Significant Methylation Alteration (GISAIM), to study significant transgenerational methylation changes. GISAIM finds the significant methylation aberrations that are inherited through multiple generations. In a concrete biological study, we investigated whether exposing pregnant rats (F0) to a high fat (HF) diet throughout pregnancy or an ethinyl estradiol (EE2)-supplemented diet during gestation days 14-20 affects carcinogen-induced mammary cancer risk in daughters (F1), granddaughters (F2) and great-granddaughters (F3). Mammary tumorigenesis was higher in daughters and granddaughters of HF rat dams, and in daughters, granddaughters and great-granddaughters of EE2 rat dams. Outcross experiments showed that increased mammary cancer risk was transmitted to HF granddaughters equally through the female or male germlines, but was only transmitted to EE2 granddaughters through the female germline. The transgenerational effect on mammary cancer risk was associated with increased expression of DNA methyltransferases, and, across all three EE2 generations, hypo- or hypermethylation of the same 375 gene promoter regions in their mammary glands. Our study shows that maternal dietary estrogenic exposures during pregnancy can increase breast cancer risk in multiple generations of offspring, and the increase in risk may be inherited through non-genetic means, possibly involving DNA methylation.