Column          Type     Statistics
id              string   lengths 9 to 13
submitter       string   lengths 4 to 48
authors         string   lengths 4 to 9.62k
title           string   lengths 4 to 343
comments        string   lengths 2 to 480
journal-ref     string   lengths 9 to 309
doi             string   lengths 12 to 138
report-no       string   277 classes
categories      string   lengths 8 to 87
license         string   9 classes
orig_abstract   string   lengths 27 to 3.76k
versions        list     lengths 1 to 15
update_date     string   lengths 10 to 10
authors_parsed  list     lengths 1 to 147
abstract        string   lengths 24 to 3.75k
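Each record below lists its field values in the schema order above (id, submitter, authors, title, comments, journal-ref, doi, report-no, categories, license, orig_abstract, versions, update_date, authors_parsed, abstract). As a quick orientation, here is a minimal sketch of loading and inspecting such records with the Hugging Face datasets library; the repository id "user/arxiv-q-bio-metadata" is a placeholder, not the dataset's actual name.

```python
# Minimal sketch: load the dataset and inspect one record.
# NOTE: "user/arxiv-q-bio-metadata" is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/arxiv-q-bio-metadata", split="train")

row = ds[0]
print(row["id"], row["title"])
print(row["categories"])                         # space-separated arXiv categories, e.g. "q-bio.NC cs.CV"
print([v["version"] for v in row["versions"]])   # versions is a list of {"created": ..., "version": ...} dicts
for last, first, suffix in row["authors_parsed"]:  # authors_parsed is a list of [last, first, suffix] triples
    print(f"{first} {last} {suffix}".strip())
```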
2312.14518
Minghui Liao
Minghui Liao, Guojia Wan, Bo Du
Joint Learning Neuronal Skeleton and Brain Circuit Topology with Permutation Invariant Encoders for Neuron Classification
Accepted by AAAI 2024
null
null
null
q-bio.NC cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the types of neurons within a nervous system plays a significant role in the analysis of brain connectomics and the investigation of neurological diseases. However, the efficiency of utilizing anatomical, physiological, or molecular characteristics of neurons is relatively low and costly. With the advancements in electron microscopy imaging and analysis techniques for brain tissue, we are able to obtain whole-brain connectome consisting neuronal high-resolution morphology and connectivity information. However, few models are built based on such data for automated neuron classification. In this paper, we propose NeuNet, a framework that combines morphological information of neurons obtained from skeleton and topological information between neurons obtained from neural circuit. Specifically, NeuNet consists of three components, namely Skeleton Encoder, Connectome Encoder, and Readout Layer. Skeleton Encoder integrates the local information of neurons in a bottom-up manner, with a one-dimensional convolution in neural skeleton's point data; Connectome Encoder uses a graph neural network to capture the topological information of neural circuit; finally, Readout Layer fuses the above two information and outputs classification results. We reprocess and release two new datasets for neuron classification task from volume electron microscopy(VEM) images of human brain cortex and Drosophila brain. Experiments on these two datasets demonstrated the effectiveness of our model with accuracy of 0.9169 and 0.9363, respectively. Code and data are available at: https://github.com/WHUminghui/NeuNet.
[ { "created": "Fri, 22 Dec 2023 08:31:11 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2024 02:45:29 GMT", "version": "v2" } ]
2024-03-27
[ [ "Liao", "Minghui", "" ], [ "Wan", "Guojia", "" ], [ "Du", "Bo", "" ] ]
Determining the types of neurons within a nervous system plays a significant role in the analysis of brain connectomics and the investigation of neurological diseases. However, the efficiency of utilizing anatomical, physiological, or molecular characteristics of neurons is relatively low and costly. With the advancements in electron microscopy imaging and analysis techniques for brain tissue, we are able to obtain whole-brain connectome consisting neuronal high-resolution morphology and connectivity information. However, few models are built based on such data for automated neuron classification. In this paper, we propose NeuNet, a framework that combines morphological information of neurons obtained from skeleton and topological information between neurons obtained from neural circuit. Specifically, NeuNet consists of three components, namely Skeleton Encoder, Connectome Encoder, and Readout Layer. Skeleton Encoder integrates the local information of neurons in a bottom-up manner, with a one-dimensional convolution in neural skeleton's point data; Connectome Encoder uses a graph neural network to capture the topological information of neural circuit; finally, Readout Layer fuses the above two information and outputs classification results. We reprocess and release two new datasets for neuron classification task from volume electron microscopy(VEM) images of human brain cortex and Drosophila brain. Experiments on these two datasets demonstrated the effectiveness of our model with accuracy of 0.9169 and 0.9363, respectively. Code and data are available at: https://github.com/WHUminghui/NeuNet.
1508.02913
Dieter W. Heermann
Nestor Norio Oiwa, Claudette Cordeiro and Dieter W. Heermann
The Electronic Behavior of Zinc-Finger Protein Binding Sites in the Context of the DNA Extended Ladder Model
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The eukaryotic Cys2His2 zinc finger proteins bind to DNA ubiquitously at highly conserved domains, responsible for gene regulation and the spatial organization of DNA. To study and understand the zinc finger DNA-protein interaction, we use the extended ladder in the DNA model proposed by Zhu, Rasmussen, Balatsky \& Bishop (2007) \cite{Zhu-2007}. Considering one single spinless electron in each nucleotide $\pi$-orbital along a double DNA chain (dDNA), we find a typical pattern for the {\color{black} bottom of the occupied molecular orbital (BOMO)}, highest occupied molecular orbital (HOMO) and lowest unoccupied orbital (LUMO) along the binding sites. We specifically looked at two members of zinc finger protein family: specificity protein 1 (SP1) and early grown response 1 transcription factors (EGR1). When the valence band is filled, we find electrons in the purines along the nucleotide sequence, compatible with the electric charges of the binding amino acids in SP1 and EGR1 zinc finger.
[ { "created": "Wed, 12 Aug 2015 13:28:41 GMT", "version": "v1" } ]
2015-08-13
[ [ "Oiwa", "Nestor Norio", "" ], [ "Cordeiro", "Claudette", "" ], [ "Heermann", "Dieter W.", "" ] ]
The eukaryotic Cys2His2 zinc finger proteins bind to DNA ubiquitously at highly conserved domains, responsible for gene regulation and the spatial organization of DNA. To study and understand the zinc finger DNA-protein interaction, we use the extended ladder in the DNA model proposed by Zhu, Rasmussen, Balatsky \& Bishop (2007) \cite{Zhu-2007}. Considering one single spinless electron in each nucleotide $\pi$-orbital along a double DNA chain (dDNA), we find a typical pattern for the {\color{black} bottom of the occupied molecular orbital (BOMO)}, highest occupied molecular orbital (HOMO) and lowest unoccupied orbital (LUMO) along the binding sites. We specifically looked at two members of zinc finger protein family: specificity protein 1 (SP1) and early grown response 1 transcription factors (EGR1). When the valence band is filled, we find electrons in the purines along the nucleotide sequence, compatible with the electric charges of the binding amino acids in SP1 and EGR1 zinc finger.
q-bio/0611065
Cesar Flores cflores
J. C. Flores
An axiomatic theory for interaction between species in ecology: Gause's exclusion conjecture
no figures
null
null
null
q-bio.PE q-bio.OT
null
I introduce an axiomatic representation, called ecoset, to consider interactions between species in ecological systems. For interspecific competition, the exclusion conjecture (Gause) is put in a symbolic way and used as a basic operational tool to consider more complex cases like two species with two superposed resources (niche differentiation) and apparent competitors. Competition between tortoises and invaders in Galapagos islands is considered as a specific example. Our theory gives us an operative language to consider elementary process in ecology and open a route to more complex systems.
[ { "created": "Mon, 20 Nov 2006 19:10:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Flores", "J. C.", "" ] ]
I introduce an axiomatic representation, called ecoset, to consider interactions between species in ecological systems. For interspecific competition, the exclusion conjecture (Gause) is put in a symbolic way and used as a basic operational tool to consider more complex cases like two species with two superposed resources (niche differentiation) and apparent competitors. Competition between tortoises and invaders in Galapagos islands is considered as a specific example. Our theory gives us an operative language to consider elementary process in ecology and open a route to more complex systems.
2402.08075
Huixin Zhan
Huixin Zhan, Ying Nian Wu, Zijun Zhang
Efficient and Scalable Fine-Tune of Language Models for Genome Understanding
null
null
null
null
q-bio.GN cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Although DNA foundation models have advanced the understanding of genomes, they still face significant challenges in the limited scale and diversity of genomic data. This limitation starkly contrasts with the success of natural language foundation models, which thrive on substantially larger scales. Furthermore, genome understanding involves numerous downstream genome annotation tasks with inherent data heterogeneity, thereby necessitating more efficient and robust fine-tuning methods tailored for genomics. Here, we present \textsc{Lingo}: \textsc{L}anguage prefix f\textsc{In}e-tuning for \textsc{G}en\textsc{O}mes. Unlike DNA foundation models, \textsc{Lingo} strategically leverages natural language foundation models' contextual cues, recalibrating their linguistic knowledge to genomic sequences. \textsc{Lingo} further accommodates numerous, heterogeneous downstream fine-tune tasks by an adaptive rank sampling method that prunes and stochastically reintroduces pruned singular vectors within small computational budgets. Adaptive rank sampling outperformed existing fine-tuning methods on all benchmarked 14 genome understanding tasks, while requiring fewer than 2\% of trainable parameters as genomic-specific adapters. Impressively, applying these adapters on natural language foundation models matched or even exceeded the performance of DNA foundation models. \textsc{Lingo} presents a new paradigm of efficient and scalable genome understanding via genomic-specific adapters on language models.
[ { "created": "Mon, 12 Feb 2024 21:40:45 GMT", "version": "v1" } ]
2024-02-14
[ [ "Zhan", "Huixin", "" ], [ "Wu", "Ying Nian", "" ], [ "Zhang", "Zijun", "" ] ]
Although DNA foundation models have advanced the understanding of genomes, they still face significant challenges in the limited scale and diversity of genomic data. This limitation starkly contrasts with the success of natural language foundation models, which thrive on substantially larger scales. Furthermore, genome understanding involves numerous downstream genome annotation tasks with inherent data heterogeneity, thereby necessitating more efficient and robust fine-tuning methods tailored for genomics. Here, we present \textsc{Lingo}: \textsc{L}anguage prefix f\textsc{In}e-tuning for \textsc{G}en\textsc{O}mes. Unlike DNA foundation models, \textsc{Lingo} strategically leverages natural language foundation models' contextual cues, recalibrating their linguistic knowledge to genomic sequences. \textsc{Lingo} further accommodates numerous, heterogeneous downstream fine-tune tasks by an adaptive rank sampling method that prunes and stochastically reintroduces pruned singular vectors within small computational budgets. Adaptive rank sampling outperformed existing fine-tuning methods on all benchmarked 14 genome understanding tasks, while requiring fewer than 2\% of trainable parameters as genomic-specific adapters. Impressively, applying these adapters on natural language foundation models matched or even exceeded the performance of DNA foundation models. \textsc{Lingo} presents a new paradigm of efficient and scalable genome understanding via genomic-specific adapters on language models.
2006.10045
Todd R Lewis PhD
Todd R. Lewis, Paul B. C. Grant, Robert W. Henderson, Alex Figueroa, Mike D. Dunn
Ecological notes on the Annulated Treeboa (Corallus annulatus) from a Costa Rican Lowland Tropical Wet Forest
null
IRCF Reptiles & Amphibians: Conservation and Natural History 18(4) (2011) 202-207
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Annulated Treeboa (Corallus annulatus) is one of nine currently recognized species in the boid genus Corallus. Its disjunct range extends from eastern Guatemala into northern Honduras, southeastern Nicaragua, northeastern Costa Rica, and southwestern Panama to northern Colombia west of the Andes. It is the only species of Corallus found on the Caribbean versant of Costa Rica, where it occurs at elevations to at least 650m and perhaps as high as 1,000m. Corallus annulatus occurs mostly in primary and secondary lowland tropical wet and moist rainforest and it appears to be genuinely rare. Besides C. cropanii and C. blombergi (the latter closely related to C. annulatus), it is the rarest member of the genus. Aside from information on habitat and activity, little is known regarding its natural history.
[ { "created": "Wed, 17 Jun 2020 18:59:55 GMT", "version": "v1" } ]
2020-06-19
[ [ "Lewis", "Todd R.", "" ], [ "Grant", "Paul B. C.", "" ], [ "Henderson", "Robert W.", "" ], [ "Figueroa", "Alex", "" ], [ "Dunn", "Mike D.", "" ] ]
The Annulated Treeboa (Corallus annulatus) is one of nine currently recognized species in the boid genus Corallus. Its disjunct range extends from eastern Guatemala into northern Honduras, southeastern Nicaragua, northeastern Costa Rica, and southwestern Panama to northern Colombia west of the Andes. It is the only species of Corallus found on the Caribbean versant of Costa Rica, where it occurs at elevations to at least 650m and perhaps as high as 1,000m. Corallus annulatus occurs mostly in primary and secondary lowland tropical wet and moist rainforest and it appears to be genuinely rare. Besides C. cropanii and C. blombergi (the latter closely related to C. annulatus), it is the rarest member of the genus. Aside from information on habitat and activity, little is known regarding its natural history.
1312.1375
Chun Tung Chou
Chun Tung Chou
Impact of receiver reaction mechanisms on the performance of molecular communication networks
null
IEEE Transactions on Nanotechnology, Volume 14, Issue 2, March 2015
10.1109/TNANO.2015.2393866
null
q-bio.MN cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a molecular communication network, transmitters and receivers communicate by using signalling molecules. At the receivers, the signalling molecules react, via a chain of chemical reactions, to produce output molecules. The counts of output molecules over time is considered to be the output signal of the receiver. This output signal is used to detect the presence of signalling molecules at the receiver. The output signal is noisy due to the stochastic nature of diffusion and chemical reactions. The aim of this paper is to characterise the properties of the output signals for two types of receivers, which are based on two different types of reaction mechanisms. We derive analytical expressions for the mean, variance and frequency properties of these two types of receivers. These expressions allow us to study the properties of these two types of receivers. In addition, our model allows us to study the effect of the diffusibility of the receiver membrane on the performance of the receivers.
[ { "created": "Wed, 4 Dec 2013 22:48:20 GMT", "version": "v1" } ]
2020-07-23
[ [ "Chou", "Chun Tung", "" ] ]
In a molecular communication network, transmitters and receivers communicate by using signalling molecules. At the receivers, the signalling molecules react, via a chain of chemical reactions, to produce output molecules. The counts of output molecules over time is considered to be the output signal of the receiver. This output signal is used to detect the presence of signalling molecules at the receiver. The output signal is noisy due to the stochastic nature of diffusion and chemical reactions. The aim of this paper is to characterise the properties of the output signals for two types of receivers, which are based on two different types of reaction mechanisms. We derive analytical expressions for the mean, variance and frequency properties of these two types of receivers. These expressions allow us to study the properties of these two types of receivers. In addition, our model allows us to study the effect of the diffusibility of the receiver membrane on the performance of the receivers.
2309.15226
HaiKun Du
Ruotong.Lu, Xiaozhe.Huang, Sihuan.Deng, Haikun.Du
Network Pharmacology, Molecular Docking, and MR Analysis: Targets and Mechanisms of Gegen Qinlian Decoction for Helicobacter pylori
16pages
null
null
null
q-bio.MN q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Objective: The study explored therapeutic targets and mechanisms of Gegen Qinlian Decoction for Helicobacter pylori infection and related gastric cancer using network pharmacology, molecular docking, and Mendelian randomization. Methods: Medicinal components of Gegen Qinlian Decoction were extracted from TCMSP and HERB databases. Disease treatment targets were sourced from DisGeNET and PubChem. Interaction networks were constructed via the STRING database and visualized using Cytoscape 3.9.1. Enrichment analysis of intersected targets was performed using DAVID and Metascapes. Molecular docking employed Autodock Tools 1.5.6 and PyMOL 2.5.2. Mendelian randomization was based on the ukb-b-531 sample from UK Biobank. Results: 146 active components and 248 targets from Gegen Qinlian Decoction were identified. 66 targets overlapped with Helicobacter pylori infection genes. Molecular docking highlighted interactions between primary drug components like quercetin, wogonin, kaempferol, and target genes PTGS1, PTGS2, MAPK14. Mendelian randomization pinpointed genes like IGF2, PIK3CG, GJA1, and PLAU associated with Helicobacter pylori infection. Conclusion: Gegen Qinlian Decoction's active components target Helicobacter pylori infection through diverse targets and pathways, presenting potential research avenues.
[ { "created": "Tue, 26 Sep 2023 19:43:53 GMT", "version": "v1" } ]
2023-09-28
[ [ "Lu", "Ruotong.", "" ], [ "Huang", "Xiaozhe.", "" ], [ "Deng", "Sihuan.", "" ], [ "Du", "Haikun.", "" ] ]
Objective: The study explored therapeutic targets and mechanisms of Gegen Qinlian Decoction for Helicobacter pylori infection and related gastric cancer using network pharmacology, molecular docking, and Mendelian randomization. Methods: Medicinal components of Gegen Qinlian Decoction were extracted from TCMSP and HERB databases. Disease treatment targets were sourced from DisGeNET and PubChem. Interaction networks were constructed via the STRING database and visualized using Cytoscape 3.9.1. Enrichment analysis of intersected targets was performed using DAVID and Metascapes. Molecular docking employed Autodock Tools 1.5.6 and PyMOL 2.5.2. Mendelian randomization was based on the ukb-b-531 sample from UK Biobank. Results: 146 active components and 248 targets from Gegen Qinlian Decoction were identified. 66 targets overlapped with Helicobacter pylori infection genes. Molecular docking highlighted interactions between primary drug components like quercetin, wogonin, kaempferol, and target genes PTGS1, PTGS2, MAPK14. Mendelian randomization pinpointed genes like IGF2, PIK3CG, GJA1, and PLAU associated with Helicobacter pylori infection. Conclusion: Gegen Qinlian Decoction's active components target Helicobacter pylori infection through diverse targets and pathways, presenting potential research avenues.
1611.00358
Aline Amabile Viol Barbosa
A. Viol, Fernanda Palhano-Fontes, Heloisa Onias, Draulio B. de Araujo and G. M. Viswanathan
Shannon entropy of brain functional complex networks under the influence of the psychedelic Ayahuasca
27 pages, 6 figures
Scientific Reports 7: 7388, 2017
10.1038/s41598-017-06854-0
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The entropic brain hypothesis holds that the key facts concerning psychedelics are partially explained in terms of increased entropy of the brain's functional connectivity. Ayahuasca is a psychedelic beverage of Amazonian indigenous origin with legal status in Brazil in religious and scientific settings. In this context, we use tools and concepts from the theory of complex networks to analyze resting state fMRI data of the brains of human subjects under two distinct conditions: (i) under ordinary waking state and (ii) in an altered state of consciousness induced by ingestion of Ayahuasca. We report an increase in the Shannon entropy of the degree distribution of the networks subsequent to Ayahuasca ingestion. We also find increased local and decreased global network integration. Our results are broadly consistent with the entropic brain hypothesis. Finally, we discuss our findings in the context of descriptions of "mind-expansion" frequently seen in self-reports of users of psychedelic drugs.
[ { "created": "Tue, 1 Nov 2016 17:20:00 GMT", "version": "v1" } ]
2017-08-10
[ [ "Viol", "A.", "" ], [ "Palhano-Fontes", "Fernanda", "" ], [ "Onias", "Heloisa", "" ], [ "de Araujo", "Draulio B.", "" ], [ "Viswanathan", "G. M.", "" ] ]
The entropic brain hypothesis holds that the key facts concerning psychedelics are partially explained in terms of increased entropy of the brain's functional connectivity. Ayahuasca is a psychedelic beverage of Amazonian indigenous origin with legal status in Brazil in religious and scientific settings. In this context, we use tools and concepts from the theory of complex networks to analyze resting state fMRI data of the brains of human subjects under two distinct conditions: (i) under ordinary waking state and (ii) in an altered state of consciousness induced by ingestion of Ayahuasca. We report an increase in the Shannon entropy of the degree distribution of the networks subsequent to Ayahuasca ingestion. We also find increased local and decreased global network integration. Our results are broadly consistent with the entropic brain hypothesis. Finally, we discuss our findings in the context of descriptions of "mind-expansion" frequently seen in self-reports of users of psychedelic drugs.
1312.6450
Sean Anderson
Sean C. Anderson, Cole C. Monnahan, Kelli F. Johnson, Kotaro Ono, and Juan L. Valero
ss3sim: An R package for fisheries stock assessment simulation with Stock Synthesis
Accepted at PLOS ONE; corrected the example code
null
10.1371/journal.pone.0092725
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simulation testing is an important approach to evaluating fishery stock assessment methods. In the last decade, the fisheries stock assessment modeling framework Stock Synthesis (SS3) has become widely used around the world. However, there lacks a generalized and scriptable framework for SS3 simulation testing. Here, we introduce ss3sim, an R package that facilitates reproducible, flexible, and rapid end-to-end simulation testing with SS3. ss3sim requires an existing SS3 model configuration along with plain-text control files describing alternative population dynamics, fishery properties, sampling scenarios, and assessment approaches. ss3sim then generates an underlying 'truth' from a specified operating model, samples from that truth, modifies and runs an estimation model, and synthesizes the results. The simulations can be run in parallel, reducing runtime, and the source code is free to be modified under an open-source MIT license. ss3sim is designed to explore structural differences between the underlying truth and assumptions of an estimation model, or between multiple estimation model configurations. For example, ss3sim can be used to answer questions about model misspecification, retrospective patterns, and the relative importance of different types of fisheries data. We demonstrate the software with an example, discuss how ss3sim complements other simulation software, and outline specific research questions that ss3sim could address.
[ { "created": "Mon, 23 Dec 2013 01:05:57 GMT", "version": "v1" }, { "created": "Thu, 6 Feb 2014 18:38:46 GMT", "version": "v2" }, { "created": "Wed, 26 Feb 2014 19:55:13 GMT", "version": "v3" } ]
2015-06-18
[ [ "Anderson", "Sean C.", "" ], [ "Monnahan", "Cole C.", "" ], [ "Johnson", "Kelli F.", "" ], [ "Ono", "Kotaro", "" ], [ "Valero", "Juan L.", "" ] ]
Simulation testing is an important approach to evaluating fishery stock assessment methods. In the last decade, the fisheries stock assessment modeling framework Stock Synthesis (SS3) has become widely used around the world. However, there lacks a generalized and scriptable framework for SS3 simulation testing. Here, we introduce ss3sim, an R package that facilitates reproducible, flexible, and rapid end-to-end simulation testing with SS3. ss3sim requires an existing SS3 model configuration along with plain-text control files describing alternative population dynamics, fishery properties, sampling scenarios, and assessment approaches. ss3sim then generates an underlying 'truth' from a specified operating model, samples from that truth, modifies and runs an estimation model, and synthesizes the results. The simulations can be run in parallel, reducing runtime, and the source code is free to be modified under an open-source MIT license. ss3sim is designed to explore structural differences between the underlying truth and assumptions of an estimation model, or between multiple estimation model configurations. For example, ss3sim can be used to answer questions about model misspecification, retrospective patterns, and the relative importance of different types of fisheries data. We demonstrate the software with an example, discuss how ss3sim complements other simulation software, and outline specific research questions that ss3sim could address.
2004.06522
Guenter Baerwolff
G\"unter B\"arwolff
Prospects and limits of SIR-type Mathematical Models to Capture the COVID-19 Pandemic
11 pages, 16 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the description of a pandemic mathematical models could be interesting. Both for physicians and politicians as a base for decisions to treat the disease. The responsible estimation of parameters is a main issue of mathematical pandemic models. Especially a good choice of $\beta$ as the number of others that one infected person encounters per unit time (per day) influences the adequateness of the results of the model. For the actual COVID-19 pandemic some aspects of the parameter choice will be discussed. Because of the incompatibility of the data of the Johns-Hopkins-University to the data of the German Robert-Koch-Institut we use the COVID-19 data of the European Centre for Disease Prevention and Control (ECDC) as a base for the parameter estimation. Two different mathematical methods for the data analysis will be discussed in this paper and possible sources of trouble will be shown. As example of the parameter choice serve the data of the USA and the UK. The resulting parameters will be used estimated and used in W.\,O. Kermack and A.\,G. McKendrick's SIR model. Strategies for the commencing and ending of social and economic shutdown measures are discussed. The numerical solution of the ordinary differential equation system of the modified SIR model is being done with a Runge-Kutta integration method of fourth order. At the end the applicability of the SIR model could be shown essentially. Suggestions about appropriate points in time at which to commence with lockdown measures based on the acceleration rate of infections conclude the paper.
[ { "created": "Mon, 13 Apr 2020 09:57:45 GMT", "version": "v1" } ]
2020-04-15
[ [ "Bärwolff", "Günter", "" ] ]
For the description of a pandemic mathematical models could be interesting. Both for physicians and politicians as a base for decisions to treat the disease. The responsible estimation of parameters is a main issue of mathematical pandemic models. Especially a good choice of $\beta$ as the number of others that one infected person encounters per unit time (per day) influences the adequateness of the results of the model. For the actual COVID-19 pandemic some aspects of the parameter choice will be discussed. Because of the incompatibility of the data of the Johns-Hopkins-University to the data of the German Robert-Koch-Institut we use the COVID-19 data of the European Centre for Disease Prevention and Control (ECDC) as a base for the parameter estimation. Two different mathematical methods for the data analysis will be discussed in this paper and possible sources of trouble will be shown. As example of the parameter choice serve the data of the USA and the UK. The resulting parameters will be used estimated and used in W.\,O. Kermack and A.\,G. McKendrick's SIR model. Strategies for the commencing and ending of social and economic shutdown measures are discussed. The numerical solution of the ordinary differential equation system of the modified SIR model is being done with a Runge-Kutta integration method of fourth order. At the end the applicability of the SIR model could be shown essentially. Suggestions about appropriate points in time at which to commence with lockdown measures based on the acceleration rate of infections conclude the paper.
1311.2646
Melanie JI Muller PhD
J. David Van Dyken, Melanie J.I. Muller, Keenan M.L. Mack, Michael M. Desai
Spatial population expansion promotes the evolution of cooperation in an experimental Prisoner's Dilemma
null
Current Biology, Volume 23, Issue 10, 919-923, 2013
10.1016/j.cub.2013.04.026
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperation is ubiquitous in nature, but explaining its existence remains a central interdisciplinary challenge. Cooperation is most difficult to explain in the Prisoner's Dilemma game, where cooperators always lose in direct competition with defectors despite increasing mean fitness. Here we demonstrate how spatial population expansion, a widespread natural phenomenon, promotes the evolution of cooperation. We engineer an experimental Prisoner's Dilemma game in the budding yeast Saccharomyces cerevisiae to show that, despite losing to defectors in nonexpanding conditions, cooperators increase in frequency in spatially expanding populations. Fluorescently labeled colonies show genetic demixing of cooperators and defectors, followed by increase in cooperator frequency as cooperator sectors overtake neighboring defector sectors. Together with lattice-based spatial simulations, our results suggest that spatial population expansion drives the evolution of cooperation by (1) increasing positive genetic assortment at population frontiers and (2) selecting for phenotypes maximizing local deme productivity. Spatial expansion thus creates a selective force whereby cooperator-enriched demes overtake neighboring defector-enriched demes in a "survival of the fastest". We conclude that colony growth alone can promote cooperation and prevent defection in microbes. Our results extend to other species with spatially restricted dispersal undergoing range expansion, including pathogens, invasive species, and humans.
[ { "created": "Mon, 11 Nov 2013 23:41:57 GMT", "version": "v1" } ]
2013-11-13
[ [ "Van Dyken", "J. David", "" ], [ "Muller", "Melanie J. I.", "" ], [ "Mack", "Keenan M. L.", "" ], [ "Desai", "Michael M.", "" ] ]
Cooperation is ubiquitous in nature, but explaining its existence remains a central interdisciplinary challenge. Cooperation is most difficult to explain in the Prisoner's Dilemma game, where cooperators always lose in direct competition with defectors despite increasing mean fitness. Here we demonstrate how spatial population expansion, a widespread natural phenomenon, promotes the evolution of cooperation. We engineer an experimental Prisoner's Dilemma game in the budding yeast Saccharomyces cerevisiae to show that, despite losing to defectors in nonexpanding conditions, cooperators increase in frequency in spatially expanding populations. Fluorescently labeled colonies show genetic demixing of cooperators and defectors, followed by increase in cooperator frequency as cooperator sectors overtake neighboring defector sectors. Together with lattice-based spatial simulations, our results suggest that spatial population expansion drives the evolution of cooperation by (1) increasing positive genetic assortment at population frontiers and (2) selecting for phenotypes maximizing local deme productivity. Spatial expansion thus creates a selective force whereby cooperator-enriched demes overtake neighboring defector-enriched demes in a "survival of the fastest". We conclude that colony growth alone can promote cooperation and prevent defection in microbes. Our results extend to other species with spatially restricted dispersal undergoing range expansion, including pathogens, invasive species, and humans.
1809.03950
Tingting Wang
T. Wang, M. Herbster, I.S. Mian
Virus genome sequence classification using features based on nucleotides, words and compression
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ICTV develops, refines and maintains a universal virus taxonomy; Order is the highest taxon in the branching hierarchy of recognised viral taxa. Historically, ICTV (sub)committees have classified viruses on the basis of morphological characteristics and various other attributes. Today, virtually all new viral genomes are assembled from metagenomic datasets and are not linked directly to biological agents. Thus, placing a virus into a taxonomic scheme solely from primary genome structure is an increasingly important problem. Various simple descriptive statistics of a viral genome sequence have been used successfully for virus classification. Here, we use the NCBI's viral and viroid reference sequence collection (RefSeq) and a common experimental framework to compare the performance of different genome sequence-derived features and classifiers in the task of assigning a virus to one of seven ICTV Orders. The nucleotide-, word-, and compression-based features we consider include genome length, the k-mer Natural Vector (k = 1, ..., 6) and its derivatives, return time distribution, and general-purpose and DNA-specific compression ratios; the classifiers used are the k-NN and SVM. The combination of genome length and k-NN has the worst, yet still respectable, performance (mean error rate of 0.137); the best performance is achieved using 4-mer counts and SVM (mean error rate of 0.006). We investigate the main causes of misclassification, explore which viruses are more difficult to classify, and use the best performing combination to predict the Orders of 1,834 unclassified viruses. A subsequent version of RefSeq assigned Orders to 17 of these previously unlabelled viruses. Since 16 of our predictions match these assignments, our approach could aid virologists dealing with viruses that are known only from sequence data.
[ { "created": "Tue, 11 Sep 2018 14:58:14 GMT", "version": "v1" } ]
2018-09-12
[ [ "Wang", "T.", "" ], [ "Herbster", "M.", "" ], [ "Mian", "I. S.", "" ] ]
The ICTV develops, refines and maintains a universal virus taxonomy; Order is the highest taxon in the branching hierarchy of recognised viral taxa. Historically, ICTV (sub)committees have classified viruses on the basis of morphological characteristics and various other attributes. Today, virtually all new viral genomes are assembled from metagenomic datasets and are not linked directly to biological agents. Thus, placing a virus into a taxonomic scheme solely from primary genome structure is an increasingly important problem. Various simple descriptive statistics of a viral genome sequence have been used successfully for virus classification. Here, we use the NCBI's viral and viroid reference sequence collection (RefSeq) and a common experimental framework to compare the performance of different genome sequence-derived features and classifiers in the task of assigning a virus to one of seven ICTV Orders. The nucleotide-, word-, and compression-based features we consider include genome length, the k-mer Natural Vector (k = 1, ..., 6) and its derivatives, return time distribution, and general-purpose and DNA-specific compression ratios; the classifiers used are the k-NN and SVM. The combination of genome length and k-NN has the worst, yet still respectable, performance (mean error rate of 0.137); the best performance is achieved using 4-mer counts and SVM (mean error rate of 0.006). We investigate the main causes of misclassification, explore which viruses are more difficult to classify, and use the best performing combination to predict the Orders of 1,834 unclassified viruses. A subsequent version of RefSeq assigned Orders to 17 of these previously unlabelled viruses. Since 16 of our predictions match these assignments, our approach could aid virologists dealing with viruses that are known only from sequence data.
1701.00934
Gergely J Sz\"oll\H{o}si
Imre Der\'enyi and Gergely J. Sz\"oll\H{o}si
Hierarchical tissue organization as a general mechanism to limit the accumulation of somatic mutations
To appear in Nature Communications
null
10.1016/j.bpj.2016.11.1536
null
q-bio.TO q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
How can tissues generate large numbers of cells, yet keep the divisional load (the number of divisions along cell lineages) low in order to curtail the accumulation of somatic mutations and reduce the risk of cancer? To answer the question we consider a general model of hierarchically organized self-renewing tissues and show that the lifetime divisional load of such a tissue is independent of the details of the cell differentiation processes, and depends only on two structural and two dynamical parameters. Our results demonstrate that a strict analytical relationship exists between two seemingly disparate characteristics of self-renewing tissues: divisional load and tissue organization. Most remarkably, we find that a sufficient number of progressively slower dividing cell types can be almost as efficient in minimizing the divisional load, as non-renewing tissues. We argue that one of the main functions of tissue-specific stem cells and differentiation hierarchies is the prevention of cancer.
[ { "created": "Wed, 4 Jan 2017 08:59:29 GMT", "version": "v1" } ]
2017-04-05
[ [ "Derényi", "Imre", "" ], [ "Szöllősi", "Gergely J.", "" ] ]
How can tissues generate large numbers of cells, yet keep the divisional load (the number of divisions along cell lineages) low in order to curtail the accumulation of somatic mutations and reduce the risk of cancer? To answer the question we consider a general model of hierarchically organized self-renewing tissues and show that the lifetime divisional load of such a tissue is independent of the details of the cell differentiation processes, and depends only on two structural and two dynamical parameters. Our results demonstrate that a strict analytical relationship exists between two seemingly disparate characteristics of self-renewing tissues: divisional load and tissue organization. Most remarkably, we find that a sufficient number of progressively slower dividing cell types can be almost as efficient in minimizing the divisional load, as non-renewing tissues. We argue that one of the main functions of tissue-specific stem cells and differentiation hierarchies is the prevention of cancer.
2302.01117
Hao Tian
Hao Tian, Sian Xiao, Xi Jiang, Peng Tao
PASSerRank: Prediction of Allosteric Sites with Learning to Rank
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Allostery plays a crucial role in regulating protein activity, making it a highly sought-after target in drug development. One of the major challenges in allosteric drug research is the identification of allosteric sites. In recent years, many computational models have been developed for accurate allosteric site prediction. Most of these models focus on designing a general rule that can be applied to pockets of proteins from various families. In this study, we present a new approach using the concept of Learning to Rank (LTR). The LTR model ranks pockets based on their relevance to allosteric sites, i.e., how well a pocket meets the characteristics of known allosteric sites. The model outperforms other common machine learning models with higher F1 score and Matthews correlation coefficient. After the training and validation on two datasets, the Allosteric Database (ASD) and CASBench, the LTR model was able to rank an allosteric pocket in the top 3 positions for 83.6% and 80.5% of test proteins, respectively. The trained model is available on the PASSer platform (https://passer.smu.edu) to aid in drug discovery research.
[ { "created": "Thu, 2 Feb 2023 14:23:28 GMT", "version": "v1" }, { "created": "Sat, 29 Apr 2023 03:16:05 GMT", "version": "v2" } ]
2023-05-02
[ [ "Tian", "Hao", "" ], [ "Xiao", "Sian", "" ], [ "Jiang", "Xi", "" ], [ "Tao", "Peng", "" ] ]
Allostery plays a crucial role in regulating protein activity, making it a highly sought-after target in drug development. One of the major challenges in allosteric drug research is the identification of allosteric sites. In recent years, many computational models have been developed for accurate allosteric site prediction. Most of these models focus on designing a general rule that can be applied to pockets of proteins from various families. In this study, we present a new approach using the concept of Learning to Rank (LTR). The LTR model ranks pockets based on their relevance to allosteric sites, i.e., how well a pocket meets the characteristics of known allosteric sites. The model outperforms other common machine learning models with higher F1 score and Matthews correlation coefficient. After the training and validation on two datasets, the Allosteric Database (ASD) and CASBench, the LTR model was able to rank an allosteric pocket in the top 3 positions for 83.6% and 80.5% of test proteins, respectively. The trained model is available on the PASSer platform (https://passer.smu.edu) to aid in drug discovery research.
1904.07780
Dongya Jia
Dongya Jia, Xuefei Li, Federico Bocci, Shubham Tripathi, Youyuan Deng, Mohit Kumar Jolly, Jose N. Onuchic, Herbert Levine
Quantifying cancer epithelial-mesenchymal plasticity and its association with stemness and immune response
50 pages, 6 figures
null
null
null
q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer cells can acquire a spectrum of stable hybrid epithelial/mesenchymal (E/M) states during epithelial-mesenchymal transition (EMT). Cells in these hybrid E/M phenotypes often combine epithelial and mesenchymal features and tend to migrate collectively commonly as small clusters. Such collectively migrating cancer cells play a pivotal role in seeding metastases and their presence in cancer patients indicates an adverse prognostic factor. Moreover, cancer cells in hybrid E/M phenotypes tend to be more associated with stemness which endows them with tumor-initiation ability and therapy resistance. Most recently, cells undergoing EMT have been shown to promote immune suppression for better survival. A systematic understanding of the emergence of hybrid E/M phenotypes and the connection of EMT with stemness and immune suppression would contribute to more effective therapeutic strategies. In this review, we first discuss recent efforts combining theoretical and experimental approaches to elucidate mechanisms underlying EMT multi-stability (i.e. the existence of multiple stable phenotypes during EMT) and the properties of hybrid E/M phenotypes. Following we discuss non-cell-autonomous regulation of EMT by cell cooperation and extracellular matrix. Afterwards, we discuss various metrics that can be used to quantify EMT spectrum. We further describe possible mechanisms underlying the formation of clusters of circulating tumor cells. Last but not least, we summarize recent systems biology analysis of the role of EMT in the acquisition of stemness and immune suppression.
[ { "created": "Tue, 16 Apr 2019 16:04:50 GMT", "version": "v1" } ]
2019-04-17
[ [ "Jia", "Dongya", "" ], [ "Li", "Xuefei", "" ], [ "Bocci", "Federico", "" ], [ "Tripathi", "Shubham", "" ], [ "Deng", "Youyuan", "" ], [ "Jolly", "Mohit Kumar", "" ], [ "Onuchic", "Jose N.", "" ], [ "Levine", "Herbert", "" ] ]
Cancer cells can acquire a spectrum of stable hybrid epithelial/mesenchymal (E/M) states during epithelial-mesenchymal transition (EMT). Cells in these hybrid E/M phenotypes often combine epithelial and mesenchymal features and tend to migrate collectively commonly as small clusters. Such collectively migrating cancer cells play a pivotal role in seeding metastases and their presence in cancer patients indicates an adverse prognostic factor. Moreover, cancer cells in hybrid E/M phenotypes tend to be more associated with stemness which endows them with tumor-initiation ability and therapy resistance. Most recently, cells undergoing EMT have been shown to promote immune suppression for better survival. A systematic understanding of the emergence of hybrid E/M phenotypes and the connection of EMT with stemness and immune suppression would contribute to more effective therapeutic strategies. In this review, we first discuss recent efforts combining theoretical and experimental approaches to elucidate mechanisms underlying EMT multi-stability (i.e. the existence of multiple stable phenotypes during EMT) and the properties of hybrid E/M phenotypes. Following we discuss non-cell-autonomous regulation of EMT by cell cooperation and extracellular matrix. Afterwards, we discuss various metrics that can be used to quantify EMT spectrum. We further describe possible mechanisms underlying the formation of clusters of circulating tumor cells. Last but not least, we summarize recent systems biology analysis of the role of EMT in the acquisition of stemness and immune suppression.
1404.0449
Anirban Banerji
Anirban Banerji
Attractors in residual interactions explain the differentially-conserved stability of Immunoglobulins
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins belonging to immunoglobulin superfamily(IgSF) show remarkably conserved nature both in their folded structure and in their folding process, but they neither originate from very similar sequences nor demonstrate functional conservation. Treating proteins as fractal objects, without studying spatial conservation in positioning of particular residues in IgSF, this work probed the roots structural invariance of immunoglobulins(Ig). Symmetry in distribution of mass, hydrophobicity, polarizability recorded very similar extents in Ig and in structurally-closest non-Ig structures. They registered similar symmetries in dipole-dipole, {\pi}-{\pi}, cation-{\pi} cloud interactions and also in distribution of active chiral centers, charged residues and hydrophobic residues. But in contrast to non-Ig proteins, extents of residual interaction symmetries in Ig.s of largely varying sizes are found to converge to exactly same magnitude of correlation dimension - these are named 'structural attractors', who's weightages depend on ensuring exact convergence of pairwise-interaction symmetries to attractor magnitude. Small basin of attraction for Ig attractors explained the strict and consistent quality control in ensuring stability and functionality of IgSF proteins. Low dependency of attractor weightage on attractor magnitude demonstrated that residual-interaction symmetry with less pervasive nature can also be crucial in ensuring Ig stability.
[ { "created": "Wed, 2 Apr 2014 03:59:44 GMT", "version": "v1" } ]
2014-04-03
[ [ "Banerji", "Anirban", "" ] ]
Proteins belonging to immunoglobulin superfamily(IgSF) show remarkably conserved nature both in their folded structure and in their folding process, but they neither originate from very similar sequences nor demonstrate functional conservation. Treating proteins as fractal objects, without studying spatial conservation in positioning of particular residues in IgSF, this work probed the roots structural invariance of immunoglobulins(Ig). Symmetry in distribution of mass, hydrophobicity, polarizability recorded very similar extents in Ig and in structurally-closest non-Ig structures. They registered similar symmetries in dipole-dipole, {\pi}-{\pi}, cation-{\pi} cloud interactions and also in distribution of active chiral centers, charged residues and hydrophobic residues. But in contrast to non-Ig proteins, extents of residual interaction symmetries in Ig.s of largely varying sizes are found to converge to exactly same magnitude of correlation dimension - these are named 'structural attractors', who's weightages depend on ensuring exact convergence of pairwise-interaction symmetries to attractor magnitude. Small basin of attraction for Ig attractors explained the strict and consistent quality control in ensuring stability and functionality of IgSF proteins. Low dependency of attractor weightage on attractor magnitude demonstrated that residual-interaction symmetry with less pervasive nature can also be crucial in ensuring Ig stability.
1207.6319
William Bialek
Gasper Tkacik, Olivier Marre, Thierry Mora, Dario Amodei, Michael J. Berry II, and William Bialek
The simplest maximum entropy model for collective behavior in a neural network
null
null
10.1088/1742-5468/2013/03/P03011
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural "thermodynamics" for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy.
[ { "created": "Thu, 26 Jul 2012 16:28:11 GMT", "version": "v1" } ]
2015-06-05
[ [ "Tkacik", "Gasper", "" ], [ "Marre", "Olivier", "" ], [ "Mora", "Thierry", "" ], [ "Amodei", "Dario", "" ], [ "Berry", "Michael J.", "II" ], [ "Bialek", "William", "" ] ]
Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural "thermodynamics" for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy.
1001.0977
N. P. Ong
Shu-Wen Teng, Yufang Wang, Kimberly C. Tu, Tao Long, Pankaj Mehta, Ned S. Wingreen, Bonnie L. Bassler, N. P. Ong
Measurement of the copy number of the master quorum-sensing regulator of a bacterial cell
Main text 23 pages, 5 figures. Supporting material 19 pages, 7 figures. In new version, text revised, one figure reformatted
null
10.1016/j.bpj.2010.01.031
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quorum sensing is the mechanism by which bacteria communicate and synchronize group behaviors. Quantitative information on parameters such as the copy number of particular quorum-sensing proteins should contribute strongly to understanding how the quorum-sensing network functions. Here we show that the copy number of the master regulator protein LuxR in Vibrio harveyi, can be determined in vivo by exploiting small-number fluctuations of the protein distribution when cells undergo division. When a cell divides, both its volume and LuxR protein copy number N are partitioned with slight asymmetries. We have measured the distribution functions describing the partitioning of the protein fluorescence and the cell volume. The fluorescence distribution is found to narrow systematically as the LuxR population increases while the volume partitioning is unchanged. Analyzing these changes statistically, we have determined that N = 80-135 dimers at low cell density and 575 dimers at high cell density. In addition, we have measured the static distribution of LuxR over a large (3,000) clonal population. Combining the static and time-lapse experiments, we determine the magnitude of the Fano factor of the distribution. This technique has broad applicability as a general, in vivo technique for measuring protein copy number and burst size.
[ { "created": "Wed, 6 Jan 2010 21:07:28 GMT", "version": "v1" }, { "created": "Tue, 12 Jan 2010 21:57:55 GMT", "version": "v2" } ]
2015-05-14
[ [ "Teng", "Shu-Wen", "" ], [ "Wang", "Yufang", "" ], [ "Tu", "Kimberly C.", "" ], [ "Long", "Tao", "" ], [ "Mehta", "Pankaj", "" ], [ "Wingreen", "Ned S.", "" ], [ "Bassler", "Bonnie L.", "" ], [ "Ong", "N. P.", "" ] ]
Quorum sensing is the mechanism by which bacteria communicate and synchronize group behaviors. Quantitative information on parameters such as the copy number of particular quorum-sensing proteins should contribute strongly to understanding how the quorum-sensing network functions. Here we show that the copy number of the master regulator protein LuxR in Vibrio harveyi, can be determined in vivo by exploiting small-number fluctuations of the protein distribution when cells undergo division. When a cell divides, both its volume and LuxR protein copy number N are partitioned with slight asymmetries. We have measured the distribution functions describing the partitioning of the protein fluorescence and the cell volume. The fluorescence distribution is found to narrow systematically as the LuxR population increases while the volume partitioning is unchanged. Analyzing these changes statistically, we have determined that N = 80-135 dimers at low cell density and 575 dimers at high cell density. In addition, we have measured the static distribution of LuxR over a large (3,000) clonal population. Combining the static and time-lapse experiments, we determine the magnitude of the Fano factor of the distribution. This technique has broad applicability as a general, in vivo technique for measuring protein copy number and burst size.
1503.01096
J. C. Phillips
J. C. Phillips
Similarity Is Not Enough: Tipping Points of Ebola Zaire Mortalities
arXiv admin note: text overlap with arXiv:1410.1417
null
10.1016/j.physa.2015.02.056
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In early 2014 an outbreak of a slightly mutated Zaire Ebola subtype appeared in West Africa which is less virulent than 1976 and 1994 strains. The numbers of cases per year appear to be ~ 1000 times larger than the earlier strains, suggesting a greatly enhanced transmissibility. Although the fraction of the 2014 spike glycoprotein mutations is very small (~3%), the mortality is significantly reduced, while the transmission appears to have increased strongly. Bioinformatic scaling had previously shown similar inversely correlated trends in virulence and transmission in N1 (H1N1) and N2 (H3N2) influenza spike glycoprotein mutations. These trends appear to be related to various external factors (migration, availability of pure water, and vaccination programs). The molecular mechanisms for Ebola's mutational response involve mainly changes in the disordered mucin-like domain (MLD) of its spike glycoprotein amino acids. The MLD has been observed to form the tip of an oligomeric amphiphilic wedge that selectively pries apart cell-cell interfaces via an oxidative mechanism.
[ { "created": "Mon, 17 Nov 2014 17:57:31 GMT", "version": "v1" } ]
2015-06-24
[ [ "Phillips", "J. C.", "" ] ]
In early 2014 an outbreak of a slightly mutated Zaire Ebola subtype appeared in West Africa which is less virulent than 1976 and 1994 strains. The numbers of cases per year appear to be ~ 1000 times larger than the earlier strains, suggesting a greatly enhanced transmissibility. Although the fraction of the 2014 spike glycoprotein mutations is very small (~3%), the mortality is significantly reduced, while the transmission appears to have increased strongly. Bioinformatic scaling had previously shown similar inversely correlated trends in virulence and transmission in N1 (H1N1) and N2 (H3N2) influenza spike glycoprotein mutations. These trends appear to be related to various external factors (migration, availability of pure water, and vaccination programs). The molecular mechanisms for Ebola's mutational response involve mainly changes in the disordered mucin-like domain (MLD) of its spike glycoprotein amino acids. The MLD has been observed to form the tip of an oligomeric amphiphilic wedge that selectively pries apart cell-cell interfaces via an oxidative mechanism.
2403.01383
Shulin Wan
Jianlie Shen, Shulin Wan, Haidong Tan
Postharvest litchi (Litchi chinensis Sonn.) quality preservation by alginate oligosaccharides
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
This study investigates the efficacy of alginate oligosaccharides, derived from a novel alginate lyase expressed in E. coli (Pet21a-alginate lyase), in preserving the postharvest quality of litchi (Litchi chinensis Sonn.) fruits. The alginate lyase, characterized by Huang et al. (2013), was employed to produce AOS through enzymatic degradation of alginate. The resulting oligosaccharides were applied to litchi fruits harvested from Guangzhou Zengcheng to evaluate their impact on various quality parameters under controlled storage conditions. The study focused on measuring the effects of alginate oligosaccharide treatment on the fruits' color retention, water loss rate, hardness, and susceptibility to mold infection, under a set relative humidity and temperature. Results demonstrated significant improvements in the treated fruits, with enhanced color retention, reduced water loss, maintained hardness, and lower rates of mold infection compared to untreated controls. These findings suggest that AOS offer a promising natural alternative for extending the shelf life and maintaining the quality of litchi fruits postharvest.
[ { "created": "Sun, 3 Mar 2024 03:07:46 GMT", "version": "v1" } ]
2024-03-05
[ [ "Shen", "Jianlie", "" ], [ "Wan", "Shulin", "" ], [ "Tan", "Haidong", "" ] ]
This study investigates the efficacy of alginate oligosaccharides, derived from a novel alginate lyase expressed in E. coli (Pet21a-alginate lyase), in preserving the postharvest quality of litchi (Litchi chinensis Sonn.) fruits. The alginate lyase, characterized by Huang et al. (2013), was employed to produce AOS through enzymatic degradation of alginate. The resulting oligosaccharides were applied to litchi fruits harvested from Guangzhou Zengcheng to evaluate their impact on various quality parameters under controlled storage conditions. The study focused on measuring the effects of alginate oligosaccharide treatment on the fruits' color retention, water loss rate, hardness, and susceptibility to mold infection, under a set relative humidity and temperature. Results demonstrated significant improvements in the treated fruits, with enhanced color retention, reduced water loss, maintained hardness, and lower rates of mold infection compared to untreated controls. These findings suggest that AOS offer a promising natural alternative for extending the shelf life and maintaining the quality of litchi fruits postharvest.
2408.05633
Jaeyoung Yoon
Jaeyoung Yoon
Geometrical determinant of nonlinear synaptic integration in human cortical pyramidal neurons
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Neurons integrate synaptic inputs and convert them to action potential output at electrically distant locations. The computational power of a neuron is hence enhanced by subcellular compartmentalization and nonlinear synaptic integration, but the biophysical determinants of these features in human neurons are not completely understood. By examining the synaptic input-output function of human neocortical pyramidal neurons, we found that the nonlinearity threshold at the soma was linearly determined by the shortest path distance from the synapse to the apical trunk, and the slope of this relationship was consistent throughout the dendritic arbor. Analogous rules were found from both supragranular and infragranular layers of the rodent cortex, suggesting that these represent a fundamental property of pyramidal neurons. Additionally, we found that neurons associated with tumor or epilepsy had distinct membrane properties, but the nonlinearity threshold was shifted in amplitude such that the slope of its relationship with synaptic distance remained consistent.
[ { "created": "Sat, 10 Aug 2024 21:04:46 GMT", "version": "v1" } ]
2024-08-13
[ [ "Yoon", "Jaeyoung", "" ] ]
Neurons integrate synaptic inputs and convert them to action potential output at electrically distant locations. The computational power of a neuron is hence enhanced by subcellular compartmentalization and nonlinear synaptic integration, but the biophysical determinants of these features in human neurons are not completely understood. By examining the synaptic input-output function of human neocortical pyramidal neurons, we found that the nonlinearity threshold at the soma was linearly determined by the shortest path distance from the synapse to the apical trunk, and the slope of this relationship was consistent throughout the dendritic arbor. Analogous rules were found from both supragranular and infragranular layers of the rodent cortex, suggesting that these represent a fundamental property of pyramidal neurons. Additionally, we found that neurons associated with tumor or epilepsy had distinct membrane properties, but the nonlinearity threshold was shifted in amplitude such that the slope of its relationship with synaptic distance remained consistent.
1507.00044
Tommaso Biancalani
Farshid Jafarpour and Tommaso Biancalani and Nigel Goldenfeld
A noise-induced mechanism for biological homochirality of early life self-replicators
8 pages, 4 figures
Phys. Rev. Lett. 115, 158101 (2015)
10.1103/PhysRevLett.115.158101
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The observed single-handedness of biological amino acids and sugars has long been attributed to autocatalysis. However, the stability of homochiral states in deterministic autocatalytic systems relies on cross inhibition of the two chiral states, an unlikely scenario for early life self-replicators. Here, we present a theory for a stochastic individual-level model of autocatalysis due to early life self-replicators. Without chiral inhibition, the racemic state is the global attractor of the deterministic dynamics, but intrinsic multiplicative noise stabilizes the homochiral states, in both well-mixed and spatially-extended systems. We conclude that autocatalysis is a viable mechanism for homochirality, without imposing additional nonlinearities such as chiral inhibition.
[ { "created": "Tue, 30 Jun 2015 21:41:49 GMT", "version": "v1" }, { "created": "Mon, 17 Aug 2015 16:51:46 GMT", "version": "v2" } ]
2015-10-14
[ [ "Jafarpour", "Farshid", "" ], [ "Biancalani", "Tommaso", "" ], [ "Goldenfeld", "Nigel", "" ] ]
The observed single-handedness of biological amino acids and sugars has long been attributed to autocatalysis. However, the stability of homochiral states in deterministic autocatalytic systems relies on cross inhibition of the two chiral states, an unlikely scenario for early life self-replicators. Here, we present a theory for a stochastic individual-level model of autocatalysis due to early life self-replicators. Without chiral inhibition, the racemic state is the global attractor of the deterministic dynamics, but intrinsic multiplicative noise stabilizes the homochiral states, in both well-mixed and spatially-extended systems. We conclude that autocatalysis is a viable mechanism for homochirality, without imposing additional nonlinearities such as chiral inhibition.
2106.14236
Johannes Berg
Arman Angaji, Christoph Velling, Johannes Berg
Stochastic clonal dynamics and genetic turnover in exponentially growing populations
15 pages
J. Stat. Mech. 103502 (2021)
10.1088/1742-5468/ac257e
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider an exponentially growing population of cells undergoing mutations and ask about the effect of reproductive fluctuations (genetic drift) on its long-term evolution. We combine first step analysis with the stochastic dynamics of a birth-death process to analytically calculate the probability that the parent of a given genotype will go extinct. We compare the results with numerical simulations and show how this turnover of genetic clones can be used to infer the rates underlying the population dynamics. Our work is motivated by growing populations of tumour cells, the epidemic spread of viruses, and bacterial growth.
[ { "created": "Sun, 27 Jun 2021 13:42:33 GMT", "version": "v1" }, { "created": "Fri, 11 Mar 2022 17:25:37 GMT", "version": "v2" } ]
2022-03-14
[ [ "Angaji", "Arman", "" ], [ "Velling", "Christoph", "" ], [ "Berg", "Johannes", "" ] ]
We consider an exponentially growing population of cells undergoing mutations and ask about the effect of reproductive fluctuations (genetic drift) on its long-term evolution. We combine first step analysis with the stochastic dynamics of a birth-death process to analytically calculate the probability that the parent of a given genotype will go extinct. We compare the results with numerical simulations and show how this turnover of genetic clones can be used to infer the rates underlying the population dynamics. Our work is motivated by growing populations of tumour cells, the epidemic spread of viruses, and bacterial growth.
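As a rough illustration of the first-step analysis mentioned above: for a lineage founded by a single cell with birth rate b and death rate d, conditioning on the first event gives q = d/(b+d) + b/(b+d) q^2, whose relevant root is q = d/b when b > d. The sketch below is a hypothetical stand-in rather than the paper's calculation; it checks this root against a Monte Carlo estimate on the embedded jump chain, where each event is a birth with probability b/(b+d) regardless of the current clone size.

import numpy as np

rng = np.random.default_rng(1)

def extinction_prob_analytic(b, d):
    # first-step analysis: q = d/(b+d) + b/(b+d) * q**2  =>  q = d/b for b > d
    return min(1.0, d / b)

def extinction_prob_simulated(b, d, n_runs=10000, n_cap=200):
    # a clone is scored as surviving once it reaches n_cap cells,
    # after which extinction is negligible for b > d
    extinct = 0
    for _ in range(n_runs):
        n = 1
        while 0 < n < n_cap:
            n += 1 if rng.random() < b / (b + d) else -1
        extinct += (n == 0)
    return extinct / n_runs

b, d = 1.0, 0.4
print(extinction_prob_analytic(b, d), extinction_prob_simulated(b, d))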
2303.09803
Tony Lindeberg
Tony Lindeberg
Covariance properties under natural image transformations for the generalized Gaussian derivative model for visual receptive fields
38 pages, 14 figures
Frontiers in Computational Neuroscience 17:1189949, 2023
10.3389/fncom.2023.1189949
null
q-bio.NC cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a theory for how geometric image transformations can be handled by a first layer of linear receptive fields, in terms of true covariance properties, which, in turn, enable geometric invariance properties at higher levels in the visual hierarchy. Specifically, we develop this theory for a generalized Gaussian derivative model for visual receptive fields, which is derived in an axiomatic manner from first principles, that reflect symmetry properties of the environment, complemented by structural assumptions to guarantee internally consistent treatment of image structures over multiple spatio-temporal scales. It is shown how the studied generalized Gaussian derivative model for visual receptive fields obeys true covariance properties under spatial scaling transformations, spatial affine transformations, Galilean transformations and temporal scaling transformations, implying that a vision system, based on image and video measurements in terms of the receptive fields according to this model, can to first order of approximation handle the image and video deformations between multiple views of objects delimited by smooth surfaces, as well as between multiple views of spatio-temporal events, under varying relative motions between the objects and events in the world and the observer. We conclude by describing implications of the presented theory for biological vision, regarding connections between the variabilities of the shapes of biological visual receptive fields and the variabilities of spatial and spatio-temporal image structures under natural image transformations.
[ { "created": "Fri, 17 Mar 2023 07:08:17 GMT", "version": "v1" }, { "created": "Tue, 16 May 2023 07:49:24 GMT", "version": "v2" }, { "created": "Tue, 23 May 2023 06:49:07 GMT", "version": "v3" }, { "created": "Thu, 1 Jun 2023 09:01:01 GMT", "version": "v4" } ]
2023-08-29
[ [ "Lindeberg", "Tony", "" ] ]
This paper presents a theory for how geometric image transformations can be handled by a first layer of linear receptive fields, in terms of true covariance properties, which, in turn, enable geometric invariance properties at higher levels in the visual hierarchy. Specifically, we develop this theory for a generalized Gaussian derivative model for visual receptive fields, which is derived in an axiomatic manner from first principles, that reflect symmetry properties of the environment, complemented by structural assumptions to guarantee internally consistent treatment of image structures over multiple spatio-temporal scales. It is shown how the studied generalized Gaussian derivative model for visual receptive fields obeys true covariance properties under spatial scaling transformations, spatial affine transformations, Galilean transformations and temporal scaling transformations, implying that a vision system, based on image and video measurements in terms of the receptive fields according to this model, can to first order of approximation handle the image and video deformations between multiple views of objects delimited by smooth surfaces, as well as between multiple views of spatio-temporal events, under varying relative motions between the objects and events in the world and the observer. We conclude by describing implications of the presented theory for biological vision, regarding connections between the variabilities of the shapes of biological visual receptive fields and the variabilities of spatial and spatio-temporal image structures under natural image transformations.
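A small numerical illustration of the scale-covariance property described above, using a plain Gaussian derivative rather than the full generalized model: if a 1D signal is spatially stretched by a factor c, its scale-normalized first derivative probed at scale c*sigma and position c*x0 matches that of the original signal at scale sigma and position x0. The signal, the stretch factor, and the scales below are arbitrary assumptions made only for the demonstration.

import numpy as np
from scipy.ndimage import gaussian_filter1d

# a smooth 1D test signal f and its spatially stretched copy f_c(x) = f(x / c)
x = np.arange(4096, dtype=float)
f = np.exp(-((x - 1000.0) / 60.0) ** 2) + 0.5 * np.sin(x / 90.0)
c = 2.0
f_c = np.exp(-((x / c - 1000.0) / 60.0) ** 2) + 0.5 * np.sin(x / (90.0 * c))

sigma = 20.0
# scale-normalised first derivatives (gamma = 1): sigma * dL/dx
d_orig = sigma * gaussian_filter1d(f, sigma, order=1)
d_resc = (c * sigma) * gaussian_filter1d(f_c, c * sigma, order=1)

x0 = 900
# covariance: the stretched signal probed at (c * x0, c * sigma) should match
# the original probed at (x0, sigma), up to discretisation error
print(d_orig[x0], d_resc[int(c * x0)])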
2103.05852
Takashi Okada
Takashi Okada, Oskar Hallatschek
Dynamic sampling bias and overdispersion induced by skewed offspring distributions
39 pages, 22 Figures; v2: Figure 5 is replaced
Genetics 219 (4), 2021
10.1093/genetics/iyab135
RIKEN-iTHEMS-Report-21
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Natural populations often show enhanced genetic drift consistent with a strong skew in their offspring number distribution. The skew arises because the variability of family sizes is either inherently strong or amplified by population expansions, leading to so-called `jackpot' events. The resulting allele frequency fluctuations are large and, therefore, challenge standard models of population genetics, which assume sufficiently narrow offspring distributions. While the neutral dynamics backward in time can be readily analyzed using coalescent approaches, we still know little about the effect of broad offspring distributions on the dynamics forward in time, especially with selection. Here, we employ an exact asymptotic analysis combined with a scaling hypothesis to demonstrate that over-dispersed frequency trajectories emerge from the competition of conventional forces, such as selection or mutations, with an emerging time-dependent sampling bias against the minor allele. The sampling bias arises from the characteristic time-dependence of the largest sampled family size within each allelic type. Using this insight, we establish simple scaling relations for allele frequency fluctuations, fixation probabilities, extinction times, and the site frequency spectra that arise when offspring numbers are distributed according to a power law $\sim n^{-(1+\alpha)}$. To demonstrate that this coarse-grained model captures a wide variety of non-equilibrium dynamics, we validate our results in traveling waves, where the phenomenon of 'gene surfing' can produce any exponent $1<\alpha <2$. We argue that the concept of a dynamic sampling bias is useful generally to develop both intuition and statistical tests for the unusual dynamics of populations with skewed offspring distributions, which can confound commonly used tests for selection or demographic history.
[ { "created": "Wed, 10 Mar 2021 03:30:38 GMT", "version": "v1" }, { "created": "Wed, 24 Mar 2021 19:00:33 GMT", "version": "v2" }, { "created": "Wed, 12 Jan 2022 02:47:42 GMT", "version": "v3" } ]
2022-01-13
[ [ "Okada", "Takashi", "" ], [ "Hallatschek", "Oskar", "" ] ]
Natural populations often show enhanced genetic drift consistent with a strong skew in their offspring number distribution. The skew arises because the variability of family sizes is either inherently strong or amplified by population expansions, leading to so-called `jackpot' events. The resulting allele frequency fluctuations are large and, therefore, challenge standard models of population genetics, which assume sufficiently narrow offspring distributions. While the neutral dynamics backward in time can be readily analyzed using coalescent approaches, we still know little about the effect of broad offspring distributions on the dynamics forward in time, especially with selection. Here, we employ an exact asymptotic analysis combined with a scaling hypothesis to demonstrate that over-dispersed frequency trajectories emerge from the competition of conventional forces, such as selection or mutations, with an emerging time-dependent sampling bias against the minor allele. The sampling bias arises from the characteristic time-dependence of the largest sampled family size within each allelic type. Using this insight, we establish simple scaling relations for allele frequency fluctuations, fixation probabilities, extinction times, and the site frequency spectra that arise when offspring numbers are distributed according to a power law $\sim n^{-(1+\alpha)}$. To demonstrate that this coarse-grained model captures a wide variety of non-equilibrium dynamics, we validate our results in traveling waves, where the phenomenon of 'gene surfing' can produce any exponent $1<\alpha <2$. We argue that the concept of a dynamic sampling bias is useful generally to develop both intuition and statistical tests for the unusual dynamics of populations with skewed offspring distributions, which can confound commonly used tests for selection or demographic history.
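To make the `jackpot' effect above concrete, here is a hypothetical toy simulation (not the paper's model or analysis): each parent draws a family size from a Pareto-like distribution with tail ~ n^-(1+alpha) and the population is then resampled back to constant size. Allele-frequency trajectories become visibly overdispersed as alpha approaches 1. All parameter values are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(2)

def next_frequency(freq, n_pop, alpha):
    # each parent draws a family size with tail ~ n**-(1 + alpha)
    # (inverse-CDF sampling of a Pareto distribution); the focal allele's
    # share of the offspring pool sets the next generation's frequency
    carriers = rng.random(n_pop) < freq
    families = (1.0 / rng.random(n_pop)) ** (1.0 / alpha)
    share = families[carriers].sum() / families.sum()
    return rng.binomial(n_pop, share) / n_pop

def final_frequency(alpha, n_pop=1000, f0=0.3, n_gen=50):
    f = f0
    for _ in range(n_gen):
        f = next_frequency(f, n_pop, alpha)
    return f

# trajectories are far more overdispersed for alpha near 1 (jackpot-dominated)
for alpha in (1.1, 1.9):
    finals = [final_frequency(alpha) for _ in range(200)]
    print(alpha, np.var(finals))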
1903.04220
Dmitry Melnikov
Dmitry Melnikov, Antti J Niemi and Ara Sedrakyan
Topological Indices of Proteins
17 pages, many figures
null
null
ITEP-TH-22/18
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein molecules can be approximated by discrete polygonal chains of amino acids. Standard topological tools can be applied to the smoothening of the polygons to introduce a topological classification of proteins, for example, using the self-linking number of the corresponding framed curves. In this paper we add new details to the standard classification. Known definitions of the self-linking number apply to non-singular framings: for example, the Frenet framing cannot be used if the curve has inflection points. Meanwhile, in the discrete proteins the special points are naturally resolved. Consequently, a separate integer topological characteristic can be introduced, which takes into account the intrinsic features of the special points. For a large number of proteins we compute integer topological indices associated with the singularities of the Frenet framing. We show how a version of Calugareanu's theorem is satisfied for the associated self-linking number of a discrete curve. Since the singularities of the Frenet framing correspond to the structural motifs of proteins, we propose topological indices as a technical tool for the description of the folding dynamics of proteins.
[ { "created": "Mon, 11 Mar 2019 11:17:07 GMT", "version": "v1" } ]
2019-03-12
[ [ "Melnikov", "Dmitry", "" ], [ "Niemi", "Antti J", "" ], [ "Sedrakyan", "Ara", "" ] ]
Protein molecules can be approximated by discrete polygonal chains of amino acids. Standard topological tools can be applied to the smoothening of the polygons to introduce a topological classification of proteins, for example, using the self-linking number of the corresponding framed curves. In this paper we add new details to the standard classification. Known definitions of the self-linking number apply to non-singular framings: for example, the Frenet framing cannot be used if the curve has inflection points. Meanwhile, in the discrete proteins the special points are naturally resolved. Consequently, a separate integer topological characteristic can be introduced, which takes into account the intrinsic features of the special points. For a large number of proteins we compute integer topological indices associated with the singularities of the Frenet framing. We show how a version of Calugareanu's theorem is satisfied for the associated self-linking number of a discrete curve. Since the singularities of the Frenet framing correspond to the structural motifs of proteins, we propose topological indices as a technical tool for the description of the folding dynamics of proteins.
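The writhe entering self-linking computations like those above can be approximated for a closed discrete curve by a midpoint discretization of the Gauss double integral. The sketch below is a generic numerical recipe under that approximation, not the authors' implementation, and the trefoil-like test curve is an arbitrary example.

import numpy as np

def writhe(points):
    # midpoint approximation of the Gauss double integral for the writhe
    # of a closed polygonal curve given as an (n, 3) array of vertices
    r = np.asarray(points, dtype=float)
    seg = np.roll(r, -1, axis=0) - r       # edge vectors
    mid = r + 0.5 * seg                    # edge midpoints
    n = len(r)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = mid[i] - mid[j]
            total += np.dot(np.cross(seg[i], seg[j]), dr) / np.linalg.norm(dr) ** 3
    return total / (4.0 * np.pi)

# densely sampled trefoil-like closed curve as a test case
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
trefoil = np.stack([np.sin(t) + 2.0 * np.sin(2.0 * t),
                    np.cos(t) - 2.0 * np.cos(2.0 * t),
                    -np.sin(3.0 * t)], axis=1)
print(writhe(trefoil))   # close to the writhe of this trefoil embedding (about 3.4)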
1009.3161
Georg Fritz
Kirstin Fritz, Georg Fritz, Barbara Windschiegl, Claudia Steinem, Bert Nickel
Arrangement of Annexin A2 tetramer and its impact on the structure and diffusivity of supported lipid bilayers
27 pages, 7 figures; supplementary material available upon request from the authors
Soft Matter, 2010, 6, 4084-4094
10.1039/c0sm00047g
null
q-bio.BM physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Annexins are a family of proteins that bind to anionic phospholipid membranes in a Ca2+-dependent manner. Annexin A2 forms heterotetramers (Anx A2t) with the S100A10 (p11) protein dimer. The tetramer is capable of bridging phospholipid membranes and it has been suggested to play a role in Ca2+-dependent exocytosis and cell-cell adhesion of metastatic cells. Here, we employ x-ray reflectivity measurements to resolve the conformation of Anx A2t upon Ca2+-dependent binding to single supported lipid bilayers (SLBs) composed of different mixtures of anionic (POPS) and neutral (POPC) phospholipids. Based on our results we propose that Anx A2t binds in a side-by-side configuration, i.e., both Anx A2 monomers bind to the bilayer with the p11 dimer positioned on top. Furthermore, we observe a strong decrease of lipid mobility upon binding of Anx A2t to SLBs with varying POPS content. X-ray reflectivity measurements indicate that binding of Anx A2t also increases the density of the SLB. Interestingly, in the protein-facing leaflet of the SLB the lipid density is higher than in the substrate-facing leaflet. This asymmetric densification of the lipid bilayer by Anx A2t and Ca2+ might have important implications for the biochemical mechanism of Anx A2t-induced endo- and exocytosis.
[ { "created": "Thu, 16 Sep 2010 12:42:14 GMT", "version": "v1" } ]
2010-09-17
[ [ "Fritz", "Kirstin", "" ], [ "Fritz", "Georg", "" ], [ "Windschiegl", "Barbara", "" ], [ "Steinem", "Claudia", "" ], [ "Nickel", "Bert", "" ] ]
Annexins are a family of proteins that bind to anionic phospholipid membranes in a Ca2+-dependent manner. Annexin A2 forms heterotetramers (Anx A2t) with the S100A10 (p11) protein dimer. The tetramer is capable of bridging phospholipid membranes and it has been suggested to play a role in Ca2+-dependent exocytosis and cell-cell adhesion of metastatic cells. Here, we employ x-ray reflectivity measurements to resolve the conformation of Anx A2t upon Ca2+-dependent binding to single supported lipid bilayers (SLBs) composed of different mixtures of anionic (POPS) and neutral (POPC) phospholipids. Based on our results we propose that Anx A2t binds in a side-by-side configuration, i.e., both Anx A2 monomers bind to the bilayer with the p11 dimer positioned on top. Furthermore, we observe a strong decrease of lipid mobility upon binding of Anx A2t to SLBs with varying POPS content. X-ray reflectivity measurements indicate that binding of Anx A2t also increases the density of the SLB. Interestingly, in the protein-facing leaflet of the SLB the lipid density is higher than in the substrate-facing leaflet. This asymmetric densification of the lipid bilayer by Anx A2t and Ca2+ might have important implications for the biochemical mechanism of Anx A2t-induced endo- and exocytosis.
1208.3606
Serik Sagitov
Graham Jones, Serik Sagitov, and Bengt Oxelman
Statistical Inference of Allopolyploid Species Networks in the Presence of Incomplete Lineage Sorting
null
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polyploidy is an important speciation mechanism, particularly in land plants. Allopolyploid species are formed after hybridization between otherwise intersterile parental species. Recent theoretical progress has led to successful implementation of species tree models that take population genetic parameters into account. However, these models have not included allopolyploid hybridization and the special problems imposed when species trees of allopolyploids are inferred. Here, two new models for the statistical inference of the evolutionary history of allopolyploids are evaluated using simulations and demonstrated on two empirical data sets. It is assumed that there has been a single hybridization event between two diploid species resulting in a genomic allotetraploid. The evolutionary history can be represented as a network or as a multiply labeled tree, in which some pairs of tips are labeled with the same species. In one of the models (AlloppMUL), the multiply labeled tree is inferred directly. This is the simplest model and the most widely applicable, since fewer assumptions are made. The second model (AlloppNET) incorporates the hybridization event explicitly which means that fewer parameters need to be estimated. Both models are implemented in the BEAST framework. Simulations show that both models are useful and that AlloppNET is more accurate if the assumptions it is based on are valid. The models are demonstrated on previously analyzed data from the genus Pachycladon (Brassicaceae) and from the genus Silene (Caryophyllaceae).
[ { "created": "Fri, 17 Aug 2012 14:42:34 GMT", "version": "v1" } ]
2012-08-20
[ [ "Jones", "Graham", "" ], [ "Sagitov", "Serik", "" ], [ "Oxelman", "Bengt", "" ] ]
Polyploidy is an important speciation mechanism, particularly in land plants. Allopolyploid species are formed after hybridization between otherwise intersterile parental species. Recent theoretical progress has led to successful implementation of species tree models that take population genetic parameters into account. However, these models have not included allopolyploid hybridization and the special problems imposed when species trees of allopolyploids are inferred. Here, two new models for the statistical inference of the evolutionary history of allopolyploids are evaluated using simulations and demonstrated on two empirical data sets. It is assumed that there has been a single hybridization event between two diploid species resulting in a genomic allotetraploid. The evolutionary history can be represented as a network or as a multiply labeled tree, in which some pairs of tips are labeled with the same species. In one of the models (AlloppMUL), the multiply labeled tree is inferred directly. This is the simplest model and the most widely applicable, since fewer assumptions are made. The second model (AlloppNET) incorporates the hybridization event explicitly which means that fewer parameters need to be estimated. Both models are implemented in the BEAST framework. Simulations show that both models are useful and that AlloppNET is more accurate if the assumptions it is based on are valid. The models are demonstrated on previously analyzed data from the genus Pachycladon (Brassicaceae) and from the genus Silene (Caryophyllaceae).
2309.07099
Jan-Philipp Fr\"anken
Jan-Philipp Fr\"anken, Christopher G. Lucas, Neil R. Bramley, Steven T. Piantadosi
Modeling infant object perception as program induction
3 pages, 3 figures, accepted at CCN conference 2023
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Infants expect physical objects to be rigid and to persist through space and time, even in spite of occlusion. Developmentalists frequently attribute these expectations to a "core system" for object recognition. However, it is unclear if this move is necessary. If object representations emerge reliably from general inductive learning mechanisms exposed to small amounts of environment data, it could be that infants simply induce these assumptions very early. Here, we demonstrate that a domain-general learning system, previously used to model concept learning and language learning, can also induce models of these distinctive "core" properties of objects after exposure to a small number of examples. Across eight micro-worlds inspired by experiments from the developmental literature, our model generates concepts that capture core object properties, including rigidity and object persistence. Our findings suggest infant object perception may rely on a general cognitive process that creates models to maximize the likelihood of observations.
[ { "created": "Mon, 28 Aug 2023 23:11:42 GMT", "version": "v1" } ]
2023-09-14
[ [ "Fränken", "Jan-Philipp", "" ], [ "Lucas", "Christopher G.", "" ], [ "Bramley", "Neil R.", "" ], [ "Piantadosi", "Steven T.", "" ] ]
Infants expect physical objects to be rigid and to persist through space and time, even in spite of occlusion. Developmentalists frequently attribute these expectations to a "core system" for object recognition. However, it is unclear if this move is necessary. If object representations emerge reliably from general inductive learning mechanisms exposed to small amounts of environment data, it could be that infants simply induce these assumptions very early. Here, we demonstrate that a domain-general learning system, previously used to model concept learning and language learning, can also induce models of these distinctive "core" properties of objects after exposure to a small number of examples. Across eight micro-worlds inspired by experiments from the developmental literature, our model generates concepts that capture core object properties, including rigidity and object persistence. Our findings suggest infant object perception may rely on a general cognitive process that creates models to maximize the likelihood of observations.
2012.03324
Nabil Ibtehaz
Nabil Ibtehaz, S. M. Shakhawat Hossain Sourav, Md. Shamsuzzoha Bayzid, M. Sohel Rahman
Align-gram : Rethinking the Skip-gram Model for Protein Sequence Analysis
null
null
null
null
q-bio.QM cs.AI cs.LG q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The inception of next-generation sequencing technologies has exponentially increased the volume of biological sequence data. Protein sequences, often quoted as the `language of life', have been analyzed for a multitude of applications and inferences. Motivation: Owing to the rapid development of deep learning, in recent years there have been a number of breakthroughs in the domain of Natural Language Processing. Since these methods are capable of performing different tasks when trained with a sufficient amount of data, off-the-shelf models are used to perform various biological applications. In this study, we investigated the applicability of the popular Skip-gram model for protein sequence analysis and made an attempt to incorporate some biological insights into it. Results: We propose a novel $k$-mer embedding scheme, Align-gram, which is capable of mapping similar $k$-mers close to each other in a vector space. Furthermore, we experiment with other sequence-based protein representations and observe that the embeddings derived from Align-gram aid in modeling and training deep learning models better. Our experiments with a simple baseline LSTM model and a much more complex CNN model of DeepGoPlus show the potential of Align-gram in performing different types of deep learning applications for protein sequence analysis.
[ { "created": "Sun, 6 Dec 2020 17:04:17 GMT", "version": "v1" } ]
2020-12-08
[ [ "Ibtehaz", "Nabil", "" ], [ "Sourav", "S. M. Shakhawat Hossain", "" ], [ "Bayzid", "Md. Shamsuzzoha", "" ], [ "Rahman", "M. Sohel", "" ] ]
Background: The inception of next-generation sequencing technologies has exponentially increased the volume of biological sequence data. Protein sequences, often quoted as the `language of life', have been analyzed for a multitude of applications and inferences. Motivation: Owing to the rapid development of deep learning, in recent years there have been a number of breakthroughs in the domain of Natural Language Processing. Since these methods are capable of performing different tasks when trained with a sufficient amount of data, off-the-shelf models are used to perform various biological applications. In this study, we investigated the applicability of the popular Skip-gram model for protein sequence analysis and made an attempt to incorporate some biological insights into it. Results: We propose a novel $k$-mer embedding scheme, Align-gram, which is capable of mapping similar $k$-mers close to each other in a vector space. Furthermore, we experiment with other sequence-based protein representations and observe that the embeddings derived from Align-gram aid in modeling and training deep learning models better. Our experiments with a simple baseline LSTM model and a much more complex CNN model of DeepGoPlus show the potential of Align-gram in performing different types of deep learning applications for protein sequence analysis.
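As a concrete reminder of what the Skip-gram setup looks like on protein sequences: the sequence is tokenized into overlapping k-mers and (target, context) pairs are collected within a window, which is the training signal for the embedding. The sketch below covers only this generic step; Align-gram additionally uses alignment-based k-mer similarity, which is not reproduced here, and the toy sequence is an arbitrary assumption.

from itertools import islice

def kmers(seq, k=3):
    # overlapping k-mers of a protein sequence
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def skipgram_pairs(seq, k=3, window=5):
    # (target, context) k-mer pairs inside a symmetric window, i.e. the
    # training pairs a Skip-gram-style embedding would be fitted on
    toks = kmers(seq, k)
    pairs = []
    for i, target in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if j != i:
                pairs.append((target, toks[j]))
    return pairs

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy sequence, not from the paper
print(list(islice(skipgram_pairs(seq), 5)))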
q-bio/0404037
Maria Barbi
Maria Barbi, Julien Mozziconacci and Jean-Marc Victor
How does the chromatin fiber deal with topological constraints?
Presented in Nature "News and views in brief" Vol. 429 (13 May 2004). Movies available at http://www.lptl.jussieu.fr/recherche/operationE_fichiers/Page_figurePRL.html
Physical Review E, 71, 031910 (2005)
10.1103/PhysRevE.71.031910
null
q-bio.SC cond-mat.soft physics.bio-ph
null
In the nuclei of eukaryotic cells, DNA is packaged through several levels of compaction in an orderly retrievable way that enables the correct regulation of gene expression. The functional dynamics of this assembly involves the unwinding of the so-called 30 nm chromatin fiber and accordingly imposes strong topological constraints. We present a general method for computing both the twist and the writhe of any winding pattern. An explicit derivation is implemented for the chromatin fiber which provides the linking number of DNA in eukaryotic chromosomes. We show that there exists one and only one unwinding path which satisfies both topological and mechanical constraints that DNA has to deal with during condensation/decondensation processes.
[ { "created": "Mon, 26 Apr 2004 15:41:20 GMT", "version": "v1" }, { "created": "Wed, 12 May 2004 16:36:49 GMT", "version": "v2" } ]
2007-05-23
[ [ "Barbi", "Maria", "" ], [ "Mozziconacci", "Julien", "" ], [ "Victor", "Jean-Marc", "" ] ]
In the nuclei of eukaryotic cells, DNA is packaged through several levels of compaction in an orderly retrievable way that enables the correct regulation of gene expression. The functional dynamics of this assembly involves the unwinding of the so-called 30 nm chromatin fiber and accordingly imposes strong topological constraints. We present a general method for computing both the twist and the writhe of any winding pattern. An explicit derivation is implemented for the chromatin fiber which provides the linking number of DNA in eukaryotic chromosomes. We show that there exists one and only one unwinding path which satisfies both topological and mechanical constraints that DNA has to deal with during condensation/decondensation processes.
2004.01028
Christos Fotis
C. Fotis, N. Meimetis, A. Sardis and L.G. Alexopoulos
DeepSIBA: Chemical Structure-based Inference of Biological Alterations
Article: 19 pages, Electronic Supplementary Information (included): 16 pages
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting whether a chemical structure shares a desired biological effect can have a significant impact for in-silico compound screening in early drug discovery. In this study, we developed a deep learning model where compound structures are represented as graphs and then linked to their biological footprint. To make this complex problem computationally tractable, compound differences were mapped to biological effect alterations using Siamese Graph Convolutional Neural Networks. The proposed model was able to learn new representations from chemical structures and identify structurally dissimilar compounds that affect similar biological processes with high precision. Additionally, by utilizing deep ensembles to estimate uncertainty, we were able to provide reliable and accurate predictions for chemical structures that are very different from the ones used during training. Finally, we present a novel inference approach, where the trained models are used to estimate the signaling pathways affected by a compound perturbation in a specific cell line, using only its chemical structure as input. As a use case, this approach was used to infer signaling pathways affected by FDA-approved anticancer drugs.
[ { "created": "Wed, 1 Apr 2020 16:29:45 GMT", "version": "v1" } ]
2020-04-03
[ [ "Fotis", "C.", "" ], [ "Meimetis", "N.", "" ], [ "Sardis", "A.", "" ], [ "Alexopoulos", "L. G.", "" ] ]
Predicting whether a chemical structure shares a desired biological effect can have a significant impact for in-silico compound screening in early drug discovery. In this study, we developed a deep learning model where compound structures are represented as graphs and then linked to their biological footprint. To make this complex problem computationally tractable, compound differences were mapped to biological effect alterations using Siamese Graph Convolutional Neural Networks. The proposed model was able to learn new representations from chemical structures and identify structurally dissimilar compounds that affect similar biological processes with high precision. Additionally, by utilizing deep ensembles to estimate uncertainty, we were able to provide reliable and accurate predictions for chemical structures that are very different from the ones used during training. Finally, we present a novel inference approach, where the trained models are used to estimate the signaling pathways affected by a compound perturbation in a specific cell line, using only its chemical structure as input. As a use case, this approach was used to infer signaling pathways affected by FDA-approved anticancer drugs.
1808.03478
Erik Aurell
Chen-Yi Gao, Fabio Cecconi, Angelo Vulpiani, Hai-Jun Zhou, Erik Aurell
DCA for genome-wide epistasis analysis: the statistical genetics perspective
9 pages, 5 figures
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Direct Coupling Analysis (DCA) is a now widely used method to leverage statistical information from many similar biological systems to draw meaningful conclusions on each system separately. DCA has been applied with great success to sequences of homologous proteins, and also more recently to whole-genome population-wide sequencing data. We here argue that the use of DCA on the genome scale is contingent on fundamental issues of population genetics. DCA can be expected to yield meaningful results when a population is in the Quasi-Linkage Equilibrium (QLE) phase studied by Kimura and others, but not, for instance, in a phase of Clonal Competition. We discuss how the exponential (Potts model) distributions emerge in QLE, and compare couplings to correlations obtained in a study of about 3,000 genomes of the human pathogen Streptococcus pneumoniae.
[ { "created": "Fri, 10 Aug 2018 10:17:08 GMT", "version": "v1" } ]
2018-08-13
[ [ "Gao", "Chen-Yi", "" ], [ "Cecconi", "Fabio", "" ], [ "Vulpiani", "Angelo", "" ], [ "Zhou", "Hai-Jun", "" ], [ "Aurell", "Erik", "" ] ]
Direct Coupling Analysis (DCA) is a now widely used method to leverage statistical information from many similar biological systems to draw meaningful conclusions on each system separately. DCA has been applied with great success to sequences of homologous proteins, and also more recently to whole-genome population-wide sequencing data. We here argue that the use of DCA on the genome scale is contingent on fundamental issues of population genetics. DCA can be expected to yield meaningful results when a population is in the Quasi-Linkage Equilibrium (QLE) phase studied by Kimura and others, but not, for instance, in a phase of Clonal Competition. We discuss how the exponential (Potts model) distributions emerge in QLE, and compare couplings to correlations obtained in a study of about 3,000 genomes of the human pathogen Streptococcus pneumoniae.
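For readers unfamiliar with DCA, a crude naive mean-field version can be written in a few lines: one-hot encode the alignment, regularize the empirical covariance, and take the couplings as minus its inverse. The sketch below is only that simplified recipe applied to a toy random alignment, with an ad hoc pseudocount scheme; it is not the pipeline used in the paper.

import numpy as np

def naive_mean_field_dca(msa, q, pseudocount=0.5):
    # msa: integer-coded alignment, shape (n_sequences, n_sites), values 0..q-1
    n_seq, n_sites = msa.shape
    onehot = np.zeros((n_seq, n_sites, q))
    onehot[np.arange(n_seq)[:, None], np.arange(n_sites)[None, :], msa] = 1.0
    # drop one state per site so the covariance matrix can be inverted
    x = onehot[:, :, : q - 1].reshape(n_seq, -1)
    # ad hoc pseudocount regularisation of the empirical covariance
    cov = (1.0 - pseudocount) * np.cov(x, rowvar=False) \
        + (pseudocount / q) * np.eye(x.shape[1])
    # naive mean-field couplings: J = -C^{-1}
    J = -np.linalg.inv(cov)
    return J.reshape(n_sites, q - 1, n_sites, q - 1)

rng = np.random.default_rng(3)
msa = rng.integers(0, 2, size=(200, 10))     # toy binary alignment
print(naive_mean_field_dca(msa, q=2).shape)  # (10, 1, 10, 1)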
2009.02364
V\'ictor F. Bre\~na-Medina
Jes\'us Pantoja-Hern\'andez, V\'ictor F. Bre\~na-Medina, Mois\'es Santill\'an
Hybrid reaction-diffusion and clock-and-wavefront model for the arrest of oscillations in the somitogenesis segmentation clock
22 pages, 34 figures
null
10.1063/5.0045460
null
q-bio.CB nlin.PS
http://creativecommons.org/licenses/by/4.0/
The clock and wavefront paradigm is arguably the most widely accepted model for explaining the embryonic process of somitogenesis. According to this model, somitogenesis is based upon the interaction between a genetic oscillator, known as the segmentation clock, and a differentiation wavefront, which provides the positional information indicating where each pair of somites is formed. Shortly after the clock and wavefront paradigm was introduced, Meinhardt presented a conceptually different mathematical model for morphogenesis in general, and somitogenesis in particular. Recently, Cotterell et al. rediscovered an equivalent model by systematically enumerating and studying small networks performing segmentation. Cotterell et al. called it a progressive oscillatory reaction-diffusion (PORD) model. In the Meinhardt-PORD model, somitogenesis is driven by short-range interactions and the posterior movement of the front is a local, emergent phenomenon, which is not controlled by global positional information. With this model, it is possible to explain some experimental observations that are incompatible with the clock and wavefront model. However, the Meinhardt-PORD model has some important disadvantages of its own. Namely, it is quite sensitive to fluctuations and depends on very specific initial conditions (which are not biologically realistic). In this work, we propose an equivalent Meinhardt-PORD model, and then amend it to couple it with a wavefront consisting of a receding morphogen gradient. By doing so, we get a hybrid model between the Meinhardt-PORD and the clock-and-wavefront ones, which overcomes most of the deficiencies of the two originating models.
[ { "created": "Fri, 4 Sep 2020 19:03:51 GMT", "version": "v1" }, { "created": "Mon, 25 Jan 2021 22:17:41 GMT", "version": "v2" }, { "created": "Thu, 1 Apr 2021 18:55:21 GMT", "version": "v3" }, { "created": "Tue, 11 May 2021 16:59:44 GMT", "version": "v4" } ]
2021-06-30
[ [ "Pantoja-Hernández", "Jesús", "" ], [ "Breña-Medina", "Víctor F.", "" ], [ "Santillán", "Moisés", "" ] ]
The clock and wavefront paradigm is arguably the most widely accepted model for explaining the embryonic process of somitogenesis. According to this model, somitogenesis is based upon the interaction between a genetic oscillator, known as the segmentation clock, and a differentiation wavefront, which provides the positional information indicating where each pair of somites is formed. Shortly after the clock and wavefront paradigm was introduced, Meinhardt presented a conceptually different mathematical model for morphogenesis in general, and somitogenesis in particular. Recently, Cotterell et al. rediscovered an equivalent model by systematically enumerating and studying small networks performing segmentation. Cotterell et al. called it a progressive oscillatory reaction-diffusion (PORD) model. In the Meinhardt-PORD model, somitogenesis is driven by short-range interactions and the posterior movement of the front is a local, emergent phenomenon, which is not controlled by global positional information. With this model, it is possible to explain some experimental observations that are incompatible with the clock and wavefront model. However, the Meinhardt-PORD model has some important disadvantages of its own. Namely, it is quite sensitive to fluctuations and depends on very specific initial conditions (which are not biologically realistic). In this work, we propose an equivalent Meinhardt-PORD model, and then amend it to couple it with a wavefront consisting of a receding morphogen gradient. By doing so, we get a hybrid model between the Meinhardt-PORD and the clock-and-wavefront ones, which overcomes most of the deficiencies of the two originating models.
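As a point of reference for the clock-and-wavefront picture discussed above, here is a deliberately stripped-down toy, not the Meinhardt-PORD model or the hybrid model of the paper: each cell carries a phase oscillator that is frozen when a front moving at constant speed passes it, so the frozen phases lay down a periodic, somite-like pattern with wavelength equal to the front speed times the clock period. All parameter values below are arbitrary assumptions.

import numpy as np

def clock_and_wavefront(n_cells=300, omega=2.0 * np.pi, v_front=10.0,
                        dt=0.001, t_max=40.0):
    # every cell runs a phase clock at rate omega until the front
    # (moving at speed v_front) passes it; the phase is then frozen,
    # recording a periodic pattern with wavelength v_front * (2*pi/omega)
    x = np.arange(n_cells, dtype=float)
    phase = np.zeros(n_cells)
    frozen = np.zeros(n_cells, dtype=bool)
    t = 0.0
    while t < t_max:
        frozen |= x < v_front * t
        phase[~frozen] += omega * dt
        t += dt
    return np.sin(phase)

pattern = clock_and_wavefront()
# count somite-like boundaries as sign changes of the frozen pattern
print(np.count_nonzero(np.diff(np.sign(pattern)) != 0))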
1011.4143
Ambika G
V.Resmi, G.Ambika, R.E.Amritkar and G.Rangarajan
Neuronal networks with coupling through amyloid beta: towards a theory for Alzheimer's disease
5 pages, 4 figures
null
null
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alzheimer's disease (AD) is a common form of dementia observed in the elderly due to neurodegenerative disorder and dysfunction. This arises from alterations in synaptic functioning of neurons leading to cognitive impairment and memory loss. Recent experimental studies indicate that the amyloid beta (A beta) protein, in its dimer and oligomer forms, affects the synaptic activity of neurons in the early stages of AD. However, the precise mechanism underlying A beta induced synaptic depression is still not clearly understood. In this paper, we introduce an electrical model that provides a possible mechanism for this. Our studies show that the competing effects of synaptic activity and the indirect interaction mediated by A beta, can disrupt the synchrony among neurons and severely affect the neuronal activity. This then leads to sub-threshold activity or synaptic silencing. This is in agreement with the reported disruption of cortical synaptic integration in the presence of A beta in transgenic mice. We suggest that direct electrophysiological measurements of A beta activity can establish its role in AD and the possible revival, proposed in our model. The mechanism proposed here is quite general and could account for the role of the relevant protein in other neuronal disorders also.
[ { "created": "Thu, 18 Nov 2010 07:01:30 GMT", "version": "v1" } ]
2010-11-19
[ [ "Resmi", "V.", "" ], [ "Ambika", "G.", "" ], [ "Amritkar", "R. E.", "" ], [ "Rangarajan", "G.", "" ] ]
Alzheimer's disease (AD) is a common form of dementia observed in the elderly due to neurodegenerative disorder and dysfunction. This arises from alterations in synaptic functioning of neurons leading to cognitive impairment and memory loss. Recent experimental studies indicate that the amyloid beta (A beta) protein, in its dimer and oligomer forms, affects the synaptic activity of neurons in the early stages of AD. However, the precise mechanism underlying A beta induced synaptic depression is still not clearly understood. In this paper, we introduce an electrical model that provides a possible mechanism for this. Our studies show that the competing effects of synaptic activity and the indirect interaction mediated by A beta, can disrupt the synchrony among neurons and severely affect the neuronal activity. This then leads to sub-threshold activity or synaptic silencing. This is in agreement with the reported disruption of cortical synaptic integration in the presence of A beta in transgenic mice. We suggest that direct electrophysiological measurements of A beta activity can establish its role in AD and the possible revival, proposed in our model. The mechanism proposed here is quite general and could account for the role of the relevant protein in other neuronal disorders also.
2005.12634
Evangelos Magirou
Evangelos Magirou
Optimal Responses to an Infectious Disease
null
null
10.13140/RG.2.2.35776.97282
null
q-bio.PE math.OC
http://creativecommons.org/licenses/by/4.0/
We analyze an optimal-control version of a simple SIRS epidemiology model. The policy maker can adopt policies to diminish the contact rate between infected and susceptible individuals, at a specific economic cost. The arrival of a vaccine will immediately end the epidemic. Total or partial immunity is modeled, while the contact rate can exhibit a (user-specified) seasonality. The problem is solved in a spreadsheet environment. A reasonable parameter selection leads to optimal policies which are similar to those followed by different countries. A mild response relying on eventually reaching a high immunity level is optimal if ample health facilities are available. On the other hand, limited health care facilities lead to strict lockdowns, while moderate ones allow a flattening-of-the-curve approach.
[ { "created": "Tue, 26 May 2020 11:33:38 GMT", "version": "v1" }, { "created": "Mon, 8 Jun 2020 11:11:52 GMT", "version": "v2" } ]
2020-06-09
[ [ "Magirou", "Evangelos", "" ] ]
We analyze an optimal-control version of a simple SIRS epidemiology model. The policy maker can adopt policies to diminish the contact rate between infected and susceptible individuals, at a specific economic cost. The arrival of a vaccine will immediately end the epidemic. Total or partial immunity is modeled, while the contact rate can exhibit a (user-specified) seasonality. The problem is solved in a spreadsheet environment. A reasonable parameter selection leads to optimal policies which are similar to those followed by different countries. A mild response relying on eventually reaching a high immunity level is optimal if ample health facilities are available. On the other hand, limited health care facilities lead to strict lockdowns, while moderate ones allow a flattening-of-the-curve approach.
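The underlying dynamics in the abstract above is a standard SIRS system with a policy-dependent contact rate. The sketch below integrates it with a forward Euler step and compares no intervention with a fixed 40% contact-rate reduction; it is a stylized stand-in, not the paper's optimal-control solution, and all parameter values are assumptions.

def sirs_peak(beta0, control, gamma=0.1, xi=0.01, days=300, i0=1e-3, dt=0.1):
    # forward-Euler SIRS with a policy-dependent contact rate;
    # control(t) in [0, 1] is the imposed reduction of beta0,
    # xi is the rate at which immunity wanes (partial immunity)
    s, i, r = 1.0 - i0, i0, 0.0
    peak = 0.0
    for step in range(int(days / dt)):
        t = step * dt
        beta = beta0 * (1.0 - control(t))
        new_inf, recov, waned = beta * s * i * dt, gamma * i * dt, xi * r * dt
        s, i, r = s - new_inf + waned, i + new_inf - recov, r + recov - waned
        peak = max(peak, i)
    return peak

# no intervention vs. a constant 40% contact-rate reduction ("flattening")
print(sirs_peak(0.3, lambda t: 0.0))
print(sirs_peak(0.3, lambda t: 0.4))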
1112.5218
Graham Coop
Graham Coop, Peter Ralph
Patterns of neutral diversity under general models of selective sweeps
44 pages. 5 figures
Genetics September 1, 2012 vol. 192 no. 1 205-224
10.1534/genetics.112.141861
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Two major sources of stochasticity in the dynamics of neutral alleles result from resampling of finite populations (genetic drift) and the random genetic background of nearby selected alleles on which the neutral alleles are found (linked selection). There is now good evidence that linked selection plays an important role in shaping polymorphism levels in a number of species. One of the best investigated models of linked selection is the recurrent full sweep model, in which newly arisen selected alleles fix rapidly. However, the bulk of selected alleles that sweep into the population may not be destined for rapid fixation. Here we develop a general model of recurrent selective sweeps in a coalescent framework, one that generalizes the recurrent full sweep model to the case where selected alleles do not sweep to fixation. We show that in a large population, only the initial rapid increase of a selected allele affects the genealogy at partially linked sites, which under fairly general assumptions are unaffected by the subsequent fate of the selected allele. We also apply the theory to a simple model to investigate the impact of recurrent partial sweeps on levels of neutral diversity, and find that for a given reduction in diversity, the impact of recurrent partial sweeps on the frequency spectrum at neutral sites is determined primarily by the frequencies achieved by the selected alleles. Consequently, recurrent sweeps of selected alleles to low frequencies can have a profound effect on levels of diversity but can leave the frequency spectrum relatively unperturbed. In fact, the limiting coalescent model under a high rate of sweeps to low frequency is identical to the standard neutral model. The general model of selective sweeps we describe goes some way towards providing a more flexible framework to describe genomic patterns of diversity than is currently available.
[ { "created": "Thu, 22 Dec 2011 01:35:33 GMT", "version": "v1" }, { "created": "Wed, 9 May 2012 03:13:15 GMT", "version": "v2" }, { "created": "Sun, 17 Jun 2012 16:38:39 GMT", "version": "v3" }, { "created": "Sun, 13 Jan 2013 19:04:17 GMT", "version": "v4" } ]
2013-01-15
[ [ "Coop", "Graham", "" ], [ "Ralph", "Peter", "" ] ]
Two major sources of stochasticity in the dynamics of neutral alleles result from resampling of finite populations (genetic drift) and the random genetic background of nearby selected alleles on which the neutral alleles are found (linked selection). There is now good evidence that linked selection plays an important role in shaping polymorphism levels in a number of species. One of the best investigated models of linked selection is the recurrent full sweep model, in which newly arisen selected alleles fix rapidly. However, the bulk of selected alleles that sweep into the population may not be destined for rapid fixation. Here we develop a general model of recurrent selective sweeps in a coalescent framework, one that generalizes the recurrent full sweep model to the case where selected alleles do not sweep to fixation. We show that in a large population, only the initial rapid increase of a selected allele affects the genealogy at partially linked sites, which under fairly general assumptions are unaffected by the subsequent fate of the selected allele. We also apply the theory to a simple model to investigate the impact of recurrent partial sweeps on levels of neutral diversity, and find that for a given reduction in diversity, the impact of recurrent partial sweeps on the frequency spectrum at neutral sites is determined primarily by the frequencies achieved by the selected alleles. Consequently, recurrent sweeps of selected alleles to low frequencies can have a profound effect on levels of diversity but can leave the frequency spectrum relatively unperturbed. In fact, the limiting coalescent model under a high rate of sweeps to low frequency is identical to the standard neutral model. The general model of selective sweeps we describe goes some way towards providing a more flexible framework to describe genomic patterns of diversity than is currently available.
1505.04268
Nikolaos Sfakianakis
Nadja Hellmann, Niklas Kolbe, and Nikolaos Sfakianakis
A mathematical insight in the epithelial-mesenchymal-like transition in cancer cells and its effect in the invasion of the extracellular matrix
null
null
null
null
q-bio.CB math.NA q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current biological knowledge supports the existence of a secondary group of cancer cells within the body of the tumour that exhibits stem cell-like properties. These cells are termed Cancer Stem Cells (CSCs), and as opposed to the more usual Differentiated Cancer Cells (DCCs), they exhibit higher motility, are more resilient to therapy, and are able to metastasize to secondary locations within the organism and produce new tumours. The origin of the CSCs is not completely clear; they seem to stem from the DCCs via a transition process related to the Epithelial-Mesenchymal Transition (EMT) that can also be found in normal tissue. In the current work we model and numerically study the transition between these two types of cancer cells, and the resulting "ensemble" invasion of the extracellular matrix. This leads to the derivation and numerical simulation of two systems: an algebraic-elliptic system for the transition and an advection-reaction-diffusion system of Keller-Segel taxis type for the invasion.
[ { "created": "Sat, 16 May 2015 12:22:45 GMT", "version": "v1" } ]
2015-05-19
[ [ "Hellmann", "Nadja", "" ], [ "Kolbe", "Niklas", "" ], [ "Sfakianakis", "Nikolaos", "" ] ]
Current biological knowledge supports the existence of a secondary group of cancer cells within the body of the tumour that exhibits stem cell-like properties. These cells are termed Cancer Stem Cells (CSCs), and as opposed to the more usual Differentiated Cancer Cells (DCCs), they exhibit higher motility, are more resilient to therapy, and are able to metastasize to secondary locations within the organism and produce new tumours. The origin of the CSCs is not completely clear; they seem to stem from the DCCs via a transition process related to the Epithelial-Mesenchymal Transition (EMT) that can also be found in normal tissue. In the current work we model and numerically study the transition between these two types of cancer cells, and the resulting "ensemble" invasion of the extracellular matrix. This leads to the derivation and numerical simulation of two systems: an algebraic-elliptic system for the transition and an advection-reaction-diffusion system of Keller-Segel taxis type for the invasion.
2304.01347
Cheng Zhu
Cheng Zhu, Ying Tan, Shuqi Yang, Jiaqing Miao, Jiayi Zhu, Huan Huang, Dezhong Yao, and Cheng Luo
Temporal Dynamic Synchronous Functional Brain Network for Schizophrenia Diagnosis and Lateralization Analysis
null
null
null
null
q-bio.NC cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The available evidence suggests that dynamic functional connectivity (dFC) can capture time-varying abnormalities in brain activity in resting-state cerebral functional magnetic resonance imaging (rs-fMRI) data and has a natural advantage in uncovering mechanisms of abnormal brain activity in schizophrenia (SZ) patients. Hence, an advanced dynamic brain network analysis model called the temporal brain category graph convolutional network (Temporal-BCGCN) was employed. Firstly, a unique dynamic brain network analysis module, DSF-BrainNet, was designed to construct dynamic synchronization features. Subsequently, a revolutionary graph convolution method, TemporalConv, was proposed, based on the synchronous temporal properties of the features. Finally, the first modular abnormal hemispherical lateralization test tool in deep learning based on rs-fMRI data, named CategoryPool, was proposed. This study was validated on the COBRE and UCLA datasets and achieved 83.62% and 89.71% average accuracies, respectively, outperforming the baseline model and other state-of-the-art methods. The ablation results also demonstrate the advantages of TemporalConv over the traditional edge feature graph convolution approach and the improvement of CategoryPool over the classical graph pooling approach. Interestingly, this study showed that the lower-order perceptual system and higher-order network regions in the left hemisphere are more severely dysfunctional than in the right hemisphere in SZ, and reaffirms the importance of the left medial superior frontal gyrus in SZ. Our core code is available at: https://github.com/swfen/Temporal-BCGCN.
[ { "created": "Fri, 31 Mar 2023 02:54:01 GMT", "version": "v1" }, { "created": "Thu, 6 Apr 2023 09:06:44 GMT", "version": "v2" }, { "created": "Wed, 17 May 2023 02:58:26 GMT", "version": "v3" }, { "created": "Tue, 12 Sep 2023 01:20:11 GMT", "version": "v4" } ]
2023-09-13
[ [ "Zhu", "Cheng", "" ], [ "Tan", "Ying", "" ], [ "Yang", "Shuqi", "" ], [ "Miao", "Jiaqing", "" ], [ "Zhu", "Jiayi", "" ], [ "Huang", "Huan", "" ], [ "Yao", "Dezhong", "" ], [ "Luo", "Cheng", "" ] ]
The available evidence suggests that dynamic functional connectivity (dFC) can capture time-varying abnormalities in brain activity in resting-state cerebral functional magnetic resonance imaging (rs-fMRI) data and has a natural advantage in uncovering mechanisms of abnormal brain activity in schizophrenia (SZ) patients. Hence, an advanced dynamic brain network analysis model called the temporal brain category graph convolutional network (Temporal-BCGCN) was employed. Firstly, a unique dynamic brain network analysis module, DSF-BrainNet, was designed to construct dynamic synchronization features. Subsequently, a novel graph convolution method, TemporalConv, was proposed, based on the synchronous temporal properties of the features. Finally, the first modular abnormal hemispherical lateralization test tool in deep learning based on rs-fMRI data, named CategoryPool, was proposed. This study was validated on the COBRE and UCLA datasets and achieved 83.62% and 89.71% average accuracies, respectively, outperforming the baseline model and other state-of-the-art methods. The ablation results also demonstrate the advantages of TemporalConv over the traditional edge feature graph convolution approach and the improvement of CategoryPool over the classical graph pooling approach. Interestingly, this study showed that the lower-order perceptual system and higher-order network regions in the left hemisphere are more severely dysfunctional than those in the right hemisphere in SZ, reaffirming the importance of the left medial superior frontal gyrus in SZ. Our core code is available at: https://github.com/swfen/Temporal-BCGCN.
1602.06785
Massimo Stella
Massimo Stella, Cecilia S. Andreazzi, Sanja Selakovic, Alireza Goudarzi and Alberto Antonioni
Parasite Spreading in Spatial Ecological Multiplex Networks
null
null
null
null
q-bio.QM physics.bio-ph physics.soc-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network ecology is a rising field of quantitative biology representing ecosystems as complex networks. A suitable example is parasite spreading: several parasites may be transmitted among their hosts through different mechanisms, each one giving rise to a network of interactions. Modelling these networked ecological interactions at the same time is still an open challenge. We present a novel spatially-embedded multiplex network framework for modelling multi-host infection spreading through multiple routes of transmission. Our model is inspired by T. cruzi, a parasite transmitted by trophic and vectorial mechanisms. Our ecological network model is represented by a multiplex in which nodes represent species populations interacting through a food web and a parasite contaminative layer at the same time. We modelled SI dynamics in two different scenarios: a simple theoretical food web and an empirical one. Our simulations in both scenarios show that the infection is more widespread when both the trophic and the contaminative interactions are considered with equal rates. This indicates that trophic and contaminative transmission may have additive effects in real ecosystems. We also find that the ratio of vectors to hosts in the community (i) crucially influences the infection spread, (ii) regulates a percolating phase transition in the rate of parasite transmission and (iii) increases the infection rate in hosts. By immunising the same fractions of predator and prey populations, we show that the multiplex topology is fundamental in outlining the role that each host species plays in parasite transmission in a given ecosystem. We also show that the multiplex models provide a richer phenomenology in terms of parasite spreading dynamics compared to mono-layer models. Our work opens new challenges and provides new quantitative tools for modelling multi-channel spreading in networked systems.
[ { "created": "Mon, 22 Feb 2016 14:18:24 GMT", "version": "v1" }, { "created": "Fri, 9 Sep 2016 15:29:33 GMT", "version": "v2" } ]
2016-09-12
[ [ "Stella", "Massimo", "" ], [ "Andreazzi", "Cecilia S.", "" ], [ "Selakovic", "Sanja", "" ], [ "Goudarzi", "Alireza", "" ], [ "Antonioni", "Alberto", "" ] ]
Network ecology is a rising field of quantitative biology representing ecosystems as complex networks. A suitable example is parasite spreading: several parasites may be transmitted among their hosts through different mechanisms, each one giving rise to a network of interactions. Modelling these networked ecological interactions at the same time is still an open challenge. We present a novel spatially-embedded multiplex network framework for modelling multi-host infection spreading through multiple routes of transmission. Our model is inspired by T. cruzi, a parasite transmitted by trophic and vectorial mechanisms. Our ecological network model is represented by a multiplex in which nodes represent species populations interacting through a food web and a parasite contaminative layer at the same time. We modelled SI dynamics in two different scenarios: a simple theoretical food web and an empirical one. Our simulations in both scenarios show that the infection is more widespread when both the trophic and the contaminative interactions are considered with equal rates. This indicates that trophic and contaminative transmission may have additive effects in real ecosystems. We also find that the ratio of vectors to hosts in the community (i) crucially influences the infection spread, (ii) regulates a percolating phase transition in the rate of parasite transmission and (iii) increases the infection rate in hosts. By immunising the same fractions of predator and prey populations, we show that the multiplex topology is fundamental in outlining the role that each host species plays in parasite transmission in a given ecosystem. We also show that the multiplex models provide a richer phenomenology in terms of parasite spreading dynamics compared to mono-layer models. Our work opens new challenges and provides new quantitative tools for modelling multi-channel spreading in networked systems.
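As a rough illustration of the kind of dynamics described above, the sketch below runs discrete-time SI spreading over two network layers that share the same node set, with a separate transmission rate per layer. The random layer topologies, rates, and seed node are placeholder assumptions and do not correspond to the paper's empirical food web or its T. cruzi parameters.

```python
# Minimal SI spreading on a two-layer multiplex network (illustrative sketch only).
import random
import networkx as nx

def simulate_si_multiplex(layers, betas, seed_nodes, steps=50, rng=None):
    """Discrete-time SI dynamics; each layer transmits along its edges with its own rate."""
    rng = rng or random.Random(0)
    infected = set(seed_nodes)
    for _ in range(steps):
        newly = set()
        for layer, beta in zip(layers, betas):
            for u in infected:
                for v in layer.neighbors(u):
                    if v not in infected and rng.random() < beta:
                        newly.add(v)
        infected |= newly
    return infected

# Two random layers over the same 100 nodes: a sparse "trophic" layer and a
# denser "contaminative" layer, both with the same per-contact transmission rate.
trophic = nx.gnp_random_graph(100, 0.03, seed=1)
contaminative = nx.gnp_random_graph(100, 0.06, seed=2)
final = simulate_si_multiplex([trophic, contaminative], betas=[0.2, 0.2], seed_nodes=[0])
print(f"infected fraction: {len(final) / 100:.2f}")
```

Running the same routine with one layer's rate set to zero gives a quick, qualitative sense of how much each transmission route contributes to the final outbreak size.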
2302.01657
Virginie Uhlmann
Si\^an Culley, Alicia Cuber Caballero, Jemima J Burden, Virginie Uhlmann
Made to measure: an introduction to quantification in microscopy data
23 pages, 5 figures, 2 tables
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Images are at the core of most modern biological experiments and are used as a major source of quantitative information. Numerous algorithms are available to process images and make them more amenable to measurement. Yet the nature of the quantitative output that is useful for a given biological experiment is uniquely dependent upon the question being investigated. Here, we discuss the 3 main types of visual information that can be extracted from microscopy data: intensity, morphology, and object counts or categorical labels. For each, we describe where they come from, how they can be measured, and what may affect the relevance of these measurements in downstream data analysis. Acknowledging that what makes a measurement "good" is ultimately down to the biological question being investigated, this review aims at providing readers with a toolkit to challenge how they quantify their own data and be critical of conclusions drawn from quantitative bioimage analysis experiments.
[ { "created": "Fri, 3 Feb 2023 11:14:13 GMT", "version": "v1" } ]
2023-02-06
[ [ "Culley", "Siân", "" ], [ "Caballero", "Alicia Cuber", "" ], [ "Burden", "Jemima J", "" ], [ "Uhlmann", "Virginie", "" ] ]
Images are at the core of most modern biological experiments and are used as a major source of quantitative information. Numerous algorithms are available to process images and make them more amenable to measurement. Yet the nature of the quantitative output that is useful for a given biological experiment is uniquely dependent upon the question being investigated. Here, we discuss the 3 main types of visual information that can be extracted from microscopy data: intensity, morphology, and object counts or categorical labels. For each, we describe where they come from, how they can be measured, and what may affect the relevance of these measurements in downstream data analysis. Acknowledging that what makes a measurement "good" is ultimately down to the biological question being investigated, this review aims at providing readers with a toolkit to challenge how they quantify their own data and be critical of conclusions drawn from quantitative bioimage analysis experiments.
2103.03790
Geoflly Adonias
Geoflly L. Adonias, Harun Siljak, Michael Taynnan Barros, Sasitharan Balasubramaniam
Neuron Signal Propagation Analysis of Cytokine-Storm induced Demyelination
null
null
null
null
q-bio.NC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The COVID-19 pandemic has shaken the world unprecedentedly, affecting the vast global population both socially and economically. The pandemic has also opened our eyes to the many threats that novel virus infections can pose for humanity. While numerous unknowns are being investigated in terms of the distributed damage that the virus can do to the human body, recent studies have also shown that the infection can lead to lifelong sequelae that could affect other parts of the body, and one example is the brain. As part of this work, we investigate how viral infection can affect the brain by modelling and simulating a neuron's behaviour under cytokine-storm-induced demyelination. We quantify the effects of cytokine-induced demyelination on the propagation of action potential signals within a neuron. We used information and communication theory analysis on the signal propagated through the axonal pathway under different intensity levels of demyelination to analyse these effects. Our simulations demonstrate that virus-induced degeneration can affect the signal power, the spiking rate, and the probability of neurotransmitter release, compromising the propagation and processing of information between neurons. We also propose a transfer function that models these attenuation effects on the action potential; this model has the potential to be used as a framework for analysing virus-induced neurodegeneration and can pave the way to an improved understanding of virus-induced demyelination.
[ { "created": "Fri, 5 Mar 2021 16:39:36 GMT", "version": "v1" } ]
2021-03-08
[ [ "Adonias", "Geoflly L.", "" ], [ "Siljak", "Harun", "" ], [ "Barros", "Michael Taynnan", "" ], [ "Balasubramaniam", "Sasitharan", "" ] ]
The COVID-19 pandemic has shaken the world unprecedentedly, affecting the vast global population both socially and economically. The pandemic has also opened our eyes to the many threats that novel virus infections can pose for humanity. While numerous unknowns are being investigated in terms of the distributed damage that the virus can do to the human body, recent studies have also shown that the infection can lead to lifelong sequelae that could affect other parts of the body, and one example is the brain. As part of this work, we investigate how viral infection can affect the brain by modelling and simulating a neuron's behaviour under cytokine-storm-induced demyelination. We quantify the effects of cytokine-induced demyelination on the propagation of action potential signals within a neuron. We used information and communication theory analysis on the signal propagated through the axonal pathway under different intensity levels of demyelination to analyse these effects. Our simulations demonstrate that virus-induced degeneration can affect the signal power, the spiking rate, and the probability of neurotransmitter release, compromising the propagation and processing of information between neurons. We also propose a transfer function that models these attenuation effects on the action potential; this model has the potential to be used as a framework for analysing virus-induced neurodegeneration and can pave the way to an improved understanding of virus-induced demyelination.
0801.2403
Emmanuel Tannenbaum
Pavel Gorodetsky and Emmanuel Tannenbaum
The Effect of Mutators on Adaptability in Time-Varying Fitness Landscapes
4 pages, 3 figures
null
10.1103/PhysRevE.77.042901
null
q-bio.PE q-bio.GN
null
This Letter studies the quasispecies dynamics of a population capable of genetic repair evolving on a time-dependent fitness landscape. We develop a model that considers an asexual population of single-stranded, conservatively replicating genomes, whose only source of genetic variation is due to copying errors during replication. We consider a time-dependent, single-fitness-peak landscape where the master sequence changes by a single point mutation after every time interval $\tau$. We are able to analytically solve for the evolutionary dynamics of the population in the point-mutation limit. In particular, our model provides an analytical expression for the fraction of mutators in the dynamic fitness landscape that agrees well with results from stochastic simulations.
[ { "created": "Tue, 15 Jan 2008 22:42:44 GMT", "version": "v1" } ]
2009-11-13
[ [ "Gorodetsky", "Pavel", "" ], [ "Tannenbaum", "Emmanuel", "" ] ]
This Letter studies the quasispecies dynamics of a population capable of genetic repair evolving on a time-dependent fitness landscape. We develop a model that considers an asexual population of single-stranded, conservatively replicating genomes, whose only source of genetic variation is due to copying errors during replication. We consider a time-dependent, single-fitness-peak landscape where the master sequence changes by a single point mutation after every time interval $\tau$. We are able to analytically solve for the evolutionary dynamics of the population in the point-mutation limit. In particular, our model provides an analytical expression for the fraction of mutators in the dynamic fitness landscape that agrees well with results from stochastic simulations.
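For readers who prefer a simulation view of this setup, the toy Wright-Fisher-style script below evolves bit-string genomes with mutator and repairer subpopulations on a single-fitness-peak landscape whose master sequence shifts by one point mutation every $\tau$ generations. All parameter values and the simulation scheme are illustrative assumptions; the paper's results are derived analytically in the point-mutation limit rather than by simulation.

```python
# Toy stochastic simulation: mutators vs. repairers on a moving single fitness peak.
import numpy as np

rng = np.random.default_rng(0)
L, N, tau, generations = 20, 2000, 40, 400
mu, repair_eff = 0.05, 0.05          # per-base error rate; repairers reduce it 20-fold
peak_fitness = 10.0

master = np.zeros(L, dtype=int)       # current master sequence (the fitness peak)
genomes = np.zeros((N, L), dtype=int)
is_mutator = rng.random(N) < 0.5      # True = lacks repair machinery

for t in range(generations):
    if t and t % tau == 0:            # the peak moves by one point mutation every tau
        master[rng.integers(L)] ^= 1
    fitness = np.where((genomes == master).all(axis=1), peak_fitness, 1.0)
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())
    genomes, is_mutator = genomes[parents].copy(), is_mutator[parents].copy()
    per_base = np.where(is_mutator, mu, mu * repair_eff)[:, None]
    flips = rng.random((N, L)) < per_base
    genomes ^= flips.astype(int)      # replication errors

print(f"mutator fraction after {generations} generations: {is_mutator.mean():.2f}")
```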
1311.4514
Andrew Mugler
Pieter Rein ten Wolde, Andrew Mugler
The importance of crowding in signaling, genetic, and metabolic networks
19 pages, 6 figures
null
null
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is now well established that the cell is a highly crowded environment. Yet, the effects of crowding on the dynamics of signaling pathways, gene regulation networks and metabolic networks are still largely unknown. Crowding can alter both molecular diffusion and the equilibria of biomolecular reactions. In this review, we first discuss how diffusion can affect biochemical networks. Diffusion of transcription factors can increase noise in gene expression, while diffusion of proteins between intracellular compartments or between cells can reduce concentration fluctuations. In push-pull networks diffusion can impede information transmission, while in multi-site protein modification networks diffusion can qualitatively change the macroscopic response of the system, such as the loss or emergence of bistability. Moreover, diffusion can directly change the metabolic flux. We describe how crowding affects diffusion, and thus how all these phenomena are influenced by crowding. Yet, a potentially more important effect of crowding on biochemical networks is mediated via the shift in the equilibria of bimolecular reactions, and we provide computational evidence that supports this idea. Finally, we discuss how the effects of crowding can be incorporated in models of biochemical networks.
[ { "created": "Mon, 18 Nov 2013 19:48:07 GMT", "version": "v1" } ]
2013-11-19
[ [ "Wolde", "Pieter Rein ten", "" ], [ "Mugler", "Andrew", "" ] ]
It is now well established that the cell is a highly crowded environment. Yet, the effects of crowding on the dynamics of signaling pathways, gene regulation networks and metabolic networks are still largely unknown. Crowding can alter both molecular diffusion and the equilibria of biomolecular reactions. In this review, we first discuss how diffusion can affect biochemical networks. Diffusion of transcription factors can increase noise in gene expression, while diffusion of proteins between intracellular compartments or between cells can reduce concentration fluctuations. In push-pull networks diffusion can impede information transmission, while in multi-site protein modification networks diffusion can qualitatively change the macroscopic response of the system, such as the loss or emergence of bistability. Moreover, diffusion can directly change the metabolic flux. We describe how crowding affects diffusion, and thus how all these phenomena are influenced by crowding. Yet, a potentially more important effect of crowding on biochemical networks is mediated via the shift in the equilibria of bimolecular reactions, and we provide computational evidence that supports this idea. Finally, we discuss how the effects of crowding can be incorporated in models of biochemical networks.
2007.13596
Swetaprovo Chaudhuri
Swetaprovo Chaudhuri, Saptarshi Basu, Abhishek Saha
Analyzing the dominant SARS-CoV-2 transmission routes towards an ab initio SEIR model
null
null
10.1063/5.0034032
null
q-bio.PE physics.flu-dyn physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying the relative importance of the different transmission routes of the SARS-CoV-2 virus is an urgent research priority. To that end, the different transmission routes, and their role in determining the evolution of the Covid-19 pandemic, are analyzed in this work. Probability of infection caused by inhaling virus-laden droplets (initial, ejection diameters between $0.5-750\mu m$) and the corresponding desiccated nuclei that mostly encapsulate the virions post droplet evaporation, are individually calculated. In a typical air-conditioned yet quiescent indoor space, for average viral loading, cough droplets of initial diameter between $10-50 \mu m$ have the highest infection probability. However, by the time they are inhaled, the diameters reduce to about $1/6^{th}$ of their initial diameters. While the initially near-unity infection probability due to droplets rapidly decays within the first $25s$, the small yet persistent infection probability of the desiccated nuclei decays appreciably only by $\mathcal{O}(1000s)$, assuming the virus survives equally well within the dried droplet nuclei as in the droplets. Combined with molecular collision theory adapted to calculate the frequency of contact between the susceptible population and the droplet/nuclei cloud, infection rate constants are derived ab initio, leading to an SEIR model applicable to any respiratory event-vector combination. Viral load, minimum infectious dose, sensitivity of the virus half-life to the phase of its vector and dilution of the respiratory jet/puff by the entraining air are shown to mechanistically determine specific physical modes of transmission and variation in the basic reproduction number $\mathcal{R}_0$, from first-principles calculations.
[ { "created": "Mon, 27 Jul 2020 14:28:28 GMT", "version": "v1" }, { "created": "Sat, 1 Aug 2020 01:45:52 GMT", "version": "v2" }, { "created": "Tue, 1 Sep 2020 15:24:07 GMT", "version": "v3" } ]
2020-12-08
[ [ "Chaudhuri", "Swetaprovo", "" ], [ "Basu", "Saptarshi", "" ], [ "Saha", "Abhishek", "" ] ]
Identifying the relative importance of the different transmission routes of the SARS-CoV-2 virus is an urgent research priority. To that end, the different transmission routes, and their role in determining the evolution of the Covid-19 pandemic, are analyzed in this work. Probability of infection caused by inhaling virus-laden droplets (initial, ejection diameters between $0.5-750\mu m$) and the corresponding desiccated nuclei that mostly encapsulate the virions post droplet evaporation, are individually calculated. In a typical air-conditioned yet quiescent indoor space, for average viral loading, cough droplets of initial diameter between $10-50 \mu m$ have the highest infection probability. However, by the time they are inhaled, the diameters reduce to about $1/6^{th}$ of their initial diameters. While the initially near-unity infection probability due to droplets rapidly decays within the first $25s$, the small yet persistent infection probability of the desiccated nuclei decays appreciably only by $\mathcal{O}(1000s)$, assuming the virus survives equally well within the dried droplet nuclei as in the droplets. Combined with molecular collision theory adapted to calculate the frequency of contact between the susceptible population and the droplet/nuclei cloud, infection rate constants are derived ab initio, leading to an SEIR model applicable to any respiratory event-vector combination. Viral load, minimum infectious dose, sensitivity of the virus half-life to the phase of its vector and dilution of the respiratory jet/puff by the entraining air are shown to mechanistically determine specific physical modes of transmission and variation in the basic reproduction number $\mathcal{R}_0$, from first-principles calculations.
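For orientation, a minimal numerical SEIR integration is sketched below. The rate constants are generic placeholders and are not the ab initio infection rate constants derived in the paper.

```python
# Minimal SEIR integration (illustrative only; beta, sigma, gamma are assumed values).
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    S, E, I, R = y
    return [-beta * S * I,
            beta * S * I - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

beta, sigma, gamma = 0.4, 1 / 5.2, 1 / 10     # assumed per-day rates
y0 = [0.999, 0.0, 0.001, 0.0]                 # fractions of the population
sol = solve_ivp(seir, (0, 160), y0, args=(beta, sigma, gamma), dense_output=True)

t = np.linspace(0, 160, 5)
S, E, I, R = sol.sol(t)
print("day  infected fraction")
for ti, Ii in zip(t, I):
    print(f"{ti:5.0f}  {Ii:.4f}")
print(f"basic reproduction number R0 = beta/gamma = {beta / gamma:.2f}")
```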
2403.07013
Shaorong Chen
Jun Xia, Shaorong Chen, Jingbo Zhou, Tianze Ling, Wenjie Du, Sizhe Liu, Stan Z. Li
AdaNovo: Adaptive \emph{De Novo} Peptide Sequencing with Conditional Mutual Information
null
null
null
null
q-bio.QM cs.LG q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tandem mass spectrometry has played a pivotal role in advancing proteomics, enabling the analysis of protein composition in biological samples. Despite the development of various deep learning methods for identifying amino acid sequences (peptides) responsible for observed spectra, challenges persist in \emph{de novo} peptide sequencing. Firstly, prior methods struggle to identify amino acids with post-translational modifications (PTMs) due to their lower frequency in training data compared to canonical amino acids, further resulting in decreased peptide-level identification precision. Secondly, diverse types of noise and missing peaks in mass spectra reduce the reliability of training data (peptide-spectrum matches, PSMs). To address these challenges, we propose AdaNovo, a novel framework that calculates conditional mutual information (CMI) between the spectrum and each amino acid/peptide, using CMI for adaptive model training. Extensive experiments demonstrate AdaNovo's state-of-the-art performance on a 9-species benchmark, where the peptides in the training set are almost completely disjoint from the peptides of the test sets. Moreover, AdaNovo excels in identifying amino acids with PTMs and exhibits robustness against data noise. The supplementary materials contain the official code.
[ { "created": "Sat, 9 Mar 2024 11:54:58 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 05:46:37 GMT", "version": "v2" } ]
2024-03-18
[ [ "Xia", "Jun", "" ], [ "Chen", "Shaorong", "" ], [ "Zhou", "Jingbo", "" ], [ "Ling", "Tianze", "" ], [ "Du", "Wenjie", "" ], [ "Liu", "Sizhe", "" ], [ "Li", "Stan Z.", "" ] ]
Tandem mass spectrometry has played a pivotal role in advancing proteomics, enabling the analysis of protein composition in biological samples. Despite the development of various deep learning methods for identifying amino acid sequences (peptides) responsible for observed spectra, challenges persist in \emph{de novo} peptide sequencing. Firstly, prior methods struggle to identify amino acids with post-translational modifications (PTMs) due to their lower frequency in training data compared to canonical amino acids, further resulting in decreased peptide-level identification precision. Secondly, diverse types of noise and missing peaks in mass spectra reduce the reliability of training data (peptide-spectrum matches, PSMs). To address these challenges, we propose AdaNovo, a novel framework that calculates conditional mutual information (CMI) between the spectrum and each amino acid/peptide, using CMI for adaptive model training. Extensive experiments demonstrate AdaNovo's state-of-the-art performance on a 9-species benchmark, where the peptides in the training set are almost completely disjoint from the peptides of the test sets. Moreover, AdaNovo excels in identifying amino acids with PTMs and exhibits robustness against data noise. The supplementary materials contain the official code.
2212.07809
Yunchi Zhu
Yunchi Zhu, Chengda Tong, Zuohan Zhao, Zuhong Lu
MineProt: modern application for custom protein curation
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
AI systems represented by AlphaFold are rapidly expanding the scale of protein structure modelling data, and the MineProt project provides an effective solution for custom curation of these novel high-throughput data. It enables researchers to build their own protein server in simple steps, run almost out-of-the-box scripts to annotate and curate their proteins, visualize, browse and search their data via a user-friendly online interface, and utilize plugins to extend the functionality of the server. It is expected to support researcher productivity and facilitate data sharing in the new era of structural proteomics. MineProt is open-sourced at https://github.com/huiwenke/MineProt.
[ { "created": "Wed, 14 Dec 2022 07:44:09 GMT", "version": "v1" } ]
2022-12-16
[ [ "Zhu", "Yunchi", "" ], [ "Tong", "Chengda", "" ], [ "Zhao", "Zuohan", "" ], [ "Lu", "Zuhong", "" ] ]
AI systems represented by AlphaFold are rapidly expanding the scale of protein structure modelling data, and the MineProt project provides an effective solution for custom curation of these novel high-throughput data. It enables researchers to build their own protein server in simple steps, run almost out-of-the-box scripts to annotate and curate their proteins, visualize, browse and search their data via a user-friendly online interface, and utilize plugins to extend the functionality of the server. It is expected to support researcher productivity and facilitate data sharing in the new era of structural proteomics. MineProt is open-sourced at https://github.com/huiwenke/MineProt.
1703.05097
Taoyang Wu
Vincent Moulton and James Oldman and Taoyang Wu
A cubic-time algorithm for computing the trinet distance between level-1 networks
11 pages, 5 figures
null
null
null
q-bio.PE cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In evolutionary biology, phylogenetic networks are constructed to represent the evolution of species in which reticulate events are thought to have occurred, such as recombination and hybridization. It is therefore useful to have efficiently computable metrics with which to systematically compare such networks. Through developing an optimal algorithm to enumerate all trinets displayed by a level-1 network (a type of network that is slightly more general than an evolutionary tree), here we propose a cubic-time algorithm to compute the trinet distance between two level-1 networks. Employing simulations, we also present a comparison between the trinet metric and the so-called Robinson-Foulds phylogenetic network metric restricted to level-1 networks. The algorithms described in this paper have been implemented in JAVA and are freely available at https://www.uea.ac.uk/computing/TriLoNet.
[ { "created": "Wed, 15 Mar 2017 11:57:53 GMT", "version": "v1" } ]
2017-03-16
[ [ "Moulton", "Vincent", "" ], [ "Oldman", "James", "" ], [ "Wu", "Taoyang", "" ] ]
In evolutionary biology, phylogenetic networks are constructed to represent the evolution of species in which reticulate events are thought to have occurred, such as recombination and hybridization. It is therefore useful to have efficiently computable metrics with which to systematically compare such networks. Through developing an optimal algorithm to enumerate all trinets displayed by a level-1 network (a type of network that is slightly more general than an evolutionary tree), here we propose a cubic-time algorithm to compute the trinet distance between two level-1 networks. Employing simulations, we also present a comparison between the trinet metric and the so-called Robinson-Foulds phylogenetic network metric restricted to level-1 networks. The algorithms described in this paper have been implemented in JAVA and are freely available at https://www.uea.ac.uk/computing/TriLoNet.
q-bio/0511035
Peng-Ye Wang
Xin Zhao, Shuo-Xing Dou, Ping Xie, Peng-Ye Wang
On the Fibril Elongation Mechanism of the Prion Protein Fragment PrP106-126
16 pages, 14 figures
null
null
null
q-bio.BM
null
Mouse prion protein PrP106-126 is a peptide corresponding to the residues 107-127 of human prion protein. It has been shown that PrP106-126 can reproduce the main neuropathological features of prion-related transmissible spongiform encephalopathies and can form amyloid-like fibrils in vitro. The conformational characteristics of the PrP106-126 fibril have been investigated by electron microscopy, CD spectroscopy, NMR and molecular dynamics simulations. Recent research has found that PrP106-126 in water assumes a stable structure consisting of two parallel beta-sheets that are tightly packed against each other. In this work we perform molecular dynamics simulations to reveal the elongation mechanism of the PrP106-126 fibril. Influenced by the edge strands of the fibril, which already adopt a beta-sheet conformation, a single PrP106-126 peptide forms beta-structure and becomes a new element of the fibril. Under acidic conditions, a single PrP106-126 peptide adopts a much larger variety of conformations than it does under neutral conditions, which makes it easier for the peptide to be influenced by the edge strands of the fibril. However, acidic conditions do not largely affect the stability of the PrP106-126 peptide fibril. Thus, the speed of fibril elongation can be dramatically increased by lowering the pH value of the solution. The pH value was adjusted by either changing the protonation state of the residues or adding hydronium ions (acidic solution) or hydroxyl ions (alkaline solution). The differences between these two approaches are analyzed here.
[ { "created": "Tue, 22 Nov 2005 02:28:29 GMT", "version": "v1" } ]
2007-05-23
[ [ "Zhao", "Xin", "" ], [ "Dou", "Shuo-Xing", "" ], [ "Xie", "Ping", "" ], [ "Wang", "Peng-Ye", "" ] ]
Mouse prion protein PrP106-126 is a peptide corresponding to the residues 107-127 of human prion protein. It has been shown that PrP106-126 can reproduce the main neuropathological features of prion-related transmissible spongiform encephalopathies and can form amyloid-like fibrils in vitro. The conformational characteristics of the PrP106-126 fibril have been investigated by electron microscopy, CD spectroscopy, NMR and molecular dynamics simulations. Recent research has found that PrP106-126 in water assumes a stable structure consisting of two parallel beta-sheets that are tightly packed against each other. In this work we perform molecular dynamics simulations to reveal the elongation mechanism of the PrP106-126 fibril. Influenced by the edge strands of the fibril, which already adopt a beta-sheet conformation, a single PrP106-126 peptide forms beta-structure and becomes a new element of the fibril. Under acidic conditions, a single PrP106-126 peptide adopts a much larger variety of conformations than it does under neutral conditions, which makes it easier for the peptide to be influenced by the edge strands of the fibril. However, acidic conditions do not largely affect the stability of the PrP106-126 peptide fibril. Thus, the speed of fibril elongation can be dramatically increased by lowering the pH value of the solution. The pH value was adjusted by either changing the protonation state of the residues or adding hydronium ions (acidic solution) or hydroxyl ions (alkaline solution). The differences between these two approaches are analyzed here.
0903.5021
David Hsu
David Hsu, Murielle Hsu
Zwanzig-Mori projection operators and EEG dynamics: deriving a simple equation of motion
Revised, e-published Jul 13, 2009
PMC Biophysics 2009; 2:6
10.1186/1757-5036-2-6
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a macroscopic theory of electroencephalogram (EEG) dynamics based on the laws of motion that govern atomic and molecular motion. The theory is an application of Zwanzig-Mori projection operators. The result is a simple equation of motion that has the form of a generalized Langevin equation (GLE), which requires knowledge only of macroscopic properties. The macroscopic properties can be extracted from experimental data by one of two possible variational principles. These variational principles are our principal contribution to the formalism. Potential applications are discussed, including applications to the theory of critical phenomena in the brain, Granger causality and Kalman filters.
[ { "created": "Sun, 29 Mar 2009 04:11:05 GMT", "version": "v1" }, { "created": "Tue, 14 Jul 2009 17:00:30 GMT", "version": "v2" } ]
2009-07-14
[ [ "Hsu", "David", "" ], [ "Hsu", "Murielle", "" ] ]
We present a macroscopic theory of electroencephalogram (EEG) dynamics based on the laws of motion that govern atomic and molecular motion. The theory is an application of Zwanzig-Mori projection operators. The result is a simple equation of motion that has the form of a generalized Langevin equation (GLE), which requires knowledge only of macroscopic properties. The macroscopic properties can be extracted from experimental data by one of two possible variational principles. These variational principles are our principal contribution to the formalism. Potential applications are discussed, including applications to the theory of critical phenomena in the brain, Granger causality and Kalman filters.
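For orientation, a generalized Langevin equation of the standard memory-kernel form is sketched below in LaTeX; the specific projected EEG variable, memory kernel, and noise statistics used in the paper may differ from this generic template.

```latex
% Generic memory-kernel GLE (orientation only; the paper's projected variable,
% kernel, and fluctuation term may take a different specific form).
\begin{equation}
  \frac{dx(t)}{dt}
    = -\int_{0}^{t} K(t - t')\, x(t')\, dt' + F(t),
  \qquad
  \langle F(t)\, F(t') \rangle \;\propto\; K(t - t').
\end{equation}
```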
1806.11030
Thierry Mora
Thomas Dupic, Quentin Marcou, Aleksandra M. Walczak, Thierry Mora
Genesis of the alpha beta T-cell receptor
null
PLoS Comp. Biol. 15(3): e1006874 (2019)
10.1371/journal.pcbi.1006874
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The T-cell (TCR) repertoire relies on the diversity of receptors composed of two chains, called $\alpha$ and $\beta$, to recognize pathogens. Using results of high throughput sequencing and computational chain-pairing experiments of human TCR repertoires, we quantitatively characterize the $\alpha\beta$ generation process. We estimate the probabilities of a rescue recombination of the $\beta$ chain on the second chromosome upon failure or success on the first chromosome. Unlike $\beta$ chains, $\alpha$ chains recombine simultaneously on both chromosomes, resulting in correlated statistics of the two genes which we predict using a mechanistic model. We find that $\sim 28 \%$ of cells express both $\alpha$ chains. We report that clones sharing the same $\beta$ chain but different $\alpha$ chains are overrepresented, suggesting that they respond to common immune challenges. Altogether, our statistical analysis gives a complete quantitative mechanistic picture that results in the observed correlations in the generative process. We learn that the probability to generate any TCR$\alpha\beta$ is lower than $10^{-12}$ and estimate the generation diversity and sharing properties of the $\alpha\beta$ TCR repertoire.
[ { "created": "Thu, 28 Jun 2018 15:19:13 GMT", "version": "v1" }, { "created": "Tue, 11 Dec 2018 10:22:05 GMT", "version": "v2" } ]
2019-05-14
[ [ "Dupic", "Thomas", "" ], [ "Marcou", "Quentin", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Mora", "Thierry", "" ] ]
The T-cell (TCR) repertoire relies on the diversity of receptors composed of two chains, called $\alpha$ and $\beta$, to recognize pathogens. Using results of high throughput sequencing and computational chain-pairing experiments of human TCR repertoires, we quantitatively characterize the $\alpha\beta$ generation process. We estimate the probabilities of a rescue recombination of the $\beta$ chain on the second chromosome upon failure or success on the first chromosome. Unlike $\beta$ chains, $\alpha$ chains recombine simultaneously on both chromosomes, resulting in correlated statistics of the two genes which we predict using a mechanistic model. We find that $\sim 28 \%$ of cells express both $\alpha$ chains. We report that clones sharing the same $\beta$ chain but different $\alpha$ chains are overrepresented, suggesting that they respond to common immune challenges. Altogether, our statistical analysis gives a complete quantitative mechanistic picture that results in the observed correlations in the generative process. We learn that the probability to generate any TCR$\alpha\beta$ is lower than $10^{-12}$ and estimate the generation diversity and sharing properties of the $\alpha\beta$ TCR repertoire.
2011.05212
Joaquin Goni
Uttara Tipnis, Kausar Abbas, Elizabeth Tran, Enrico Amico, Li Shen, Alan D. Kaplan, Joaqu\'in Go\~ni
Functional Connectome Fingerprint Gradients in Young Adults
26 pages, 10 figures, 2 tables
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The assessment of brain fingerprints has emerged in recent years as an important tool to study individual differences and to infer the quality of neuroimaging datasets. Studies so far have mainly focused on connectivity fingerprints between different brain scans of the same individual. Here, we extend the concept of brain connectivity fingerprints beyond test/retest and assess fingerprint gradients in young adults by developing an extension of the differential identifiability framework. To do so, we look at the similarity between not only the multiple scans of an individual (subject fingerprint), but also between the scans of monozygotic and dizygotic twins (twin fingerprint). We have carried out this analysis on the 8 fMRI conditions present in the Human Connectome Project -- Young Adult dataset, which we processed into functional connectomes (FCs) and timeseries parcellated according to the Schaefer Atlas scheme, which has multiple levels of resolution. Our differential identifiability results show that the fingerprint gradients based on genetic and environmental similarities are indeed present when comparing FCs for all parcellations and fMRI conditions. Importantly, only when assessing optimally reconstructed FCs do we fully uncover fingerprints present in higher resolution atlases. We also study the effect of scanning length and parcellation on the subject fingerprint of resting-state FCs. In the pursuit of open science, we have also made the processed and parcellated FCs and timeseries for all conditions for the ~1200 subjects in the HCP-YA dataset available to the scientific community.
[ { "created": "Tue, 10 Nov 2020 16:11:12 GMT", "version": "v1" }, { "created": "Mon, 11 Jan 2021 21:13:51 GMT", "version": "v2" } ]
2021-01-13
[ [ "Tipnis", "Uttara", "" ], [ "Abbas", "Kausar", "" ], [ "Tran", "Elizabeth", "" ], [ "Amico", "Enrico", "" ], [ "Shen", "Li", "" ], [ "Kaplan", "Alan D.", "" ], [ "Goñi", "Joaquín", "" ] ]
The assessment of brain fingerprints has emerged in recent years as an important tool to study individual differences and to infer the quality of neuroimaging datasets. Studies so far have mainly focused on connectivity fingerprints between different brain scans of the same individual. Here, we extend the concept of brain connectivity fingerprints beyond test/retest and assess fingerprint gradients in young adults by developing an extension of the differential identifiability framework. To do so, we look at the similarity between not only the multiple scans of an individual (subject fingerprint), but also between the scans of monozygotic and dizygotic twins (twin fingerprint). We have carried out this analysis on the 8 fMRI conditions present in the Human Connectome Project -- Young Adult dataset, which we processed into functional connectomes (FCs) and timeseries parcellated according to the Schaefer Atlas scheme, which has multiple levels of resolution. Our differential identifiability results show that the fingerprint gradients based on genetic and environmental similarities are indeed present when comparing FCs for all parcellations and fMRI conditions. Importantly, only when assessing optimally reconstructed FCs do we fully uncover fingerprints present in higher resolution atlases. We also study the effect of scanning length and parcellation on the subject fingerprint of resting-state FCs. In the pursuit of open science, we have also made the processed and parcellated FCs and timeseries for all conditions for the ~1200 subjects in the HCP-YA dataset available to the scientific community.
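The sketch below illustrates a differential-identifiability style computation on functional connectomes: correlate every test-session FC with every retest-session FC and compare within-subject to between-subject similarity. The synthetic data, array shapes, and exact scoring convention are assumptions for illustration; the paper further extends the framework to twin pairs and to optimally reconstructed FCs.

```python
# Sketch of a differential-identifiability style score from pairs of FC matrices.
import numpy as np

def upper_triangle(fc):
    """Vectorize the upper triangle of a symmetric FC matrix."""
    iu = np.triu_indices_from(fc, k=1)
    return fc[iu]

def differential_identifiability(fcs_test, fcs_retest):
    """fcs_test, fcs_retest: lists of (n_rois, n_rois) FC matrices, same subject order."""
    vecs_t = np.array([upper_triangle(fc) for fc in fcs_test])
    vecs_r = np.array([upper_triangle(fc) for fc in fcs_retest])
    n = len(vecs_t)
    ident = np.corrcoef(vecs_t, vecs_r)[:n, n:]        # identifiability matrix
    i_self = np.mean(np.diag(ident))                   # within-subject similarity
    i_others = np.mean(ident[~np.eye(n, dtype=bool)])  # between-subject similarity
    return 100.0 * (i_self - i_others)

# Synthetic example: 10 subjects, 100-ROI FCs with a subject-specific component.
rng = np.random.default_rng(0)
base = [rng.standard_normal((100, 100)) for _ in range(10)]
make_fc = lambda b: np.corrcoef(b + 0.5 * rng.standard_normal(b.shape))
fcs_a = [make_fc(b) for b in base]
fcs_b = [make_fc(b) for b in base]
print(f"Idiff = {differential_identifiability(fcs_a, fcs_b):.1f}")
```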
2311.12035
Bowen Gao
Minsi Ren, Bowen Gao, Bo Qiang, Yanyan Lan
Delta Score: Improving the Binding Assessment of Structure-Based Drug Design Methods
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structure-based drug design (SBDD) stands at the forefront of drug discovery, emphasizing the creation of molecules that target specific binding pockets. Recent advances in this area have witnessed the adoption of deep generative models and geometric deep learning techniques, modeling SBDD as a conditional generation task where the target structure serves as context. Historically, evaluation of these models centered on docking scores, which quantitatively depict the predicted binding affinity between a molecule and its target pocket. Though state-of-the-art models purport that a majority of their generated ligands exceed the docking score of ground truth ligands in test sets, it begs the question: Do these scores align with real-world biological needs? In this paper, we introduce the delta score, a novel evaluation metric grounded in tangible pharmaceutical requisites. Our experiments reveal that molecules produced by current deep generative models significantly lag behind ground truth reference ligands when assessed with the delta score. This novel metric not only complements existing benchmarks but also provides a pivotal direction for subsequent research in the domain.
[ { "created": "Wed, 1 Nov 2023 08:37:39 GMT", "version": "v1" } ]
2023-11-22
[ [ "Ren", "Minsi", "" ], [ "Gao", "Bowen", "" ], [ "Qiang", "Bo", "" ], [ "Lan", "Yanyan", "" ] ]
Structure-based drug design (SBDD) stands at the forefront of drug discovery, emphasizing the creation of molecules that target specific binding pockets. Recent advances in this area have witnessed the adoption of deep generative models and geometric deep learning techniques, modeling SBDD as a conditional generation task where the target structure serves as context. Historically, evaluation of these models centered on docking scores, which quantitatively depict the predicted binding affinity between a molecule and its target pocket. Though state-of-the-art models purport that a majority of their generated ligands exceed the docking score of ground truth ligands in test sets, it begs the question: Do these scores align with real-world biological needs? In this paper, we introduce the delta score, a novel evaluation metric grounded in tangible pharmaceutical requisites. Our experiments reveal that molecules produced by current deep generative models significantly lag behind ground truth reference ligands when assessed with the delta score. This novel metric not only complements existing benchmarks but also provides a pivotal direction for subsequent research in the domain.
1406.4415
Arne Traulsen
Laura Hindersin and Arne Traulsen
Counterintuitive properties of the fixation time in network-structured populations
null
Journal of the Royal Society Interface 11: 2014060 (2014)
10.1098/rsif.2014.0606
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary dynamics on graphs can lead to many interesting and counterintuitive findings. We study the Moran process, a discrete time birth-death process, that describes the invasion of a mutant type into a population of wild-type individuals. Remarkably, the fixation probability of a single mutant is the same on all regular networks. But non-regular networks can increase or decrease the fixation probability. While the time until fixation formally depends on the same transition probabilities as the fixation probabilities, there is no obvious relation between them. For example, an amplifier of selection, which increases the fixation probability and thus decreases the number of mutations needed until one of them is successful, can at the same time slow down the process of fixation. Based on small networks, we show analytically that (i) the time to fixation can decrease when links are removed from the network and (ii) the node providing the best starting conditions in terms of the shortest fixation time depends on the fitness of the mutant. Our results are obtained analytically on small networks, but numerical simulations show that they are qualitatively valid even in much larger populations.
[ { "created": "Tue, 17 Jun 2014 16:19:16 GMT", "version": "v1" }, { "created": "Wed, 22 Apr 2015 07:35:24 GMT", "version": "v2" } ]
2015-04-23
[ [ "Hindersin", "Laura", "" ], [ "Traulsen", "Arne", "" ] ]
Evolutionary dynamics on graphs can lead to many interesting and counterintuitive findings. We study the Moran process, a discrete time birth-death process, that describes the invasion of a mutant type into a population of wild-type individuals. Remarkably, the fixation probability of a single mutant is the same on all regular networks. But non-regular networks can increase or decrease the fixation probability. While the time until fixation formally depends on the same transition probabilities as the fixation probabilities, there is no obvious relation between them. For example, an amplifier of selection, which increases the fixation probability and thus decreases the number of mutations needed until one of them is successful, can at the same time slow down the process of fixation. Based on small networks, we show analytically that (i) the time to fixation can decrease when links are removed from the network and (ii) the node providing the best starting conditions in terms of the shortest fixation time depends on the fitness of the mutant. Our results are obtained analytically on small networks, but numerical simulations show that they are qualitatively valid even in much larger populations.
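A small Monte Carlo sketch of the Moran birth-death process on a graph is given below, estimating both the fixation probability and the mean conditional fixation time of a single mutant. The example graph (a star), the mutant fitness, the starting node, and the trial count are illustrative choices, not the specific small networks analyzed in the paper.

```python
# Monte Carlo estimate of fixation probability and conditional fixation time
# for the Moran birth-death process on a graph (illustrative parameters).
import random
import networkx as nx

def moran_trial(graph, r, start, rng):
    """One run; returns (fixed, steps). Mutants have relative fitness r."""
    mutants = {start}
    n = graph.number_of_nodes()
    steps = 0
    while 0 < len(mutants) < n:
        steps += 1
        weights = [r if v in mutants else 1.0 for v in graph.nodes]
        birth = rng.choices(list(graph.nodes), weights=weights)[0]
        death = rng.choice(list(graph.neighbors(birth)))
        if birth in mutants:
            mutants.add(death)      # offspring replaces a random neighbor
        else:
            mutants.discard(death)
    return len(mutants) == n, steps

rng = random.Random(0)
g = nx.star_graph(9)                # 10 nodes: hub 0 and leaves 1-9
trials, fixed, fix_times = 2000, 0, []
for _ in range(trials):
    ok, steps = moran_trial(g, r=1.5, start=1, rng=rng)   # mutant starts on a leaf
    if ok:
        fixed += 1
        fix_times.append(steps)
print(f"fixation probability ~ {fixed / trials:.3f}")
if fix_times:
    print(f"mean conditional fixation time ~ {sum(fix_times) / len(fix_times):.1f} steps")
```

Re-running with the mutant started on the hub node, or with edges removed, gives a direct numerical feel for the counterintuitive dependence of fixation time on starting position and topology discussed in the abstract.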
2406.06969
Daigo Okada Dr
Daigo Okada, Jianshen Zhu, Kan Shota, Yuuki Nishimura, Kazuya Haraguchi
Data mining method of single-cell omics data to evaluate a pure tissue environmental effect on gene expression level
null
null
null
null
q-bio.GN cs.DM
http://creativecommons.org/licenses/by-nc-nd/4.0/
While single-cell RNA-seq enables the investigation of the celltype effect on the transcriptome, the pure tissue environmental effect has not been well investigated. The bias in the combination of tissue and celltype in the body makes it difficult to evaluate the effect of the pure tissue environment by omics data mining. It is important to prevent statistical confounding among discrete variables such as celltype, tissue, and other categorical variables when evaluating the effects of these variables. We propose a novel method to enumerate suitable analysis units of variables for estimating the effects of tissue environment by extending the maximal biclique enumeration problem for bipartite graphs to $k$-partite hypergraphs. We applied the proposed method to the large mouse single-cell transcriptome dataset Tabula Muris Senis to evaluate pure tissue environmental effects on gene expression. Data mining using the proposed method revealed pure tissue environment effects on gene expression and its age-related change among adipose sub-tissues. The method proposed in this study helps evaluate the effects of discrete variables in exploratory data mining of large-scale genomics datasets.
[ { "created": "Tue, 11 Jun 2024 05:55:24 GMT", "version": "v1" } ]
2024-06-12
[ [ "Okada", "Daigo", "" ], [ "Zhu", "Jianshen", "" ], [ "Shota", "Kan", "" ], [ "Nishimura", "Yuuki", "" ], [ "Haraguchi", "Kazuya", "" ] ]
While single-cell RNA-seq enables the investigation of the celltype effect on the transcriptome, the pure tissue environmental effect has not been well investigated. The bias in the combination of tissue and celltype in the body makes it difficult to evaluate the effect of the pure tissue environment by omics data mining. It is important to prevent statistical confounding among discrete variables such as celltype, tissue, and other categorical variables when evaluating the effects of these variables. We propose a novel method to enumerate suitable analysis units of variables for estimating the effects of tissue environment by extending the maximal biclique enumeration problem for bipartite graphs to $k$-partite hypergraphs. We applied the proposed method to the large mouse single-cell transcriptome dataset Tabula Muris Senis to evaluate pure tissue environmental effects on gene expression. Data mining using the proposed method revealed pure tissue environment effects on gene expression and its age-related change among adipose sub-tissues. The method proposed in this study helps evaluate the effects of discrete variables in exploratory data mining of large-scale genomics datasets.
0906.4986
Artem Novozhilov S
Georgy P Karev, Artem S Novozhilov, and Faina S Berezovskaya
On the asymptotic behavior of the solutions to the replicator equation
23 pages, 1 figure, several small changes are added, together with the new title
null
null
null
q-bio.PE q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selection systems and the corresponding replicator equations model the evolution of replicators with a high level of abstraction. In this paper we apply novel methods of analysis of selection systems to the replicator equations. To be suitable for the suggested algorithm the interaction matrix of the replicator equation should be transformed; in particular the standard singular value decomposition allows us to rewrite the replicator equation in a convenient form. The original $n$-dimensional problem is reduced to the analysis of asymptotic behavior of the solutions to the so-called escort system, which in some important cases can be of significantly smaller dimension than the original system. The Newton diagram methods are applied to study the asymptotic behavior of the solutions to the escort system when the interaction matrix has rank 1 or 2. A general replicator equation with the interaction matrix of rank 1 is fully analyzed; the conditions are provided when the asymptotic state is a polymorphic equilibrium. As an example of the system with the interaction matrix of rank 2 we consider the problem from [Adams, M.R. and Sornborger, A.T., J Math Biol, 54:357-384, 2007], for which we show, for arbitrary dimension of the system and under some suitable conditions, that generically one globally stable equilibrium exists on the 1-skeleton of the simplex.
[ { "created": "Fri, 26 Jun 2009 17:49:22 GMT", "version": "v1" }, { "created": "Mon, 15 Mar 2010 07:24:04 GMT", "version": "v2" } ]
2010-03-16
[ [ "Karev", "Georgy P", "" ], [ "Novozhilov", "Artem S", "" ], [ "Berezovskaya", "Faina S", "" ] ]
Selection systems and the corresponding replicator equations model the evolution of replicators with a high level of abstraction. In this paper we apply novel methods of analysis of selection systems to the replicator equations. To be suitable for the suggested algorithm the interaction matrix of the replicator equation should be transformed; in particular the standard singular value decomposition allows us to rewrite the replicator equation in a convenient form. The original $n$-dimensional problem is reduced to the analysis of asymptotic behavior of the solutions to the so-called escort system, which in some important cases can be of significantly smaller dimension than the original system. The Newton diagram methods are applied to study the asymptotic behavior of the solutions to the escort system when the interaction matrix has rank 1 or 2. A general replicator equation with the interaction matrix of rank 1 is fully analyzed; the conditions are provided when the asymptotic state is a polymorphic equilibrium. As an example of the system with the interaction matrix of rank 2 we consider the problem from [Adams, M.R. and Sornborger, A.T., J Math Biol, 54:357-384, 2007], for which we show, for arbitrary dimension of the system and under some suitable conditions, that generically one globally stable equilibrium exists on the 1-skeleton of the simplex.
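As a concrete companion to the rank-1 case, the snippet below numerically integrates the standard replicator equation $\dot x_i = x_i\,[(Ax)_i - x^{T}Ax]$ with a rank-1 interaction matrix $A = u v^{T}$. The particular vectors and initial condition are arbitrary examples, not the cases treated analytically in the paper.

```python
# Numerical integration of the replicator equation with a rank-1 interaction matrix.
import numpy as np
from scipy.integrate import solve_ivp

u = np.array([1.0, 2.0, 0.5])
v = np.array([0.3, 0.1, 0.6])
A = np.outer(u, v)                     # rank-1 interaction matrix A = u v^T

def replicator(t, x):
    fitness = A @ x                    # (Ax)_i
    mean_fitness = x @ fitness         # x^T A x
    return x * (fitness - mean_fitness)

x0 = np.array([0.4, 0.3, 0.3])         # initial frequencies on the simplex
sol = solve_ivp(replicator, (0.0, 200.0), x0, rtol=1e-8, atol=1e-10)
print("asymptotic frequencies:", np.round(sol.y[:, -1], 4))
```

Whether the trajectory converges to a vertex or to a polymorphic equilibrium can be explored by varying u, v, and x0, which is the qualitative question the paper settles analytically.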
1110.1725
Mikhail Peslyak
Nikolai Korotky, Mikhail Peslyak
Psoriasis as a consequence of incorporation of beta-streptococci into the microbiocenosis of highly permeable intestines (a pathogenic concept). Russian edition
16 pages, 1 figure, in Russian, ISSN 0042-4609, Journal NLM Unique ID: 0414246
Vestn Dermatol Venerol 2005; 1: 9-18
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A review of recent investigations into the pathogenesis of psoriasis, summarizing data on the relationship between the presence of beta-streptococci in the organism and the cutaneous immune reaction. Results of the studies on gastrointestinal pathology in psoriatic patients are presented. The authors propose a hypothesis that advocates the primary origin of psoriatic gastrointestinal pathology and the secondary nature of skin manifestations. A chronic plaque psoriasis model is developed on the basis of the assumed key role of two psorafactors, i.e. hyperpermeability of intestinal walls for certain proteins and incorporation of beta-streptococci into the microbiocenosis of intestinal mucosa. The validity of the model is confirmed by the results of practical clinical work.
[ { "created": "Sat, 8 Oct 2011 12:00:44 GMT", "version": "v1" }, { "created": "Thu, 13 Oct 2011 07:26:36 GMT", "version": "v2" } ]
2011-10-14
[ [ "Korotky", "Nikolai", "" ], [ "Peslyak", "Mikhail", "" ] ]
A review of recent investigations into the pathogenesis of psoriasis, summarizing data on the relationship between the presence of beta-streptococci in the organism and the cutaneous immune reaction. Results of the studies on gastrointestinal pathology in psoriatic patients are presented. The authors propose a hypothesis that advocates the primary origin of psoriatic gastrointestinal pathology and the secondary nature of skin manifestations. A chronic plaque psoriasis model is developed on the basis of the assumed key role of two psorafactors, i.e. hyperpermeability of intestinal walls for certain proteins and incorporation of beta-streptococci into the microbiocenosis of intestinal mucosa. The validity of the model is confirmed by the results of practical clinical work.
1603.05311
Qasim Ali
Qasim Ali, Lindi Wahl
Mathematical Modeling of CRISPR-CAS system effects on biofilm formation
9 figures including 4 supplementary, 44 pages with 28 pages article + references + supplementary material
null
null
null
q-bio.PE math.DS q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), linked with CRISPR associated (CAS) genes, play a profound role in the interactions between phage and their bacterial hosts. It is now well understood that CRISPR-CAS systems can confer adaptive immunity against bacteriophage infections. However, the possibility of failure of CRISPR immunity may lead to a productive infection by the phage (cell lysis) or lysogeny. Recently, CRISPR-CAS genes have been implicated in changes to the group behaviour of the bacterium Pseudomonas aeruginosa, including biofilm formation, when lysogenized. For lysogens with a CRISPR system, another recent experimental study suggests that bacteriophage re-infection of previously lysogenized bacteria may lead to cell death. Thus CRISPR immunity can have complex effects on phage-host-lysogen interactions, particularly in a biofilm. In this contribution, we develop and analyse a series of models to elucidate and disentangle these interactions. From a therapeutic standpoint, CRISPR immunity increases biofilm resistance to phage therapy. Our models predict that lysogens may be able to displace CRISPR-immune bacteria in a biofilm, and thus suggest strategies to eliminate phage-resistant biofilms.
[ { "created": "Wed, 16 Mar 2016 23:23:57 GMT", "version": "v1" } ]
2016-03-18
[ [ "Ali", "Qasim", "" ], [ "Wahl", "Lindi", "" ] ]
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), linked with CRISPR associated (CAS) genes, play a profound role in the interactions between phage and their bacterial hosts. It is now well understood that CRISPR-CAS systems can confer adaptive immunity against bacteriophage infections. However, the possibility of failure of CRISPR immunity may lead to a productive infection by the phage (cell lysis) or lysogeny. Recently, CRISPR-CAS genes have been implicated in changes to group behaviour, including biofilm formation, of the bacterium Pseudomonas aeruginosa when lysogenized. For lysogens with a CRISPR system, another recent experimental study suggests that bacteriophage re-infection of previously lysogenized bacteria may lead to cell death. Thus CRISPR immunity can have complex effects on phage-host-lysogen interactions, particularly in a biofilm. In this contribution, we develop and analyse a series of models to elucidate and disentangle these interactions. From a therapeutic standpoint, CRISPR immunity increases biofilm resistance to phage therapy. Our models predict that lysogens may be able to displace CRISPR-immune bacteria in a biofilm, and thus suggest strategies to eliminate phage resistant biofilms.
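As an orientation only, the sketch below sets up a minimal phage / CRISPR-immune host / lysogen ODE system with a CRISPR failure probability; the compartments, rates, and parameter values are illustrative assumptions and do not reproduce the models analysed in the paper.

```python
# Illustrative phage / CRISPR-immune host / lysogen ODE sketch
# (hypothetical parameters; not the models analysed in the paper).
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.7, 1e9        # host growth rate and carrying capacity
a = 1e-9               # phage adsorption rate
f = 0.1                # probability that CRISPR immunity fails on infection
q = 0.3                # fraction of successful infections that lysogenize
burst, delta = 80, 0.1 # phage burst size and phage decay rate

def rhs(t, y):
    S, L, P = y                      # CRISPR hosts, lysogens, free phage
    N = S + L
    growth = r * (1 - N / K)
    infect = a * S * P * f           # infections that bypass CRISPR
    dS = growth * S - infect
    dL = growth * L + q * infect
    dP = burst * (1 - q) * infect - a * S * P - delta * P
    return [dS, dL, dP]

sol = solve_ivp(rhs, (0, 100), [1e6, 1e3, 1e5], dense_output=True)
print(sol.y[:, -1])                  # final S, L, P densities
```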
1706.00780
Yilin Song
Yilin Song, Jonathan Viventi, Yao Wang
Unsupervised Learning of Spike Patterns for Seizure Detection and Wavefront Estimation of High Resolution Micro Electrocorticographic ({\mu}ECoG) Data
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the past few years, we have developed flexible, active, multiplexed recording devices for high-resolution recording over large, clinically relevant areas of the brain. While this technology has enabled a much higher-resolution view of the electrical activity of the brain, the analytical methods to process, categorize and respond to the huge volumes of seizure data produced by these devices have not yet been developed. In this work we propose an unsupervised learning framework for spike analysis which, by itself, reveals spike patterns. By applying advanced video processing techniques to separate a multi-channel recording into individual spike segments, unfolding the spike-segment manifold, and identifying natural clusters of spike patterns, we are able to find the common spike motion patterns. We further explore using these patterns for practical problems such as seizure prediction and spike wavefront prediction. These methods have been applied to in-vivo feline seizure recordings and yielded promising results.
[ { "created": "Fri, 2 Jun 2017 14:45:58 GMT", "version": "v1" } ]
2017-06-06
[ [ "Song", "Yilin", "" ], [ "Viventi", "Jonathan", "" ], [ "Wang", "Yao", "" ] ]
For the past few years, we have developed flexible, active, multiplexed recording devices for high-resolution recording over large, clinically relevant areas of the brain. While this technology has enabled a much higher-resolution view of the electrical activity of the brain, the analytical methods to process, categorize and respond to the huge volumes of seizure data produced by these devices have not yet been developed. In this work we propose an unsupervised learning framework for spike analysis which, by itself, reveals spike patterns. By applying advanced video processing techniques to separate a multi-channel recording into individual spike segments, unfolding the spike-segment manifold, and identifying natural clusters of spike patterns, we are able to find the common spike motion patterns. We further explore using these patterns for practical problems such as seizure prediction and spike wavefront prediction. These methods have been applied to in-vivo feline seizure recordings and yielded promising results.
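The toy pipeline below is a deliberately simplified stand-in for the manifold-unfolding approach described above: it just flattens multi-channel spike segments, reduces them with PCA, and clusters with k-means. The array shapes, the number of clusters, and the use of scikit-learn are assumptions for illustration.

```python
# Toy clustering of multi-channel spike segments (PCA + k-means stand-in
# for the manifold-based pattern discovery described in the abstract).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 200 spike segments, each 16 channels x 50 time samples (synthetic data)
segments = rng.normal(size=(200, 16, 50))

X = segments.reshape(len(segments), -1)          # flatten each segment
X_low = PCA(n_components=10).fit_transform(X)    # low-dimensional embedding
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_low)

for k in range(4):
    print(f"pattern {k}: {np.sum(labels == k)} segments")
```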
2211.07429
Xin Ma
Yang Li, Xin Ma, Raj Sunderraman, Shihao Ji, Suprateek Kundu
Accounting for Temporal Variability in Functional Magnetic Resonance Imaging Improves Prediction of Intelligence
null
null
10.1002/hbm.26415
null
q-bio.NC cs.LG eess.IV stat.CO stat.ME
http://creativecommons.org/licenses/by-nc-nd/4.0/
Neuroimaging-based prediction methods for intelligence and cognitive abilities have seen rapid development in the literature. Among different neuroimaging modalities, prediction based on functional connectivity (FC) has shown great promise. Most literature has focused on prediction using static FC, but there are limited investigations on the merits of such analysis compared to prediction based on dynamic FC or region-level functional magnetic resonance imaging (fMRI) time series that encode temporal variability. To account for the temporal dynamics in fMRI data, we propose a deep neural network involving a bi-directional long short-term memory (bi-LSTM) approach that also incorporates a feature selection mechanism. The proposed pipeline is implemented via an efficient GPU computation framework and applied to predict intelligence scores based on region-level fMRI time series as well as dynamic FC. We compare the prediction performance for different intelligence measures based on static FC, dynamic FC, and region-level time series acquired from the Adolescent Brain Cognitive Development (ABCD) study involving close to 7000 individuals. Our detailed analysis illustrates that static FC consistently has inferior prediction performance compared to region-level time series or dynamic FC for unimodal rest and task fMRI experiments, and in almost all cases when using a combination of task and rest features. In addition, the proposed bi-LSTM pipeline based on region-level time series identifies several shared and differential important brain regions across task and rest fMRI experiments that drive intelligence prediction. A test-retest analysis of the selected features shows strong reliability across cross-validation folds. Given the large sample size of the ABCD study, our results provide strong evidence that superior prediction of intelligence can be achieved by accounting for temporal variations in fMRI.
[ { "created": "Fri, 11 Nov 2022 18:48:59 GMT", "version": "v1" }, { "created": "Wed, 14 Dec 2022 15:49:09 GMT", "version": "v2" } ]
2023-07-20
[ [ "Li", "Yang", "" ], [ "Ma", "Xin", "" ], [ "Sunderraman", "Raj", "" ], [ "Ji", "Shihao", "" ], [ "Kundu", "Suprateek", "" ] ]
Neuroimaging-based prediction methods for intelligence and cognitive abilities have seen rapid development in the literature. Among different neuroimaging modalities, prediction based on functional connectivity (FC) has shown great promise. Most literature has focused on prediction using static FC, but there are limited investigations on the merits of such analysis compared to prediction based on dynamic FC or region-level functional magnetic resonance imaging (fMRI) time series that encode temporal variability. To account for the temporal dynamics in fMRI data, we propose a deep neural network involving a bi-directional long short-term memory (bi-LSTM) approach that also incorporates a feature selection mechanism. The proposed pipeline is implemented via an efficient GPU computation framework and applied to predict intelligence scores based on region-level fMRI time series as well as dynamic FC. We compare the prediction performance for different intelligence measures based on static FC, dynamic FC, and region-level time series acquired from the Adolescent Brain Cognitive Development (ABCD) study involving close to 7000 individuals. Our detailed analysis illustrates that static FC consistently has inferior prediction performance compared to region-level time series or dynamic FC for unimodal rest and task fMRI experiments, and in almost all cases when using a combination of task and rest features. In addition, the proposed bi-LSTM pipeline based on region-level time series identifies several shared and differential important brain regions across task and rest fMRI experiments that drive intelligence prediction. A test-retest analysis of the selected features shows strong reliability across cross-validation folds. Given the large sample size of the ABCD study, our results provide strong evidence that superior prediction of intelligence can be achieved by accounting for temporal variations in fMRI.
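For concreteness, here is a minimal bi-directional LSTM regressor over region-level time series, sketched in PyTorch. The region count, sequence length, hidden size, and the mean-pooled readout are assumptions, and the paper's feature-selection mechanism is deliberately omitted.

```python
# Minimal bi-LSTM regressor for region-level fMRI time series (illustrative
# dimensions; the paper's feature-selection mechanism is not reproduced here).
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    def __init__(self, n_regions=268, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_regions, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # predict one intelligence score

    def forward(self, x):                      # x: (batch, time, regions)
        out, _ = self.lstm(x)                  # (batch, time, 2*hidden)
        pooled = out.mean(dim=1)               # average over time
        return self.head(pooled).squeeze(-1)

model = BiLSTMRegressor()
x = torch.randn(8, 120, 268)                   # 8 subjects, 120 TRs, 268 regions
print(model(x).shape)                          # torch.Size([8])
```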
1002.4368
Alexei Koulakov
Alexei A. Koulakov
On the scaling law for cortical magnification factor
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primate visual system samples different parts of the world unevenly. The part of the visual scene corresponding to the eye center is represented densely, while away from the center the sampling becomes progressively sparser. Such a distribution allows a more effective use of the limited transfer rate of the optic nerve, since animals can aim the area centralis (AC) at the relevant position in the scene by performing saccadic eye movements. To locate a new saccade target the animal has to sample the corresponding region of the visual scene, away from the AC. In this work we derive the sampling density away from the AC which optimizes the trajectory of saccadic eye movements. We obtain the scaling law for the sampling density as a function of eccentricity, which results from the evolutionary pressure to locate the target in the shortest time under the constraint of the limited transfer rate of the optic nerve. In the case of a very small AC the visual scene is optimally represented by a logarithmic conformal mapping, in which geometrically similar circular bands around the AC are equally represented by the visual system. We also obtain corrections to the logarithmic scaling for the case of a larger AC and compare them to experimental findings.
[ { "created": "Tue, 23 Feb 2010 18:25:00 GMT", "version": "v1" } ]
2010-02-24
[ [ "Koulakov", "Alexei A.", "" ] ]
The primate visual system samples different parts of the world unevenly. The part of the visual scene corresponding to the eye center is represented densely, while away from the center the sampling becomes progressively sparser. Such a distribution allows a more effective use of the limited transfer rate of the optic nerve, since animals can aim the area centralis (AC) at the relevant position in the scene by performing saccadic eye movements. To locate a new saccade target the animal has to sample the corresponding region of the visual scene, away from the AC. In this work we derive the sampling density away from the AC which optimizes the trajectory of saccadic eye movements. We obtain the scaling law for the sampling density as a function of eccentricity, which results from the evolutionary pressure to locate the target in the shortest time under the constraint of the limited transfer rate of the optic nerve. In the case of a very small AC the visual scene is optimally represented by a logarithmic conformal mapping, in which geometrically similar circular bands around the AC are equally represented by the visual system. We also obtain corrections to the logarithmic scaling for the case of a larger AC and compare them to experimental findings.
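As a hedged restatement of the small-AC limit mentioned in the abstract, the complex-logarithmic retinotopic mapping is commonly written in the generic form below, which makes the equal representation of geometrically similar annuli explicit; the constants lambda and e_0 are placeholders, not values derived in the paper.

```latex
% Generic complex-log retinotopy and the resulting linear magnification scaling.
% \lambda and e_0 are placeholder constants, not values from the paper.
w = \lambda \,\log\!\left(\frac{z}{e_0}\right),
\qquad
M(E) \propto \frac{1}{E} \quad \text{for } E \gg e_0 .
```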
1309.5873
Rodrigo Cofre
Rodrigo Cofre (INRIA Sophia Antipolis), Bruno Cessac (INRIA Sophia Antipolis)
Hearing the Maximum Entropy Potential of neuronal networks
null
null
null
null
q-bio.NC math-ph math.MP q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a spike-generating stationary Markov process whose transition probabilities are known. We show that there is a canonical potential whose Gibbs distribution, obtained from the Maximum Entropy Principle (MaxEnt), is the equilibrium distribution of this process. We provide a method to compute explicitly and exactly this potential as a linear combination of spatio-temporal interactions. In particular, our results establish an explicit relation between Maximum Entropy models and neuro-mimetic models used in spike train statistics.
[ { "created": "Fri, 20 Sep 2013 16:55:21 GMT", "version": "v1" }, { "created": "Mon, 6 Jan 2014 19:55:43 GMT", "version": "v2" } ]
2014-01-07
[ [ "Cofre", "Rodrigo", "", "INRIA Sophia Antipolis" ], [ "Cessac", "Bruno", "", "INRIA Sophia\n Antipolis" ] ]
We consider a spike-generating stationary Markov process whose transition probabilities are known. We show that there is a canonical potential whose Gibbs distribution, obtained from the Maximum Entropy Principle (MaxEnt), is the equilibrium distribution of this process. We provide a method to compute explicitly and exactly this potential as a linear combination of spatio-temporal interactions. In particular, our results establish an explicit relation between Maximum Entropy models and neuro-mimetic models used in spike train statistics.
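For readers unfamiliar with the setup, a generic spatio-temporal maximum-entropy potential for binary spike words and the Gibbs distribution it induces can be written as below; the specific interaction terms and their exact correspondence to the Markov transition probabilities are precisely what the paper derives and are not reproduced here.

```latex
% Generic spatio-temporal MaxEnt potential for spike words \omega (illustrative form).
% The m_k are spatio-temporal monomials (products of spike variables) and the
% \beta_k their conjugate parameters.
H(\omega) = \sum_{k} \beta_k \, m_k(\omega),
\qquad
P(\omega) = \frac{e^{H(\omega)}}{Z},
\qquad
Z = \sum_{\omega} e^{H(\omega)} .
```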
1507.08504
Roeland M.H. Merks
Sonja E.M. Boas, Maria I. Navarro Jimenez, Roeland M.H. Merks, Joke G. Blom
A global sensitivity analysis approach for morphogenesis models
31 pages, 7 figures
Boas, S. E. M., Navarro Jimenez, M. I., Merks, R. M. H., & Blom, J. G. (2015). A global sensitivity analysis approach for morphogenesis models. Bmc Systems Biology, 9(1), 85
10.1186/s12918-015-0222-7
null
q-bio.CB math.NA q-bio.QM stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, multi-factorial models are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such `black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of the model represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provides information on the relative impact of single parameters and of interactions between them. The uncertainty in the output of the model was largely caused by single parameters. The parameter interactions, although of low impact, provided new insights in the mechanisms of \emph{in silico} sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study and validate the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and validation of manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all `black-box' models, including high-throughput \emph{in vitro} models in which an output measure is affected by a set of experimental perturbations.
[ { "created": "Thu, 30 Jul 2015 13:53:35 GMT", "version": "v1" }, { "created": "Tue, 24 Nov 2015 12:57:21 GMT", "version": "v2" } ]
2015-11-25
[ [ "Boas", "Sonja E. M.", "" ], [ "Jimenez", "Maria I. Navarro", "" ], [ "Merks", "Roeland M. H.", "" ], [ "Blom", "Joke G.", "" ] ]
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, multi-factorial models are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such `black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of the model represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provides information on the relative impact of single parameters and of interactions between them. The uncertainty in the output of the model was largely caused by single parameters. The parameter interactions, although of low impact, provided new insights in the mechanisms of \emph{in silico} sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study and validate the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and validation of manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all `black-box' models, including high-throughput \emph{in vitro} models in which an output measure is affected by a set of experimental perturbations.
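To make the variance-based idea concrete, here is a generic pick-freeze (Saltelli-type) estimator of first-order and total-effect Sobol' indices applied to a stand-in scalar model; the parameter names, bounds, and the toy model are assumptions, and this is not necessarily the exact estimator used in the paper's workflow.

```python
# First-order and total-effect Sobol' indices via the pick-freeze estimator,
# applied to a toy surrogate; replace `model` with the morphogenesis simulation
# output of interest (hypothetical parameters and bounds).
import numpy as np

def model(x):                      # toy surrogate for an expensive simulation
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

d, N = 3, 100_000
rng = np.random.default_rng(1)
A = rng.uniform(0, 1, size=(N, d))
B = rng.uniform(0, 1, size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # "freeze" all parameters except i
    fABi = model(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var          # first-order index
    ST_i = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-effect index
    print(f"parameter {i}: S = {S_i:.3f}, ST = {ST_i:.3f}")
```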
2005.08345
Iaroslav Ispolatov
Yaroslav Ispolatov
Epidemiological dynamics with fine temporal resolution
10 pages, 2 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To better predict the dynamics of spread of COVID-19 epidemics, it is important not only to investigate the network of local and long-range contagious contacts, but also to understand the temporal dynamics of infectiousness and detectable symptoms. Here we present a model of infection spread in a well-mixed group of individuals, which usually corresponds to a node in large-scale epidemiological networks. The model uses delay equations that take into account the duration of infection and is based on experimentally derived time courses of viral load, virus shedding, and severity and detectability of symptoms. We show that because of an early onset of infectiousness, which is reported to be synchronous with, or even precede, the onset of detectable symptoms, the tracing and immediate testing of everyone who came in contact with the detected infected individual reduce the spread of the epidemic, hospital load, and fatality rate. We hope that this more precise node dynamics could be incorporated into complex large-scale epidemiological models to improve the accuracy and credibility of predictions.
[ { "created": "Sun, 17 May 2020 19:14:50 GMT", "version": "v1" } ]
2020-05-19
[ [ "Ispolatov", "Yaroslav", "" ] ]
To better predict the dynamics of spread of COVID-19 epidemics, it is important not only to investigate the network of local and long-range contagious contacts, but also to understand the temporal dynamics of infectiousness and detectable symptoms. Here we present a model of infection spread in a well-mixed group of individuals, which usually corresponds to a node in large-scale epidemiological networks. The model uses delay equations that take into account the duration of infection and is based on experimentally derived time courses of viral load, virus shedding, and severity and detectability of symptoms. We show that because of an early onset of infectiousness, which is reported to be synchronous with, or even precede, the onset of detectable symptoms, the tracing and immediate testing of everyone who came in contact with the detected infected individual reduce the spread of the epidemic, hospital load, and fatality rate. We hope that this more precise node dynamics could be incorporated into complex large-scale epidemiological models to improve the accuracy and credibility of predictions.
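In the same spirit as the delay formulation, a discrete-time renewal-equation sketch is shown below: today's incidence depends on the whole history of past infections weighted by a time-since-infection infectiousness profile. The profile shape, R0, and the omission of susceptible depletion and contact tracing are simplifying assumptions, not the paper's model.

```python
# Discrete-time renewal equation: new infections depend on the full
# infectiousness time course of earlier cohorts (illustrative profile and R0).
import numpy as np

days = np.arange(1, 15)                         # days since infection
profile = np.exp(-(days - 5.0) ** 2 / 8.0)      # infectiousness vs. day
profile /= profile.sum()                        # normalise to a generation-interval pdf
R0 = 2.5

T = 60
incidence = np.zeros(T)
incidence[0] = 10.0                             # seed cases
for t in range(1, T):
    past = incidence[max(0, t - len(days)):t][::-1]   # most recent cohort first
    incidence[t] = R0 * np.sum(profile[:len(past)] * past)

print(incidence[:10].round(1))
```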
1901.01169
Pierre Degond
Pierre Degond, Maximilian Engel, Jian-Guo Liu, Robert L. Pego
A Markov jump process modelling animal group size statistics
null
null
10.4310/CMS.2020.v18.n1.a3
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We translate a coagulation-fragmentation model, describing the dynamics of animal group size distributions, into a model for the population distribution and associate the nonlinear evolution equation with a Markov jump process of a type introduced in classic work of H. McKean. In particular this formalizes a model suggested by H.-S. Niwa [J. Theo. Biol. 224 (2003)] with simple coagulation and fragmentation rates. Based on the jump process, we develop a numerical scheme that allows us to approximate the equilibrium for the Niwa model, validated by comparison to analytical results by Degond et al. [J. Nonlinear Sci. 27 (2017)], and study the population and size distributions for more complicated rates. Furthermore, the simulations are used to describe statistical properties of the underlying jump process. We additionally discuss the relation of the jump process to models expressed in stochastic differential equations and demonstrate that such a connection is justified in the case of nearest-neighbour interactions, as opposed to global interactions as in the Niwa model.
[ { "created": "Thu, 3 Jan 2019 14:33:07 GMT", "version": "v1" } ]
2020-06-16
[ [ "Degond", "Pierre", "" ], [ "Engel", "Maximilian", "" ], [ "Liu", "Jian-Guo", "" ], [ "Pego", "Robert L.", "" ] ]
We translate a coagulation-fragmentation model, describing the dynamics of animal group size distributions, into a model for the population distribution and associate the nonlinear evolution equation with a Markov jump process of a type introduced in classic work of H. McKean. In particular this formalizes a model suggested by H.-S. Niwa [J. Theo. Biol. 224 (2003)] with simple coagulation and fragmentation rates. Based on the jump process, we develop a numerical scheme that allows us to approximate the equilibrium for the Niwa model, validated by comparison to analytical results by Degond et al. [J. Nonlinear Sci. 27 (2017)], and study the population and size distributions for more complicated rates. Furthermore, the simulations are used to describe statistical properties of the underlying jump process. We additionally discuss the relation of the jump process to models expressed in stochastic differential equations and demonstrate that such a connection is justified in the case of nearest-neighbour interactions, as opposed to global interactions as in the Niwa model.
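As an illustration of the kind of merge/split dynamics involved, the Gillespie-style simulation below lets any two groups merge at a constant pairwise rate and any group larger than one disperse entirely into individuals at a constant per-group rate. The rates, population size, and this exact event structure are assumptions for demonstration, not the jump process constructed in the paper.

```python
# Gillespie-style simulation of a simple merge/split (coagulation-fragmentation)
# process for animal group sizes (illustrative rates; not the paper's model).
import numpy as np

rng = np.random.default_rng(2)
groups = [1] * 200          # start with 200 solitary individuals
q, p = 0.01, 0.5            # pairwise merge rate, per-group dispersal rate
t, t_end = 0.0, 200.0

while t < t_end:
    n = len(groups)
    splittable = [i for i, s in enumerate(groups) if s > 1]
    rate_merge = q * n * (n - 1) / 2
    rate_split = p * len(splittable)
    total = rate_merge + rate_split
    if total == 0:
        break
    t += rng.exponential(1.0 / total)
    if rng.uniform() < rate_merge / total:            # merge two random groups
        i, j = rng.choice(n, size=2, replace=False)
        groups[i] += groups[j]
        groups.pop(j)
    else:                                             # a group disperses into singletons
        i = splittable[rng.integers(len(splittable))]
        size = groups.pop(i)
        groups.extend([1] * size)

sizes, counts = np.unique(groups, return_counts=True)
print(dict(zip(sizes.tolist(), counts.tolist())))     # empirical group-size distribution
```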
1904.05198
Md Fazlul Karim Khan
Fazlul MKK, Deepthi S, Farzana Y, Najnin A, Rashid Ma, Munira B, Srikumar C, Nazmul
Detection of Metallo-{\beta}-Lactamase-Encoding Genes Among Clinical Isolates of Escherichia coli in a Tertiary Care Hospital, Malaysia
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multidrug-resistant Escherichia coli strains cause multiple clinical infections and have become a rising problem globally. Metallo-{\beta}-lactamase-encoding genes are of particular concern in gram-negative bacteria, especially E. coli. This study aimed to evaluate the prevalence of MBLs among clinical isolates of E. coli. A total of 65 E. coli isolates were collected from various clinical samples of Malaysian patients with bacterial infections. Conventional microbiological tests were performed for the isolation and identification of MBL-producing E. coli strains in this vicinity. Multidrug resistance (MDR) of the E. coli isolates was assessed using the disk diffusion test. Phenotypic methods, as well as genotypic (PCR) methods, were used to detect the presence of metallo-{\beta}-lactamase resistance genes (blaIMP, blaVIM) in imipenem-resistant strains. Of the 65 E. coli isolates, 42 (57.3%) were MDR. The isolates from urine (19) produced significantly more MDR isolates (10) than other sources. Additionally, of the 19 (29.2%) imipenem-resistant E. coli isolates, 10 contained MBL genes: 7 (36.8%) contained blaIMP and 3 (15.7%) contained blaVIM. This study revealed a significant occurrence of MBL-producing E. coli isolates in clinical specimens and its association with health risk factors, indicating an alarming situation in Malaysia. Appropriate attention is needed to avoid treatment failure and to support infection control management.
[ { "created": "Wed, 10 Apr 2019 14:06:13 GMT", "version": "v1" } ]
2019-04-11
[ [ "MKK", "Fazlul", "" ], [ "S", "Deepthi", "" ], [ "Y", "Farzana", "" ], [ "A", "Najnin", "" ], [ "Ma", "Rashid", "" ], [ "B", "Munira", "" ], [ "C", "Srikumar", "" ], [ "Nazmul", "", "" ] ]
Multidrug-resistant Escherichia coli strains cause multiple clinical infections and have become a rising problem globally. Metallo-{\beta}-lactamase-encoding genes are of particular concern in gram-negative bacteria, especially E. coli. This study aimed to evaluate the prevalence of MBLs among clinical isolates of E. coli. A total of 65 E. coli isolates were collected from various clinical samples of Malaysian patients with bacterial infections. Conventional microbiological tests were performed for the isolation and identification of MBL-producing E. coli strains in this vicinity. Multidrug resistance (MDR) of the E. coli isolates was assessed using the disk diffusion test. Phenotypic methods, as well as genotypic (PCR) methods, were used to detect the presence of metallo-{\beta}-lactamase resistance genes (blaIMP, blaVIM) in imipenem-resistant strains. Of the 65 E. coli isolates, 42 (57.3%) were MDR. The isolates from urine (19) produced significantly more MDR isolates (10) than other sources. Additionally, of the 19 (29.2%) imipenem-resistant E. coli isolates, 10 contained MBL genes: 7 (36.8%) contained blaIMP and 3 (15.7%) contained blaVIM. This study revealed a significant occurrence of MBL-producing E. coli isolates in clinical specimens and its association with health risk factors, indicating an alarming situation in Malaysia. Appropriate attention is needed to avoid treatment failure and to support infection control management.
2111.02170
Sean Vittadello
Sean T. Vittadello and Michael P.H. Stumpf
A group theoretic approach to model comparison with simplicial representations
43 pages, 3 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The complexity of biological systems, and the increasingly large amount of associated experimental data, necessitates that we develop mathematical models to further our understanding of these systems. As biological systems are generally not well understood, most mathematical models of these systems are based on experimental data, resulting in a seemingly heterogeneous collection of models that ostensibly represent the same system. To understand the system we therefore need to know how the different models are related, with a view to obtaining a unified mathematical description. This goal is complicated by the fact that distinct mathematical formalisms may be used to represent the same system, making direct comparison of the models very difficult. In previous work we developed an appropriate framework for model comparison where we represent models as labelled simplicial complexes and compare them with two general methodologies: comparison by distance or equivalence. In this article we continue the development of our model comparison methodology in two directions. First, we present a rigorous and automatable methodology for the core process of comparison by equivalence, namely determining the vertices in a simplicial representation, corresponding to model components, that are conceptually related and the identification of these vertices via simplicial operations. Our methodology is based on considerations of vertex symmetry in the simplicial representation, for which we develop the required mathematical theory of group actions on simplicial complexes. This methodology greatly simplifies and expedites the process of determining model equivalence. Second, we provide an alternative mathematical framework for our model-comparison methodology by representing models as groups, which allows for the direct application of group-theoretic techniques within our model-comparison methodology.
[ { "created": "Wed, 3 Nov 2021 12:15:01 GMT", "version": "v1" }, { "created": "Sat, 30 Jul 2022 06:57:34 GMT", "version": "v2" } ]
2022-08-02
[ [ "Vittadello", "Sean T.", "" ], [ "Stumpf", "Michael P. H.", "" ] ]
The complexity of biological systems, and the increasingly large amount of associated experimental data, necessitates that we develop mathematical models to further our understanding of these systems. As biological systems are generally not well understood, most mathematical models of these systems are based on experimental data, resulting in a seemingly heterogeneous collection of models that ostensibly represent the same system. To understand the system we therefore need to know how the different models are related, with a view to obtaining a unified mathematical description. This goal is complicated by the fact that distinct mathematical formalisms may be used to represent the same system, making direct comparison of the models very difficult. In previous work we developed an appropriate framework for model comparison where we represent models as labelled simplicial complexes and compare them with two general methodologies: comparison by distance or equivalence. In this article we continue the development of our model comparison methodology in two directions. First, we present a rigorous and automatable methodology for the core process of comparison by equivalence, namely determining the vertices in a simplicial representation, corresponding to model components, that are conceptually related and the identification of these vertices via simplicial operations. Our methodology is based on considerations of vertex symmetry in the simplicial representation, for which we develop the required mathematical theory of group actions on simplicial complexes. This methodology greatly simplifies and expedites the process of determining model equivalence. Second, we provide an alternative mathematical framework for our model-comparison methodology by representing models as groups, which allows for the direct application of group-theoretic techniques within our model-comparison methodology.
1204.1515
Steven Frank
Steven A. Frank
Natural selection. IV. The Price equation
null
Frank, S. A. 2012. Natural selection. IV. The Price equation. Journal of Evolutionary Biology 25:1002-1019
10.1111/j.1420-9101.2012.02498.x
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Price equation partitions total evolutionary change into two components. The first component provides an abstract expression of natural selection. The second component subsumes all other evolutionary processes, including changes during transmission. The natural selection component is often used in applications. Those applications attract widespread interest for their simplicity of expression and ease of interpretation. Those same applications attract widespread criticism by dropping the second component of evolutionary change and by leaving unspecified the detailed assumptions needed for a complete study of dynamics. Controversies over approximation and dynamics have nothing to do with the Price equation itself, which is simply a mathematical equivalence relation for total evolutionary change expressed in an alternative form. Disagreements about approach have to do with the tension between the relative valuation of abstract versus concrete analyses. The Price equation's greatest value has been on the abstract side, particularly the invariance relations that illuminate the understanding of natural selection. Those abstract insights lay the foundation for applications in terms of kin selection, information theory interpretations of natural selection, and partitions of causes by path analysis. I discuss recent critiques of the Price equation by Nowak and van Veelen.
[ { "created": "Fri, 6 Apr 2012 17:02:46 GMT", "version": "v1" } ]
2012-05-21
[ [ "Frank", "Steven A.", "" ] ]
The Price equation partitions total evolutionary change into two components. The first component provides an abstract expression of natural selection. The second component subsumes all other evolutionary processes, including changes during transmission. The natural selection component is often used in applications. Those applications attract widespread interest for their simplicity of expression and ease of interpretation. Those same applications attract widespread criticism by dropping the second component of evolutionary change and by leaving unspecified the detailed assumptions needed for a complete study of dynamics. Controversies over approximation and dynamics have nothing to do with the Price equation itself, which is simply a mathematical equivalence relation for total evolutionary change expressed in an alternative form. Disagreements about approach have to do with the tension between the relative valuation of abstract versus concrete analyses. The Price equation's greatest value has been on the abstract side, particularly the invariance relations that illuminate the understanding of natural selection. Those abstract insights lay the foundation for applications in terms of kin selection, information theory interpretations of natural selection, and partitions of causes by path analysis. I discuss recent critiques of the Price equation by Nowak and van Veelen.
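Since the Price equation itself is the subject here, a numerical check of the identity on toy data may help: total change in the mean trait equals a selection (covariance) term plus a transmission term. The trait values and fitnesses below are made up purely to verify the identity.

```python
# Numerical check of the Price equation on made-up data:
#   w_bar * dz_bar = Cov(w, z) + E[w * dz]
import numpy as np

z = np.array([1.0, 2.0, 3.0, 4.0])               # parental trait values
w = np.array([0.5, 1.0, 1.5, 2.0])               # fitness (relative offspring number)
z_prime = z + np.array([0.1, -0.2, 0.0, 0.3])    # offspring trait values

w_bar = w.mean()
z_bar = z.mean()
z_bar_prime = np.sum(w * z_prime) / np.sum(w)    # fitness-weighted offspring mean

selection = np.mean(w * z) - w_bar * z_bar       # Cov(w, z)
transmission = np.mean(w * (z_prime - z))        # E[w * dz]

lhs = w_bar * (z_bar_prime - z_bar)
rhs = selection + transmission
print(lhs, rhs)                                  # the two sides agree (0.7375)
```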
1409.7211
Sahand Hormoz
Sahand Hormoz, Nicolas Desprat, Boris I. Shraiman
Inferring Epigenetic Dynamics from Kin Correlations
18 pages, 11 figures
null
10.1073/pnas.1504407112
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Populations of isogenic embryonic stem cells or clonal bacteria often exhibit extensive phenotypic heterogeneity which arises from stochastic intrinsic dynamics of cells. The internal state of the cell can be transmitted epigenetically in cell division, leading to correlations in the phenotypic states of cells related by descent. Therefore, a phenotypic snapshot of a collection of cells with known genealogical structure contains information on phenotypic dynamics. Here we use a model of phenotypic dynamics on a genealogical tree to define an inference method which allows us to extract an approximate probabilistic description of phenotypic dynamics based on measured correlations as a function of the degree of kinship. The approach is tested and validated on the example of Pyoverdine dynamics in P. aeruginosa colonies. Interestingly, we find that correlations among pairs and triples of distant relatives have a simple but non-trivial structure indicating that the observed phenotypic dynamics on the genealogical tree is approximately conformal - a symmetry characteristic of critical behavior in physical systems. The proposed inference method is sufficiently general to be applied in any system where lineage information is available.
[ { "created": "Thu, 25 Sep 2014 10:56:57 GMT", "version": "v1" }, { "created": "Tue, 3 Feb 2015 23:13:06 GMT", "version": "v2" } ]
2016-02-17
[ [ "Hormoz", "Sahand", "" ], [ "Desprat", "Nicolas", "" ], [ "Shraiman", "Boris I.", "" ] ]
Populations of isogenic embryonic stem cells or clonal bacteria often exhibit extensive phenotypic heterogeneity which arises from stochastic intrinsic dynamics of cells. The internal state of the cell can be transmitted epigenetically in cell division, leading to correlations in the phenotypic states of cells related by descent. Therefore, a phenotypic snapshot of a collection of cells with known genealogical structure contains information on phenotypic dynamics. Here we use a model of phenotypic dynamics on a genealogical tree to define an inference method which allows us to extract an approximate probabilistic description of phenotypic dynamics based on measured correlations as a function of the degree of kinship. The approach is tested and validated on the example of Pyoverdine dynamics in P. aeruginosa colonies. Interestingly, we find that correlations among pairs and triples of distant relatives have a simple but non-trivial structure indicating that the observed phenotypic dynamics on the genealogical tree is approximately conformal - a symmetry characteristic of critical behavior in physical systems. The proposed inference method is sufficiently general to be applied in any system where lineage information is available.
1305.1495
Lorenzo Del Castello
Alessandro Attanasi, Andrea Cavagna, Lorenzo Del Castello, Irene Giardina, Asja Jelic, Stefania Melillo, Leonardo Parisi, Fabio Pellacini, Edward Shen, Edmondo Silvestri, Massimiliano Viale
GReTA - a novel Global and Recursive Tracking Algorithm in three dimensions
13 pages, 6 figures, 3 tables. Version 3 was slightly shortened, and new comparative results on the public datasets (thermal infrared videos of flying bats) by Z. Wu and coworkers (2014) were included. In A. Attanasi et al., "GReTA - A Novel Global and Recursive Tracking Algorithm in Three Dimensions", IEEE Trans. Pattern Anal. Mach. Intell., vol. 37 (2015)
null
10.1109/TPAMI.2015.2414427
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking multiple moving targets allows quantitative measure of the dynamic behavior in systems as diverse as animal groups in biology, turbulence in fluid dynamics and crowd and traffic control. In three dimensions, tracking several targets becomes increasingly hard since optical occlusions are very likely, i.e. two featureless targets frequently overlap for several frames. Occlusions are particularly frequent in biological groups such as bird flocks, fish schools, and insect swarms, a fact that has severely limited collective animal behavior field studies in the past. This paper presents a 3D tracking method that is robust in the case of severe occlusions. To ensure robustness, we adopt a global optimization approach that works on all objects and frames at once. To achieve practicality and scalability, we employ a divide and conquer formulation, thanks to which the computational complexity of the problem is reduced by orders of magnitude. We tested our algorithm with synthetic data, with experimental data of bird flocks and insect swarms and with public benchmark datasets, and show that our system yields high quality trajectories for hundreds of moving targets with severe overlap. The results obtained on very heterogeneous data show the potential applicability of our method to the most diverse experimental situations.
[ { "created": "Tue, 7 May 2013 12:45:30 GMT", "version": "v1" }, { "created": "Thu, 24 Apr 2014 14:55:59 GMT", "version": "v2" }, { "created": "Fri, 17 Apr 2015 16:36:59 GMT", "version": "v3" } ]
2015-04-20
[ [ "Attanasi", "Alessandro", "" ], [ "Cavagna", "Andrea", "" ], [ "Del Castello", "Lorenzo", "" ], [ "Giardina", "Irene", "" ], [ "Jelic", "Asja", "" ], [ "Melillo", "Stefania", "" ], [ "Parisi", "Leonardo", "" ], [ "Pellacini", "Fabio", "" ], [ "Shen", "Edward", "" ], [ "Silvestri", "Edmondo", "" ], [ "Viale", "Massimiliano", "" ] ]
Tracking multiple moving targets allows quantitative measure of the dynamic behavior in systems as diverse as animal groups in biology, turbulence in fluid dynamics and crowd and traffic control. In three dimensions, tracking several targets becomes increasingly hard since optical occlusions are very likely, i.e. two featureless targets frequently overlap for several frames. Occlusions are particularly frequent in biological groups such as bird flocks, fish schools, and insect swarms, a fact that has severely limited collective animal behavior field studies in the past. This paper presents a 3D tracking method that is robust in the case of severe occlusions. To ensure robustness, we adopt a global optimization approach that works on all objects and frames at once. To achieve practicality and scalability, we employ a divide and conquer formulation, thanks to which the computational complexity of the problem is reduced by orders of magnitude. We tested our algorithm with synthetic data, with experimental data of bird flocks and insect swarms and with public benchmark datasets, and show that our system yields high quality trajectories for hundreds of moving targets with severe overlap. The results obtained on very heterogeneous data show the potential applicability of our method to the most diverse experimental situations.
2110.13976
Danil Tyulmankov
Danil Tyulmankov, Ching Fang, Annapurna Vadaparty, Guangyu Robert Yang
Biological learning in key-value memory networks
NeurIPS 2021
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
In neuroscience, classical Hopfield networks are the standard biologically plausible model of long-term memory, relying on Hebbian plasticity for storage and attractor dynamics for recall. In contrast, memory-augmented neural networks in machine learning commonly use a key-value mechanism to store and read out memories in a single step. Such augmented networks achieve impressive feats of memory compared to traditional variants, yet their biological relevance is unclear. We propose an implementation of basic key-value memory that stores inputs using a combination of biologically plausible three-factor plasticity rules. The same rules are recovered when network parameters are meta-learned. Our network performs on par with classical Hopfield networks on autoassociative memory tasks and can be naturally extended to continual recall, heteroassociative memory, and sequence learning. Our results suggest a compelling alternative to the classical Hopfield network as a model of biological long-term memory.
[ { "created": "Tue, 26 Oct 2021 19:26:53 GMT", "version": "v1" } ]
2021-10-28
[ [ "Tyulmankov", "Danil", "" ], [ "Fang", "Ching", "" ], [ "Vadaparty", "Annapurna", "" ], [ "Yang", "Guangyu Robert", "" ] ]
In neuroscience, classical Hopfield networks are the standard biologically plausible model of long-term memory, relying on Hebbian plasticity for storage and attractor dynamics for recall. In contrast, memory-augmented neural networks in machine learning commonly use a key-value mechanism to store and read out memories in a single step. Such augmented networks achieve impressive feats of memory compared to traditional variants, yet their biological relevance is unclear. We propose an implementation of basic key-value memory that stores inputs using a combination of biologically plausible three-factor plasticity rules. The same rules are recovered when network parameters are meta-learned. Our network performs on par with classical Hopfield networks on autoassociative memory tasks and can be naturally extended to continual recall, heteroassociative memory, and sequence learning. Our results suggest a compelling alternative to the classical Hopfield network as a model of biological long-term memory.
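A bare-bones key-value memory in NumPy is sketched below: each item is written into its own slot (key row plus value column) and recalled with a softmax read over slot similarities. The dimensions, the slot-per-item write rule, and the autoassociative setup are assumptions for illustration; the paper's three-factor plasticity rules are not reproduced.

```python
# Minimal key-value memory: slot-wise writes and a softmax-attention read
# (illustrative dimensions; the three-factor plasticity rules are omitted).
import numpy as np

rng = np.random.default_rng(3)
d, n_slots = 50, 8
W_k = np.zeros((n_slots, d))       # key matrix: one row per memory slot
W_v = np.zeros((d, n_slots))       # value matrix: one column per memory slot

patterns = rng.choice([-1.0, 1.0], size=(5, d))
for s, x in enumerate(patterns):   # write: store each pattern as key and value
    W_k[s] = x
    W_v[:, s] = x

def recall(query, beta=2.0):
    scores = W_k @ query / np.sqrt(d)          # similarity of query to each key
    attn = np.exp(beta * scores)
    attn /= attn.sum()
    return W_v @ attn                          # value read-out

noisy = patterns[2] * rng.choice([1, 1, 1, -1], size=d)   # flip ~25% of bits
print(np.mean(np.sign(recall(noisy)) == patterns[2]))     # fraction of bits recovered
```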
1707.00142
Haifei Yang
Haifei Yang, Lu Shi, Feng Liu, Yanmeng Zhang, Baohua Liu, Yangyang Li, Zhongyuan Shi and Shuyao Zhou
EEG and ECG changes during deep-sea manned submersible operation
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Deep-sea manned submersible operation could induce mental workload and influence neurophysiological measures. Psychophysiological responses to submersible operation are not well known. The main aim of this study was to investigate changes in EEG and ECG components and subjective mental stress of pilots during submersible operation. Methods: Six experienced submersible pilots performed a 3 h submersible operation task composed of 5 subtasks. Electroencephalogram (EEG) and electrocardiogram (ECG) were recorded before the operation task, after 1.5 h and 2.5 h of operation, and after the task. Subjective ratings of mental stress were also collected at these time points. Results: HR and scores on the subjective stress scale increased during the task compared to baseline (P<0.05). The LF/HF ratio at 1.5 h was higher than at baseline (P<0.05) and at 2.5 h (P<0.05). Relative theta power at the Cz site increased (P<0.01) and relative alpha power decreased (P<0.01) at 2.5 h compared to values at baseline. Alpha attenuation coefficient (AAC, ratio of mean alpha power during eyes closed versus eyes open) values at 2.5 h and after the task were lower compared to baseline and 1.5 h (P<0.05 or less). Conclusions: Submersible operation resulted in an increased HR in association with mental stress, alterations in autonomic activity, and EEG changes that expressed variations in mental workload. Brain arousal level declined during the later operation period.
[ { "created": "Sat, 1 Jul 2017 11:49:47 GMT", "version": "v1" } ]
2017-07-04
[ [ "Yang", "Haifei", "" ], [ "Shi", "Lu", "" ], [ "Liu", "Feng", "" ], [ "Zhang", "Yanmeng", "" ], [ "Liu", "Baohua", "" ], [ "Li", "Yangyang", "" ], [ "Shi", "Zhongyuan", "" ], [ "Zhou", "Shuyao", "" ] ]
Background: Deep-sea manned submersible operation could induce mental workload and influence neurophysiological measures. Psychophysiological responses to submersible operation are not well known. The main aim of this study was to investigate changes in EEG and ECG components and subjective mental stress of pilots during submersible operation. Methods: Six experienced submersible pilots performed a 3 h submersible operation task composed of 5 subtasks. Electroencephalogram (EEG) and electrocardiogram (ECG) were recorded before the operation task, after 1.5 h and 2.5 h of operation, and after the task. Subjective ratings of mental stress were also collected at these time points. Results: HR and scores on the subjective stress scale increased during the task compared to baseline (P<0.05). The LF/HF ratio at 1.5 h was higher than at baseline (P<0.05) and at 2.5 h (P<0.05). Relative theta power at the Cz site increased (P<0.01) and relative alpha power decreased (P<0.01) at 2.5 h compared to values at baseline. Alpha attenuation coefficient (AAC, ratio of mean alpha power during eyes closed versus eyes open) values at 2.5 h and after the task were lower compared to baseline and 1.5 h (P<0.05 or less). Conclusions: Submersible operation resulted in an increased HR in association with mental stress, alterations in autonomic activity, and EEG changes that expressed variations in mental workload. Brain arousal level declined during the later operation period.
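A small sketch of how an alpha attenuation coefficient of the kind reported above can be computed from raw EEG: Welch power spectra for eyes-closed and eyes-open segments, integrated over the alpha band. The sampling rate, band limits, and the synthetic signals are assumptions, not details from the study.

```python
# Alpha attenuation coefficient (AAC): alpha-band power eyes-closed / eyes-open,
# estimated with Welch spectra (synthetic signals and band limits for illustration).
import numpy as np
from scipy.signal import welch

fs = 250.0
t = np.arange(0, 60.0, 1 / fs)
rng = np.random.default_rng(4)
eeg_closed = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)  # strong alpha
eeg_open = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)    # attenuated alpha

def band_power(x, lo=8.0, hi=13.0):
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))     # 4-second windows
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

aac = band_power(eeg_closed) / band_power(eeg_open)
print(f"alpha attenuation coefficient: {aac:.2f}")
```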
1309.2600
Eric Forgoston
Eric Forgoston and Ira B. Schwartz
Predicting unobserved exposures from seasonal epidemic data
24 pages, 6 figures; Final version in Bulletin of Mathematical Biology
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a stochastic Susceptible-Exposed-Infected-Recovered (SEIR) epidemiological model with a contact rate that fluctuates seasonally. Through the use of a nonlinear, stochastic projection, we are able to analytically determine the lower dimensional manifold on which the deterministic and stochastic dynamics correctly interact. Our method produces a low dimensional stochastic model that captures the same timing of disease outbreak and the same amplitude and phase of recurrent behavior seen in the high dimensional model. Given seasonal epidemic data consisting of the number of infectious individuals, our method enables a data-based model prediction of the number of unobserved exposed individuals over very long times.
[ { "created": "Tue, 10 Sep 2013 18:14:19 GMT", "version": "v1" } ]
2013-09-11
[ [ "Forgoston", "Eric", "" ], [ "Schwartz", "Ira B.", "" ] ]
We consider a stochastic Susceptible-Exposed-Infected-Recovered (SEIR) epidemiological model with a contact rate that fluctuates seasonally. Through the use of a nonlinear, stochastic projection, we are able to analytically determine the lower dimensional manifold on which the deterministic and stochastic dynamics correctly interact. Our method produces a low dimensional stochastic model that captures the same timing of disease outbreak and the same amplitude and phase of recurrent behavior seen in the high dimensional model. Given seasonal epidemic data consisting of the number of infectious individuals, our method enables a data-based model prediction of the number of unobserved exposed individuals over very long times.
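For orientation, the deterministic skeleton of a seasonally forced SEIR model can be integrated as below; the parameter values and the sinusoidal form of the contact rate are illustrative assumptions, and the paper's stochastic projection onto a lower-dimensional manifold is not shown.

```python
# Deterministic SEIR with a seasonally fluctuating contact rate (illustrative
# parameters; the paper's stochastic projection is not reproduced here).
import numpy as np
from scipy.integrate import solve_ivp

mu = 1 / 70.0                         # birth/death rate (per year)
sigma = 365 / 8.0                     # 1 / latent period (per year)
gamma = 365 / 5.0                     # 1 / infectious period (per year)
beta0, amp = 1500.0, 0.2              # mean contact rate and seasonal amplitude

def seir(t, y):
    S, E, I, R = y
    beta = beta0 * (1 + amp * np.cos(2 * np.pi * t))   # annual forcing
    return [mu - beta * S * I - mu * S,
            beta * S * I - (sigma + mu) * E,
            sigma * E - (gamma + mu) * I,
            gamma * I - mu * R]

y0 = [0.06, 1e-4, 1e-4, 0.9398]
t_eval = np.linspace(0, 20, 20_001)
sol = solve_ivp(seir, (0, 20), y0, t_eval=t_eval, method="LSODA", rtol=1e-8)
print("peak prevalence in years 15-20:", sol.y[2][t_eval >= 15].max())
```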
1601.07086
Miklos Z. Racz
Shirshendu Ganguly, Elchanan Mossel, Miklos Z. Racz
Sequence assembly from corrupted shotgun reads
13 pages, 2 figures
null
null
null
q-bio.GN cs.IT math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prevalent technique for DNA sequencing consists of two main steps: shotgun sequencing, where many randomly located fragments, called reads, are extracted from the overall sequence, followed by an assembly algorithm that aims to reconstruct the original sequence. There are many different technologies that generate the reads: widely-used second-generation methods create short reads with low error rates, while emerging third-generation methods create long reads with high error rates. Both error rates and error profiles differ among methods, so reconstruction algorithms are often tailored to specific shotgun sequencing technologies. As these methods change over time, a fundamental question is whether there exist reconstruction algorithms which are robust, i.e., which perform well under a wide range of error distributions. Here we study this question of sequence assembly from corrupted reads. We make no assumption on the types of errors in the reads, but only assume a bound on their magnitude. More precisely, for each read we assume that instead of receiving the true read with no errors, we receive a corrupted read which has edit distance at most $\epsilon$ times the length of the read from the true read. We show that if the reads are long enough and there are sufficiently many of them, then approximate reconstruction is possible: we construct a simple algorithm such that for almost all original sequences the output of the algorithm is a sequence whose edit distance from the original one is at most $O(\epsilon)$ times the length of the original sequence.
[ { "created": "Tue, 26 Jan 2016 16:29:02 GMT", "version": "v1" } ]
2016-01-28
[ [ "Ganguly", "Shirshendu", "" ], [ "Mossel", "Elchanan", "" ], [ "Racz", "Miklos Z.", "" ] ]
The prevalent technique for DNA sequencing consists of two main steps: shotgun sequencing, where many randomly located fragments, called reads, are extracted from the overall sequence, followed by an assembly algorithm that aims to reconstruct the original sequence. There are many different technologies that generate the reads: widely-used second-generation methods create short reads with low error rates, while emerging third-generation methods create long reads with high error rates. Both error rates and error profiles differ among methods, so reconstruction algorithms are often tailored to specific shotgun sequencing technologies. As these methods change over time, a fundamental question is whether there exist reconstruction algorithms which are robust, i.e., which perform well under a wide range of error distributions. Here we study this question of sequence assembly from corrupted reads. We make no assumption on the types of errors in the reads, but only assume a bound on their magnitude. More precisely, for each read we assume that instead of receiving the true read with no errors, we receive a corrupted read which has edit distance at most $\epsilon$ times the length of the read from the true read. We show that if the reads are long enough and there are sufficiently many of them, then approximate reconstruction is possible: we construct a simple algorithm such that for almost all original sequences the output of the algorithm is a sequence whose edit distance from the original one is at most $O(\epsilon)$ times the length of the original sequence.
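A toy greedy assembler for error-free reads is sketched below, repeatedly merging the pair with the longest exact suffix-prefix overlap until one contig remains. Handling edit-distance-bounded corruption, which is the point of the paper, would require an approximate-overlap step that is not attempted here; the example genome and read layout are made up.

```python
# Toy greedy shotgun assembly with exact suffix-prefix overlaps (error-free
# reads only; robustness to edit-distance corruption is not reproduced).
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:
            break                          # no overlaps left; stop with fragments
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return reads

genome = "ATGGCGTGCAATGCCGTTAGC"
reads = [genome[s:s + 12] for s in (0, 4, 9)]    # three overlapping reads
print(greedy_assemble(reads))                    # reconstructs the genome
```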
2405.06303
Yuichi Togashi
Jin Kousaka, Atsuko H. Iwane, Yuichi Togashi
Automated Cell Structure Extraction for 3D Electron Microscopy by Deep Learning
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling the 3D structures of cells and tissues is crucial in biology. Sequential cross-sectional images from electron microscopy provide high-resolution intracellular structure information. Segmentation of complex cell structures remains a laborious manual task for experts, demanding time and effort. This bottleneck in analyzing biological images requires efficient and automated solutions. This study explores deep learning-based automated segmentation of biological images, enabling accurate reconstruction of the 3D structures of cells and organelles. We constructed an analysis system for cell images of Cyanidioschyzon merolae, a primitive unicellular red alga. This system utilizes sequential cross-sectional images captured by a Focused Ion Beam Scanning Electron Microscope (FIB-SEM). We adopted the U-Net and performed pre-training to identify and segment cell organelles from single-cell images. In addition, we employed the Segment Anything Model (SAM) and the 3D watershed algorithm to extract individual 3D images of each cell from large-scale microscope images containing numerous cells. Finally, we applied the pre-trained U-Net to segment each structure within these 3D images. Through this procedure, we could fully automate the creation of 3D cell models. Our approach should apply to other cell types, and we aim to build a versatile analysis system. We will also explore adopting other deep learning techniques and combinations of image processing methods to further enhance segmentation accuracy.
[ { "created": "Fri, 10 May 2024 08:16:41 GMT", "version": "v1" } ]
2024-05-13
[ [ "Kousaka", "Jin", "" ], [ "Iwane", "Atsuko H.", "" ], [ "Togashi", "Yuichi", "" ] ]
Modeling the 3D structures of cells and tissues is crucial in biology. Sequential cross-sectional images from electron microscopy provide high-resolution intracellular structure information. Segmentation of complex cell structures remains a laborious manual task for experts, demanding time and effort. This bottleneck in analyzing biological images requires efficient and automated solutions. This study explores deep learning-based automated segmentation of biological images, enabling accurate reconstruction of the 3D structures of cells and organelles. We constructed an analysis system for cell images of Cyanidioschyzon merolae, a primitive unicellular red alga. This system utilizes sequential cross-sectional images captured by a Focused Ion Beam Scanning Electron Microscope (FIB-SEM). We adopted the U-Net and performed pre-training to identify and segment cell organelles from single-cell images. In addition, we employed the Segment Anything Model (SAM) and the 3D watershed algorithm to extract individual 3D images of each cell from large-scale microscope images containing numerous cells. Finally, we applied the pre-trained U-Net to segment each structure within these 3D images. Through this procedure, we could fully automate the creation of 3D cell models. Our approach should apply to other cell types, and we aim to build a versatile analysis system. We will also explore adopting other deep learning techniques and combinations of image processing methods to further enhance segmentation accuracy.
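For readers unfamiliar with the architecture, a minimal two-level U-Net for per-pixel organelle segmentation is sketched below in PyTorch; the channel counts, depth, and number of classes are assumptions and do not reflect the exact network used in the paper's pipeline.

```python
# Minimal 2-level U-Net for 2D organelle segmentation (illustrative channel
# counts and depth; not the exact architecture used in the paper).
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1 = double_conv(1, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # (B, 16, H, W)
        e2 = self.enc2(self.pool(e1))           # (B, 32, H/2, W/2)
        b = self.bottom(self.pool(e2))          # (B, 64, H/4, W/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                    # per-pixel class logits

net = TinyUNet()
x = torch.randn(2, 1, 128, 128)                 # two grayscale FIB-SEM tiles
print(net(x).shape)                             # torch.Size([2, 4, 128, 128])
```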
0901.1320
Emmanuel Tannenbaum
Maya Kleiman and Emmanuel Tannenbaum
Diploidy and the selective advantage for sexual reproduction in unicellular organisms
90 pages, no figures. Submitted to Theory in Biosciences (figures included in version submitted for publication)
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper develops mathematical models describing the evolutionary dynamics of both asexually and sexually reproducing populations of diploid unicellular organisms. We consider two forms of genome organization. In one case, we assume that the genome consists of two multi-gene chromosomes, while in the second case we assume that each gene defines a separate chromosome. If the organism has $ l $ homologous pairs that lack a functional copy of the given gene, then the fitness of the organism is $ \kappa_l $. The $ \kappa_l $ are assumed to be monotonically decreasing, so that $ \kappa_0 = 1 > \kappa_1 > \kappa_2 > ... > \kappa_{\infty} = 0 $. For nearly all of the reproduction strategies we consider, we find, in the limit of large $ N $, that the mean fitness at mutation-selection balance is $ \max\{2 e^{-\mu} - 1, 0\} $, where $ N $ is the number of genes in the haploid set of the genome, $ \epsilon $ is the probability that a given DNA template strand of a given gene produces a mutated daughter during replication, and $ \mu = N \epsilon $. The only exception is the sexual reproduction pathway for the multi-chromosomed genome. Assuming a multiplicative fitness landscape where $ \kappa_l = \alpha^{l} $ for $ \alpha \in (0, 1) $, this strategy is found to have a mean fitness that exceeds the mean fitness of all of the other strategies. Furthermore, while the other reproduction strategies experience a total loss of viability due to the steady accumulation of deleterious mutations once $ \mu $ exceeds $ \ln 2 $, no such transition occurs in the sexual pathway. The results of this paper demonstrate a selective advantage for sexual reproduction with fewer and much less restrictive assumptions than previous work.
[ { "created": "Fri, 9 Jan 2009 21:08:27 GMT", "version": "v1" }, { "created": "Fri, 23 Jan 2009 23:12:05 GMT", "version": "v2" }, { "created": "Fri, 17 Jul 2009 10:53:31 GMT", "version": "v3" } ]
2009-07-17
[ [ "Kleiman", "Maya", "" ], [ "Tannenbaum", "Emmanuel", "" ] ]
This paper develops mathematical models describing the evolutionary dynamics of both asexually and sexually reproducing populations of diploid unicellular organisms. We consider two forms of genome organization. In one case, we assume that the genome consists of two multi-gene chromosomes, while in the second case we assume that each gene defines a separate chromosome. If the organism has $ l $ homologous pairs that lack a functional copy of the given gene, then the fitness of the organism is $ \kappa_l $. The $ \kappa_l $ are assumed to be monotonically decreasing, so that $ \kappa_0 = 1 > \kappa_1 > \kappa_2 > ... > \kappa_{\infty} = 0 $. For nearly all of the reproduction strategies we consider, we find, in the limit of large $ N $, that the mean fitness at mutation-selection balance is $ \max\{2 e^{-\mu} - 1, 0\} $, where $ N $ is the number of genes in the haploid set of the genome, $ \epsilon $ is the probability that a given DNA template strand of a given gene produces a mutated daughter during replication, and $ \mu = N \epsilon $. The only exception is the sexual reproduction pathway for the multi-chromosomed genome. Assuming a multiplicative fitness landscape where $ \kappa_l = \alpha^{l} $ for $ \alpha \in (0, 1) $, this strategy is found to have a mean fitness that exceeds the mean fitness of all of the other strategies. Furthermore, while the other reproduction strategies experience a total loss of viability due to the steady accumulation of deleterious mutations once $ \mu $ exceeds $ \ln 2 $, no such transition occurs in the sexual pathway. The results of this paper demonstrate a selective advantage for sexual reproduction with fewer and much less restrictive assumptions than previous work.
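To make the mutation-selection balance result above concrete, the snippet below evaluates the stated mean fitness $\max\{2 e^{-\mu} - 1, 0\}$ over a range of $\mu = N\epsilon$ and confirms that it reaches zero at $\mu = \ln 2$, the loss-of-viability threshold quoted in the abstract. It is only a numerical illustration of the closed-form expression, not a re-derivation of the models.

```python
import numpy as np

def mean_fitness(mu: np.ndarray) -> np.ndarray:
    """Mean fitness at mutation-selection balance: max{2*exp(-mu) - 1, 0}."""
    return np.maximum(2.0 * np.exp(-mu) - 1.0, 0.0)

mu = np.linspace(0.0, 1.2, 7)
for m, w in zip(mu, mean_fitness(mu)):
    print(f"mu = {m:.2f}  mean fitness = {w:.4f}")

# Viability is lost once mu exceeds ln 2 (approximately 0.693).
print("threshold ln 2 =", np.log(2.0), "->", mean_fitness(np.array([np.log(2.0)]))[0])
```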
1707.08726
Neil MacKinnon
Jan G. Korvink, Vlad Badilita, Lorenzo Bordonali, Mazin Jouda, Dario Mager, Neil MacKinnon
NMR microscopy for in vivo metabolomics, digitally twinned by computational systems biology, needs a sensitivity boost
9 pages, 2 figures, Perspective
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The metabolism of an organism is regulated at the cellular level, yet is strongly influenced by its environment. The precise metabolomic study of living organisms is currently hampered by measurement sensitivity: most metabolomic measurement techniques involve some compromise, in that averaging is performed over a volume significantly larger than a single cell, or require invasion of the organism, or arrest the state of the organism. NMR is an inherently non-invasive chemometric and imaging method, and hence in principle suitable for metabolomic measurements. The digital twin of metabolomics is computational systems biology, so that NMR microscopy is potentially a viable approach with which to join the theoretical and experimental exploration of the metabolomic and behavioural response of organisms. This prospect paper considers the challenge of performing in vivo NMR-based metabolomics on the small organism C. elegans, points the way towards possible solutions created using MEMS techniques, and highlights currently insurmountable challenges.
[ { "created": "Thu, 27 Jul 2017 07:06:38 GMT", "version": "v1" } ]
2017-07-28
[ [ "Korvink", "Jan G.", "" ], [ "Badilita", "Vlad", "" ], [ "Bordonali", "Lorenzo", "" ], [ "Jouda", "Mazin", "" ], [ "Mager", "Dario", "" ], [ "MacKinnon", "Neil", "" ] ]
The metabolism of an organism is regulated at the cellular level, yet is strongly influenced by its environment. The precise metabolomic study of living organisms is currently hampered by measurement sensitivity: most metabolomic measurement techniques involve some compromise, in that averaging is performed over a volume significantly larger than a single cell, or require invasion of the organism, or arrest the state of the organism. NMR is an inherently non-invasive chemometric and imaging method, and hence in principle suitable for metabolomic measurements. The digital twin of metabolomics is computational systems biology, so that NMR microscopy is potentially a viable approach with which to join the theoretical and experimental exploration of the metabolomic and behavioural response of organisms. This prospect paper considers the challenge of performing in vivo NMR-based metabolomics on the small organism C. elegans, points the way towards possible solutions created using MEMS techniques, and highlights currently insurmountable challenges.
1902.05764
Pejman Farhadi Ghalati
Pejman F. Ghalati, Satya S. Samal, Jayesh S. Bhat, Robert Deisz, Gernot Marx and Andreas Schuppert
Critical Transitions in Intensive Care Units: A Sepsis Case Study
16 pages, 8 figures, 2 tables
Scientific Reports, 9(1) (2019)
10.1038/s41598-019-49006-2
null
q-bio.QM cs.CE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The progression of complex human diseases is associated with critical transitions across dynamical regimes. These transitions often spawn early-warning signals and provide insights into the underlying disease-driving mechanisms. In this paper, we propose a computational method based on surprise loss (SL) to discover data-driven indicators of such transitions in a multivariate time series dataset of septic shock and non-sepsis patient cohorts (MIMIC-III database). The core idea of SL is to train a mathematical model on time series in an unsupervised fashion and to quantify the deterioration of the model's forecast (out-of-sample) performance relative to its past (in-sample) performance. Considering the highest value of the moving average of SL as a critical transition, our retrospective analysis revealed that critical transitions occurred at a median of over 35 hours before the onset of septic shock, which suggests the applicability of our method as an early-warning indicator. Furthermore, we show that clinical variables at critical-transition regions are significantly different between septic shock and non-sepsis cohorts. Therefore, our paper contributes a critical-transition-based data-sampling strategy that can be utilized for further analysis, such as patient classification. Moreover, our method outperformed other indicators of critical transition in complex systems, such as temporal autocorrelation and variance.
[ { "created": "Fri, 15 Feb 2019 11:00:51 GMT", "version": "v1" }, { "created": "Wed, 23 Oct 2019 09:34:47 GMT", "version": "v2" } ]
2019-10-24
[ [ "Ghalati", "Pejman F.", "" ], [ "Samal", "Satya S.", "" ], [ "Bhat", "Jayesh S.", "" ], [ "Deisz", "Robert", "" ], [ "Marx", "Gernot", "" ], [ "Schuppert", "Andreas", "" ] ]
The progression of complex human diseases is associated with critical transitions across dynamical regimes. These transitions often spawn early-warning signals and provide insights into the underlying disease-driving mechanisms. In this paper, we propose a computational method based on surprise loss (SL) to discover data-driven indicators of such transitions in a multivariate time series dataset of septic shock and non-sepsis patient cohorts (MIMIC-III database). The core idea of SL is to train a mathematical model on time series in an unsupervised fashion and to quantify the deterioration of the model's forecast (out-of-sample) performance relative to its past (in-sample) performance. Considering the highest value of the moving average of SL as a critical transition, our retrospective analysis revealed that critical transitions occurred at a median of over 35 hours before the onset of septic shock, which suggests the applicability of our method as an early-warning indicator. Furthermore, we show that clinical variables at critical-transition regions are significantly different between septic shock and non-sepsis cohorts. Therefore, our paper contributes a critical-transition-based data-sampling strategy that can be utilized for further analysis, such as patient classification. Moreover, our method outperformed other indicators of critical transition in complex systems, such as temporal autocorrelation and variance.
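A minimal sketch of the surprise-loss idea described above: fit a forecasting model on a sliding in-sample window, compare its out-of-sample error with its in-sample error, and flag the peak of the moving average of that gap as a candidate critical transition. The choice of an AR(1)-style least-squares predictor, the window lengths, and the function names are assumptions made for illustration; the paper's actual model and preprocessing are not reproduced here.

```python
import numpy as np

def surprise_loss(series: np.ndarray, window: int = 50, horizon: int = 10) -> np.ndarray:
    """Out-of-sample minus in-sample squared error of a simple AR(1) fit,
    computed on a sliding window over a univariate time series."""
    sl = np.full(len(series), np.nan)
    for t in range(window, len(series) - horizon):
        past = series[t - window:t]
        # Least-squares AR(1) coefficient on the in-sample window.
        x, y = past[:-1], past[1:]
        a = np.dot(x, y) / (np.dot(x, x) + 1e-12)
        in_err = np.mean((y - a * x) ** 2)
        fut = series[t:t + horizon + 1]
        out_err = np.mean((fut[1:] - a * fut[:-1]) ** 2)
        sl[t] = out_err - in_err
    return sl

def moving_average(x: np.ndarray, k: int = 20) -> np.ndarray:
    return np.convolve(np.nan_to_num(x), np.ones(k) / k, mode="same")

# Hypothetical usage: the arg-max of the smoothed surprise loss marks the transition.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 2, 100)])
smoothed = moving_average(surprise_loss(signal))
print("candidate critical transition at index", int(np.nanargmax(smoothed)))
```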
1304.7205
Alexander Stewart
Alexander J. Stewart and Joshua B. Plotkin
From extortion to generosity, the evolution of zero-determinant strategies in the prisoner's dilemma
null
PNAS September 17, 2013 vol. 110 no. 38 15348-15353
10.1073/pnas.1306246110
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has revealed a new class of "zero-determinant" (ZD) strategies for iterated, two-player games. ZD strategies allow a player to unilaterally enforce a linear relationship between her score and her opponent's score, and thus achieve an unusual degree of control over both players' long-term payoffs. Although originally conceived in the context of classical, two-player game theory, ZD strategies also have consequences in evolving populations of players. Here we explore the evolutionary prospects for ZD strategies in the Iterated Prisoner's Dilemma (IPD). Several recent studies have focused on the evolution of "extortion strategies" - a subset of zero-determinant strategies - and found them to be unsuccessful in populations. Nevertheless, we identify a different subset of ZD strategies, called "generous ZD strategies", that forgive defecting opponents, but nonetheless dominate in evolving populations. For all but the smallest population sizes, generous ZD strategies are not only robust to being replaced by other strategies, but they also can selectively replace any non-cooperative ZD strategy. Generous strategies can be generalized beyond the space of ZD strategies, and they remain robust to invasion. When evolution occurs on the full set of all IPD strategies, selection disproportionately favors these generous strategies. In some regimes, generous strategies outperform even the most successful of the well-known Iterated Prisoner's Dilemma strategies, including win-stay-lose-shift.
[ { "created": "Fri, 26 Apr 2013 15:37:49 GMT", "version": "v1" }, { "created": "Fri, 3 May 2013 21:40:15 GMT", "version": "v2" }, { "created": "Fri, 27 Dec 2013 14:01:24 GMT", "version": "v3" } ]
2013-12-30
[ [ "Stewart", "Alexander J.", "" ], [ "Plotkin", "Joshua B.", "" ] ]
Recent work has revealed a new class of "zero-determinant" (ZD) strategies for iterated, two-player games. ZD strategies allow a player to unilaterally enforce a linear relationship between her score and her opponent's score, and thus achieve an unusual degree of control over both players' long-term payoffs. Although originally conceived in the context of classical, two-player game theory, ZD strategies also have consequences in evolving populations of players. Here we explore the evolutionary prospects for ZD strategies in the Iterated Prisoner's Dilemma (IPD). Several recent studies have focused on the evolution of "extortion strategies" - a subset of zero-determinant strategies - and found them to be unsuccessful in populations. Nevertheless, we identify a different subset of ZD strategies, called "generous ZD strategies", that forgive defecting opponents, but nonetheless dominate in evolving populations. For all but the smallest population sizes, generous ZD strategies are not only robust to being replaced by other strategies, but they also can selectively replace any non-cooperative ZD strategy. Generous strategies can be generalized beyond the space of ZD strategies, and they remain robust to invasion. When evolution occurs on the full set of all IPD strategies, selection disproportionately favors these generous strategies. In some regimes, generous strategies outperform even the most successful of the well-known Iterated Prisoner's Dilemma strategies, including win-stay-lose-shift.
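The sketch below illustrates the kind of calculation behind zero-determinant strategies: for two memory-one strategies in the iterated Prisoner's Dilemma, the long-run payoffs follow from the stationary distribution of a 4-state Markov chain over the outcomes (CC, CD, DC, DD). The payoff values and the example forgiving ("generous"-style) strategy vector are standard illustrative choices, not parameters taken from the paper.

```python
import numpy as np

# IPD payoffs to player 1 and player 2 for outcomes (CC, CD, DC, DD).
S1 = np.array([3.0, 0.0, 5.0, 1.0])
S2 = np.array([3.0, 5.0, 0.0, 1.0])

def long_run_payoffs(p: np.ndarray, q: np.ndarray):
    """p, q: cooperation probabilities of each player after CC, CD, DC, DD.
    Returns the stationary payoffs (s1, s2) of the outcome Markov chain."""
    # q is specified from player 2's own perspective, so swap the CD/DC entries.
    q_ = q[[0, 2, 1, 3]]
    M = np.empty((4, 4))
    for i in range(4):
        pc, qc = p[i], q_[i]
        M[i] = [pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)]
    # Stationary distribution: left eigenvector of M for eigenvalue 1.
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    return float(pi @ S1), float(pi @ S2)

# An illustrative forgiving memory-one strategy against a defector-leaning opponent.
generous = np.array([1.0, 0.3, 1.0, 0.3])
exploiter = np.array([0.8, 0.1, 0.6, 0.05])
print(long_run_payoffs(generous, exploiter))
```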
1808.08230
Andrew Borkowski M.D.
Andrew A. Borkowski, Catherine P. Wilson, Steven A. Borkowski, Lauren A. Deland, Stephen M. Mastorides
Using Apple Machine Learning Algorithms to Detect and Subclassify Non-Small Cell Lung Cancer
12 pages, 2 tables, 3 figures
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lung cancer continues to be a major healthcare challenge with high morbidity and mortality rates among both men and women worldwide. The majority of lung cancer cases are of the non-small cell lung cancer type. With the advent of targeted cancer therapy, it is imperative not only to properly diagnose but also to sub-classify non-small cell lung cancer. In our study, we evaluated the utility of using the Apple Create ML module to detect and sub-classify non-small cell carcinomas based on histopathological images. After module optimization, the program detected 100% of non-small cell lung cancer images and successfully sub-classified the majority of the images. Trained modules, such as ours, can be utilized in diagnostic smartphone-based applications, augmenting diagnostic services in understaffed areas of the world.
[ { "created": "Fri, 24 Aug 2018 13:57:40 GMT", "version": "v1" }, { "created": "Fri, 18 Jan 2019 20:09:48 GMT", "version": "v2" } ]
2019-01-23
[ [ "Borkowski", "Andrew A.", "" ], [ "Wilson", "Catherine P.", "" ], [ "Borkowski", "Steven A.", "" ], [ "Deland", "Lauren A.", "" ], [ "Mastorides", "Stephen M.", "" ] ]
Lung cancer continues to be a major healthcare challenge with high morbidity and mortality rates among both men and women worldwide. The majority of lung cancer cases are of the non-small cell lung cancer type. With the advent of targeted cancer therapy, it is imperative not only to properly diagnose but also to sub-classify non-small cell lung cancer. In our study, we evaluated the utility of using the Apple Create ML module to detect and sub-classify non-small cell carcinomas based on histopathological images. After module optimization, the program detected 100% of non-small cell lung cancer images and successfully sub-classified the majority of the images. Trained modules, such as ours, can be utilized in diagnostic smartphone-based applications, augmenting diagnostic services in understaffed areas of the world.
1201.2929
Alexei Koulakov
Alexei A. Koulakov and Yuri Lazebnik
The Problem of Colliding Networks and its Relation to Cancer
null
null
10.1016/j.bpj.2012.08.062
null
q-bio.MN cond-mat.dis-nn cond-mat.soft physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex systems, ranging from living cells to human societies, can be represented as attractor networks, whose basic property is to exist in one of allowed states, or attractors. We noted that merging two systems that are in distinct attractors creates uncertainty, as the hybrid system cannot assume two attractors at once. As a prototype of this problem, we explore cell fusion, whose ability to combine distinct cells into hybrids was proposed to cause cancer. By simulating cell types as attractors, we find that hybrids are prone to assume spurious attractors, which are emergent and sporadic states of networks, and propose that cell fusion can make a cell cancerous by placing it into normally inaccessible spurious states. We define basic features of hybrid networks and suggest that the problem of colliding networks has general significance in processes represented by attractor networks, including biological, social, and political phenomena.
[ { "created": "Fri, 13 Jan 2012 20:23:35 GMT", "version": "v1" } ]
2015-06-03
[ [ "Koulakov", "Alexei A.", "" ], [ "Lazebnik", "Yuri", "" ] ]
Complex systems, ranging from living cells to human societies, can be represented as attractor networks, whose basic property is to exist in one of allowed states, or attractors. We noted that merging two systems that are in distinct attractors creates uncertainty, as the hybrid system cannot assume two attractors at once. As a prototype of this problem, we explore cell fusion, whose ability to combine distinct cells into hybrids was proposed to cause cancer. By simulating cell types as attractors, we find that hybrids are prone to assume spurious attractors, which are emergent and sporadic states of networks, and propose that cell fusion can make a cell cancerous by placing it into normally inaccessible spurious states. We define basic features of hybrid networks and suggest that the problem of colliding networks has general significance in processes represented by attractor networks, including biological, social, and political phenomena.
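The "spurious attractor" phenomenon invoked above can be illustrated with a small Hopfield-style attractor network: patterns stored with the Hebb rule are stable fixed points, but a hybrid state built by mixing two stored patterns can relax into a spurious mixture state that was never stored. This is a generic attractor-network toy model, not the specific network used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 200, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian weights, zero diagonal.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def relax(state: np.ndarray, steps: int = 50) -> np.ndarray:
    """Asynchronous updates until the state settles into an attractor."""
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# "Fuse" two cell types: take half the units from each stored pattern.
hybrid = np.where(np.arange(n) < n // 2, patterns[0], patterns[1])
final = relax(hybrid)

overlaps = patterns @ final / n  # overlap near +/-1 means a stored pattern; in-between is spurious
print("overlaps with stored patterns:", np.round(overlaps, 2))
```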
2211.07363
Yueting Li
Yueting Li, Qingyue Wei, Ehsan Adeli, Kilian M. Pohl, and Qingyu Zhao
Joint Graph Convolution for Analyzing Brain Structural and Functional Connectome
null
null
null
null
q-bio.NC cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The white-matter (micro-)structural architecture of the brain promotes synchrony among neuronal populations, giving rise to richly patterned functional connections. A fundamental problem for systems neuroscience is determining the best way to relate structural and functional networks quantified by diffusion tensor imaging and resting-state functional MRI. As one of the state-of-the-art approaches for network analysis, graph convolutional networks (GCN) have been separately used to analyze functional and structural networks, but have not been applied to explore inter-network relationships. In this work, we propose to couple the two networks of an individual by adding inter-network edges between corresponding brain regions, so that the joint structure-function graph can be directly analyzed by a single GCN. The weights of inter-network edges are learnable, reflecting non-uniform structure-function coupling strength across the brain. We apply our Joint-GCN to predict age and sex of 662 participants from the public dataset of the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA) based on their functional and micro-structural white-matter networks. Our results support that the proposed Joint-GCN outperforms existing multi-modal graph learning approaches for analyzing structural and functional networks.
[ { "created": "Thu, 27 Oct 2022 23:43:34 GMT", "version": "v1" } ]
2022-11-15
[ [ "Li", "Yueting", "" ], [ "Wei", "Qingyue", "" ], [ "Adeli", "Ehsan", "" ], [ "Pohl", "Kilian M.", "" ], [ "Zhao", "Qingyu", "" ] ]
The white-matter (micro-)structural architecture of the brain promotes synchrony among neuronal populations, giving rise to richly patterned functional connections. A fundamental problem for systems neuroscience is determining the best way to relate structural and functional networks quantified by diffusion tensor imaging and resting-state functional MRI. As one of the state-of-the-art approaches for network analysis, graph convolutional networks (GCN) have been separately used to analyze functional and structural networks, but have not been applied to explore inter-network relationships. In this work, we propose to couple the two networks of an individual by adding inter-network edges between corresponding brain regions, so that the joint structure-function graph can be directly analyzed by a single GCN. The weights of inter-network edges are learnable, reflecting non-uniform structure-function coupling strength across the brain. We apply our Joint-GCN to predict age and sex of 662 participants from the public dataset of the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA) based on their functional and micro-structural white-matter networks. Our results support that the proposed Joint-GCN outperforms existing multi-modal graph learning approaches for analyzing structural and functional networks.
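A minimal sketch of the graph-coupling idea described above: build one joint adjacency matrix that places the structural and functional connectomes on the diagonal blocks and adds inter-network edges between corresponding regions on the off-diagonal blocks, so that a single graph-convolution layer can propagate information across both modalities. The matrix shapes, the uniform initialisation of the coupling weights (learnable in the real model), and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def joint_adjacency(A_struct: np.ndarray, A_func: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Couple two R x R connectomes into a 2R x 2R joint graph.

    w: per-region inter-network edge weights (learnable in the real model;
    here just a plain array)."""
    coupling = np.diag(w)                      # edge region_i(struct) <-> region_i(func)
    top = np.hstack([A_struct, coupling])
    bottom = np.hstack([coupling, A_func])
    return np.vstack([top, bottom])

def gcn_layer(A: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: symmetric normalisation, then ReLU(A_hat X W)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ X @ W, 0.0)

# Hypothetical usage with random 90-region connectomes and node features.
rng = np.random.default_rng(0)
R, F = 90, 16
A_s, A_f = rng.random((R, R)), rng.random((R, R))
A_s, A_f = (A_s + A_s.T) / 2, (A_f + A_f.T) / 2
X = rng.standard_normal((2 * R, F))
H = gcn_layer(joint_adjacency(A_s, A_f, np.full(R, 0.5)), X, rng.standard_normal((F, 8)))
print(H.shape)  # (180, 8)
```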
2009.04412
Lana Garmire
Di Wang, Kevin He, Lana X Garmire
Cox-nnet v2.0: improved neural-network based survival prediction extended to large-scale EMR dataset
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Cox-nnet is a neural-network-based prognosis prediction method, originally applied to genomics data. Here we propose version 2 of Cox-nnet, with significant improvements in efficiency and interpretability, making it suitable for predicting prognosis from large-scale electronic medical records (EMR) datasets. We also add permutation-based feature importance scores and the direction of feature coefficients. Applied to an EMR dataset of OPTN kidney transplantation, Cox-nnet v2.0 reduces the training time of Cox-nnet by up to 32-fold (n=10,000) and achieves better prediction accuracy than Cox-PH (p<0.05). Availability and implementation: Cox-nnet v2.0 is freely available to the public at https://github.com/lanagarmire/Cox-nnet-v2.0
[ { "created": "Wed, 9 Sep 2020 16:44:48 GMT", "version": "v1" } ]
2020-09-10
[ [ "Wang", "Di", "" ], [ "He", "Kevin", "" ], [ "Garmire", "Lana X", "" ] ]
Cox-nnet is a neural-network-based prognosis prediction method, originally applied to genomics data. Here we propose version 2 of Cox-nnet, with significant improvements in efficiency and interpretability, making it suitable for predicting prognosis from large-scale electronic medical records (EMR) datasets. We also add permutation-based feature importance scores and the direction of feature coefficients. Applied to an EMR dataset of OPTN kidney transplantation, Cox-nnet v2.0 reduces the training time of Cox-nnet by up to 32-fold (n=10,000) and achieves better prediction accuracy than Cox-PH (p<0.05). Availability and implementation: Cox-nnet v2.0 is freely available to the public at https://github.com/lanagarmire/Cox-nnet-v2.0
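The permutation-based feature importance mentioned above can be sketched generically: score the fitted survival model once, then re-score it with each feature column shuffled and report the drop in the score (e.g., a concordance index). The `score_fn`, the toy linear risk model, and the data shapes are placeholders; Cox-nnet's own API is not assumed here.

```python
import numpy as np
from typing import Callable

def permutation_importance(
    score_fn: Callable[[np.ndarray], float],  # e.g. concordance of model predictions on X
    X: np.ndarray,
    n_repeats: int = 5,
    seed: int = 0,
) -> np.ndarray:
    """Mean drop in score when each feature is shuffled; larger drop = more important."""
    rng = np.random.default_rng(seed)
    baseline = score_fn(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the association of feature j with outcome
            drops.append(baseline - score_fn(Xp))
        importances[j] = np.mean(drops)
    return importances

# Hypothetical usage with a toy linear risk score standing in for the network.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))
risk_truth = X[:, 0] * 2.0 + X[:, 2] * 0.5
toy_score = lambda Z: float(np.corrcoef(Z[:, 0] * 2.0 + Z[:, 2] * 0.5, risk_truth)[0, 1])
print(np.round(permutation_importance(toy_score, X), 3))
```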
1607.03063
Peter Foster
Bryan Kaye, Peter J. Foster, Tae Yeon Yoo, Daniel J. Needleman
Developing and Testing a Bayesian Analysis of Fluorescence Lifetime Measurements
* These authors contributed equally to this work
PLoS ONE (2017), 12(1): e0169337
10.1371/journal.pone.0169337
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
FRET measurements can provide dynamic spatial information on length scales smaller than the diffraction limit of light. Several methods exist to measure FRET between fluorophores, including Fluorescence Lifetime Imaging Microscopy (FLIM), which relies on the reduction of fluorescence lifetime when a fluorophore is undergoing FRET. FLIM measurements take the form of histograms of photon arrival times, containing contributions from a mixed population of fluorophores both undergoing and not undergoing FRET, with the measured distribution being a mixture of exponentials of different lifetimes. Here, we present an analysis method based on Bayesian inference that rigorously takes into account several experimental complications. We test the precision and accuracy of our analysis on controlled experimental data and verify that we can faithfully extract model parameters, both in the low-photon and low-fraction regimes.
[ { "created": "Mon, 11 Jul 2016 18:09:23 GMT", "version": "v1" } ]
2017-01-09
[ [ "Kaye", "Bryan", "" ], [ "Foster", "Peter J.", "" ], [ "Yoo", "Tae Yeon", "" ], [ "Needleman", "Daniel J.", "" ] ]
FRET measurements can provide dynamic spatial information on length scales smaller than the diffraction limit of light. Several methods exist to measure FRET between fluorophores, including Fluorescence Lifetime Imaging Microscopy (FLIM), which relies on the reduction of fluorescence lifetime when a fluorophore is undergoing FRET. FLIM measurements take the form of histograms of photon arrival times, containing contributions from a mixed population of fluorophores both undergoing and not undergoing FRET, with the measured distribution being a mixture of exponentials of different lifetimes. Here, we present an analysis method based on Bayesian inference that rigorously takes into account several experimental complications. We test the precision and accuracy of our analysis on controlled experimental data and verify that we can faithfully extract model parameters, both in the low-photon and low-fraction regimes.
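As a simplified illustration of the model class described above, the snippet below writes the log-likelihood of photon arrival times drawn from a two-component exponential mixture (a FRET and a non-FRET lifetime) and fits the FRET fraction by a coarse grid search. Real FLIM analysis must also handle the instrument response function, the excitation repetition period, and background photons, which are deliberately omitted; the lifetimes and names here are illustrative assumptions.

```python
import numpy as np

def log_likelihood(times: np.ndarray, frac: float, tau_fret: float, tau_free: float) -> float:
    """Log-likelihood of arrival times under frac*Exp(tau_fret) + (1-frac)*Exp(tau_free)."""
    pdf = frac * np.exp(-times / tau_fret) / tau_fret + \
          (1.0 - frac) * np.exp(-times / tau_free) / tau_free
    return float(np.sum(np.log(pdf + 1e-300)))

# Simulate photons: 30% of fluorophores undergoing FRET (shorter lifetime).
rng = np.random.default_rng(0)
tau_fret, tau_free, true_frac, n = 1.0, 3.0, 0.3, 20_000
is_fret = rng.random(n) < true_frac
times = np.where(is_fret, rng.exponential(tau_fret, n), rng.exponential(tau_free, n))

# Coarse grid search over the FRET fraction (lifetimes assumed known here).
grid = np.linspace(0.0, 1.0, 101)
best = grid[np.argmax([log_likelihood(times, f, tau_fret, tau_free) for f in grid])]
print("estimated FRET fraction:", best)
```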
1502.05029
Jason Perlmutter
J.D. Perlmutter, M.F. Hagan
The role of packaging sites in efficient and specific virus assembly
null
null
null
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the lifecycle of many single-stranded RNA viruses, including many human pathogens, a protein shell called the capsid spontaneously assembles around the viral genome. Understanding the mechanisms by which capsid proteins selectively assemble around the viral RNA amidst diverse host RNAs is a key question in virology. In one proposed mechanism, sequence elements (packaging sites) within the genomic RNA promote rapid and efficient assembly through specific interactions with the capsid proteins. In this work we develop a coarse-grained particle-based computational model for capsid proteins and RNA which represents protein-RNA interactions arising both from non-specific electrostatics and specific packaging-site interactions. Using Brownian dynamics simulations, we explore how the efficiency and specificity of assembly depend on solution conditions (which control protein-protein and nonspecific protein-RNA interactions) as well as the strength and number of packaging sites. We identify distinct regions in parameter space in which packaging sites lead to highly specific assembly via different mechanisms, and others in which packaging sites lead to kinetic traps. We relate these computational predictions to in vitro assays for specificity in which cognate viral RNAs compete against non-cognate RNAs for assembly by capsid proteins.
[ { "created": "Tue, 17 Feb 2015 20:46:27 GMT", "version": "v1" } ]
2015-02-19
[ [ "Perlmutter", "J. D.", "" ], [ "Hagan", "M. F.", "" ] ]
During the lifecycle of many single-stranded RNA viruses, including many human pathogens, a protein shell called the capsid spontaneously assembles around the viral genome. Understanding the mechanisms by which capsid proteins selectively assemble around the viral RNA amidst diverse host RNAs is a key question in virology. In one proposed mechanism, sequence elements (packaging sites) within the genomic RNA promote rapid and efficient assembly through specific interactions with the capsid proteins. In this work we develop a coarse-grained particle-based computational model for capsid proteins and RNA which represents protein-RNA interactions arising both from non-specific electrostatics and specific packaging-site interactions. Using Brownian dynamics simulations, we explore how the efficiency and specificity of assembly depend on solution conditions (which control protein-protein and nonspecific protein-RNA interactions) as well as the strength and number of packaging sites. We identify distinct regions in parameter space in which packaging sites lead to highly specific assembly via different mechanisms, and others in which packaging sites lead to kinetic traps. We relate these computational predictions to in vitro assays for specificity in which cognate viral RNAs compete against non-cognate RNAs for assembly by capsid proteins.
2205.15581
Armin Thomas
Armin W. Thomas and Christopher R\'e and Russell A. Poldrack
Comparing interpretation methods in mental state decoding analyses with deep learning models
27 pages, 5 main figures
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Deep learning (DL) models find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (e.g., perceiving fear or joy) and brain activity by identifying those brain regions (and networks) whose activity allows these states to be accurately identified (i.e., decoded). Once a DL model has been trained to accurately decode a set of mental states, neuroimaging researchers often make use of interpretation methods from explainable artificial intelligence research to understand the model's learned mappings between mental states and brain activity. Here, we compare the explanation performance of prominent interpretation methods in a mental state decoding analysis of three functional Magnetic Resonance Imaging (fMRI) datasets. Our findings demonstrate a gradient between two key characteristics of an explanation in mental state decoding, namely, its biological plausibility and faithfulness: interpretation methods with high explanation faithfulness, which capture the model's decision process well, generally provide explanations that are biologically less plausible than the explanations of interpretation methods with less explanation faithfulness. Based on this finding, we provide specific recommendations for the application of interpretation methods in mental state decoding.
[ { "created": "Tue, 31 May 2022 07:43:02 GMT", "version": "v1" }, { "created": "Fri, 14 Oct 2022 15:43:36 GMT", "version": "v2" } ]
2022-10-17
[ [ "Thomas", "Armin W.", "" ], [ "Ré", "Christopher", "" ], [ "Poldrack", "Russell A.", "" ] ]
Deep learning (DL) models find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (e.g., perceiving fear or joy) and brain activity by identifying those brain regions (and networks) whose activity allows these states to be accurately identified (i.e., decoded). Once a DL model has been trained to accurately decode a set of mental states, neuroimaging researchers often make use of interpretation methods from explainable artificial intelligence research to understand the model's learned mappings between mental states and brain activity. Here, we compare the explanation performance of prominent interpretation methods in a mental state decoding analysis of three functional Magnetic Resonance Imaging (fMRI) datasets. Our findings demonstrate a gradient between two key characteristics of an explanation in mental state decoding, namely, its biological plausibility and faithfulness: interpretation methods with high explanation faithfulness, which capture the model's decision process well, generally provide explanations that are biologically less plausible than the explanations of interpretation methods with less explanation faithfulness. Based on this finding, we provide specific recommendations for the application of interpretation methods in mental state decoding.
2206.09516
Michael Stumpf
Sean T. Vittadello and Michael P.H. Stumpf
Open Problems in Mathematical Biology
31 pages,2 figures, 115 references
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
Biology is data-rich, and it is equally rich in concepts and hypotheses. Part of trying to understand biological processes and systems is therefore to confront our ideas and hypotheses with data using statistical methods to determine the extent to which our hypotheses agree with reality. But doing so in a systematic way is becoming increasingly challenging as our hypotheses become more detailed, and our data becomes more complex. Mathematical methods are therefore gaining in importance across the life- and biomedical sciences. Mathematical models allow us to test our understanding, make testable predictions about future behaviour, and gain insights into how we can control the behaviour of biological systems. It has been argued that mathematical methods can be of great benefit to biologists to make sense of data. But mathematics and mathematicians are set to benefit equally from considering the often bewildering complexity inherent to living systems. Here we present a small selection of open problems and challenges in mathematical biology. We have chosen these open problems because they are of both biological and mathematical interest.
[ { "created": "Mon, 20 Jun 2022 00:31:27 GMT", "version": "v1" } ]
2022-06-22
[ [ "Vittadello", "Sean T.", "" ], [ "Stumpf", "Michael P. H.", "" ] ]
Biology is data-rich, and it is equally rich in concepts and hypotheses. Part of trying to understand biological processes and systems is therefore to confront our ideas and hypotheses with data using statistical methods to determine the extent to which our hypotheses agree with reality. But doing so in a systematic way is becoming increasingly challenging as our hypotheses become more detailed, and our data becomes more complex. Mathematical methods are therefore gaining in importance across the life- and biomedical sciences. Mathematical models allow us to test our understanding, make testable predictions about future behaviour, and gain insights into how we can control the behaviour of biological systems. It has been argued that mathematical methods can be of great benefit to biologists to make sense of data. But mathematics and mathematicians are set to benefit equally from considering the often bewildering complexity inherent to living systems. Here we present a small selection of open problems and challenges in mathematical biology. We have chosen these open problems because they are of both biological and mathematical interest.
1602.00579
Antonio Galves
A. Duarte, R. Fraiman, A. Galves, G. Ost and C. Vargas
Context tree selection for functional data
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
It has been repeatedly conjectured that the brain retrieves statistical regularities from stimuli. Here we present a new statistical approach that allows us to address this conjecture. This approach is based on a new class of stochastic processes driven by chains with memory of variable length. It leads to a new experimental protocol in which sequences of auditory stimuli generated by a stochastic chain are presented to volunteers while electroencephalographic (EEG) data is recorded from their scalp. A new statistical model selection procedure for functional data is introduced and proved to be consistent. Applied to samples of EEG data collected using our experimental protocol, it produces results supporting the conjecture that the brain effectively identifies the structure of the chain generating the sequence of stimuli.
[ { "created": "Mon, 1 Feb 2016 16:18:00 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2017 20:32:37 GMT", "version": "v2" }, { "created": "Thu, 11 Jan 2018 21:47:24 GMT", "version": "v3" } ]
2018-01-15
[ [ "Duarte", "A.", "" ], [ "Fraiman", "R.", "" ], [ "Galves", "A.", "" ], [ "Ost", "G.", "" ], [ "Vargas", "C.", "" ] ]
It has been repeatedly conjectured that the brain retrieves statistical regularities from stimuli. Here we present a new statistical approach that allows us to address this conjecture. This approach is based on a new class of stochastic processes driven by chains with memory of variable length. It leads to a new experimental protocol in which sequences of auditory stimuli generated by a stochastic chain are presented to volunteers while electroencephalographic (EEG) data is recorded from their scalp. A new statistical model selection procedure for functional data is introduced and proved to be consistent. Applied to samples of EEG data collected using our experimental protocol, it produces results supporting the conjecture that the brain effectively identifies the structure of the chain generating the sequence of stimuli.
2211.03515
Amandine Veber
N.H. Barton, A.M. Etheridge and A. V\'eber
The infinitesimal model with dominance
68 pages, 9 figures. To appear in Genetics
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
The classical infinitesimal model is a simple and robust model for the inheritance of quantitative traits. In this model, a quantitative trait is expressed as the sum of a genetic and a non-genetic (environmental) component and the genetic component of offspring traits within a family follows a normal distribution around the average of the parents' trait values, and has a variance that is independent of the trait values of the parents. In previous work (Barton et al., 2017), we showed that when trait values are determined by the sum of a large number of Mendelian factors, each of small effect, one can justify the infinitesimal model as a limit of Mendelian inheritance. In this paper, we show that the robustness of the infinitesimal model extends to include dominance. We define the model in terms of classical quantities of quantitative genetics, before justifying it as a limit of Mendelian inheritance as the number, M, of underlying loci tends to infinity. As in the additive case, the multivariate normal distribution of trait values across the pedigree can be expressed in terms of variance components in an ancestral population and probabilities of identity by descent determined by the pedigree. In this setting, it is natural to decompose trait values, not just into the additive and dominance components, but into a component that is shared by all individuals within the family and an independent `residual' for each offspring, which captures the randomness of Mendelian inheritance. We show that, even if we condition on parental trait values, both the shared component and the residuals within each family will be asymptotically normally distributed as the number of loci tends to infinity, with an error of order $1/\sqrt{M}$. We illustrate our results with some numerical examples.
[ { "created": "Mon, 31 Oct 2022 22:00:44 GMT", "version": "v1" }, { "created": "Fri, 19 May 2023 08:49:52 GMT", "version": "v2" }, { "created": "Wed, 28 Jun 2023 12:35:22 GMT", "version": "v3" } ]
2023-06-29
[ [ "Barton", "N. H.", "" ], [ "Etheridge", "A. M.", "" ], [ "Véber", "A.", "" ] ]
The classical infinitesimal model is a simple and robust model for the inheritance of quantitative traits. In this model, a quantitative trait is expressed as the sum of a genetic and a non-genetic (environmental) component and the genetic component of offspring traits within a family follows a normal distribution around the average of the parents' trait values, and has a variance that is independent of the trait values of the parents. In previous work (Barton et al., 2017), we showed that when trait values are determined by the sum of a large number of Mendelian factors, each of small effect, one can justify the infinitesimal model as a limit of Mendelian inheritance. In this paper, we show that the robustness of the infinitesimal model extends to include dominance. We define the model in terms of classical quantities of quantitative genetics, before justifying it as a limit of Mendelian inheritance as the number, M, of underlying loci tends to infinity. As in the additive case, the multivariate normal distribution of trait values across the pedigree can be expressed in terms of variance components in an ancestral population and probabilities of identity by descent determined by the pedigree. In this setting, it is natural to decompose trait values, not just into the additive and dominance components, but into a component that is shared by all individuals within the family and an independent `residual' for each offspring, which captures the randomness of Mendelian inheritance. We show that, even if we condition on parental trait values, both the shared component and the residuals within each family will be asymptotically normally distributed as the number of loci tends to infinity, with an error of order $1/\sqrt{M}$. We illustrate our results with some numerical examples.
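The core of the additive infinitesimal model summarised above is easy to simulate: each offspring's genetic value is the parental midpoint plus an independent normal segregation residual whose variance does not depend on the parents' trait values. The sketch below does exactly that for the additive case and only notes in a comment where a family-shared dominance component would enter; the variance values and function names are illustrative assumptions, and the paper's dominance machinery is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def offspring_traits(z_mother: float, z_father: float, n_kids: int,
                     v_seg: float = 0.5, v_env: float = 0.2) -> np.ndarray:
    """Additive infinitesimal model: genetic value = parental midpoint
    + N(0, v_seg) Mendelian residual, plus an environmental deviation.
    (With dominance, a family-shared normal component would be added too.)"""
    mid = 0.5 * (z_mother + z_father)
    genetic = mid + rng.normal(0.0, np.sqrt(v_seg), n_kids)
    return genetic + rng.normal(0.0, np.sqrt(v_env), n_kids)

# Two families: the within-family variance is independent of the parents' values.
kids_a = offspring_traits(0.0, 0.0, 100_000)
kids_b = offspring_traits(5.0, 9.0, 100_000)
print(np.var(kids_a).round(3), np.var(kids_b).round(3))  # both close to v_seg + v_env
```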
2308.13038
Patrick Diaba-Nuhoho
Patrick Diaba-Nuhoho
The role of PHF5A in cancer: A review and update
18 pages, 1 figure, 2 tables
Biomedicine & Pharmacotherapy 169 (2023): 115857
10.1016/j.biopha.2023.115857
PMID: 37951028
q-bio.TO q-bio.CB q-bio.MN
http://creativecommons.org/licenses/by/4.0/
PHF5A is a member of the zinc-finger protein family. To advance knowledge of its role in carcinogenesis, data from experimental studies, animal models and clinical studies across different tumor types have been reviewed. Furthermore, PHF5A has an oncogenic function: it is frequently expressed in tumor cells and is a potential prognostic marker for different cancers. PHF5A is implicated in the regulation of cancer cell proliferation, invasion, migration and metastasis. Knockdown of PHF5A prevented the invasion and metastasis of tumor cells. Here, the role of PHF5A in different cancers and its possible mechanisms are reviewed and discussed in relation to the recent literature. However, a promising perspective remains open for its use in the therapeutic management of different cancer types.
[ { "created": "Thu, 24 Aug 2023 19:05:37 GMT", "version": "v1" } ]
2023-11-22
[ [ "Diaba-Nuhoho", "Patrick", "" ] ]
PHF5A is a member of the zinc-finger protein family. To advance knowledge of its role in carcinogenesis, data from experimental studies, animal models and clinical studies across different tumor types have been reviewed. Furthermore, PHF5A has an oncogenic function: it is frequently expressed in tumor cells and is a potential prognostic marker for different cancers. PHF5A is implicated in the regulation of cancer cell proliferation, invasion, migration and metastasis. Knockdown of PHF5A prevented the invasion and metastasis of tumor cells. Here, the role of PHF5A in different cancers and its possible mechanisms are reviewed and discussed in relation to the recent literature. However, a promising perspective remains open for its use in the therapeutic management of different cancer types.
2208.04944
Fan Hu
Fan Hu, Dongqi Wang, Huazhen Huang, Yishen Hu and Peng Yin
Bridging the gap between target-based and cell-based drug discovery with a graph generative multi-task model
null
Journal of Chemical Information and Modeling, 2022
10.1021/acs.jcim.2c01180
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drug discovery is vitally important for protecting humans against disease. Target-based screening has been one of the most popular methods for developing new drugs over the past several decades. This method efficiently screens candidate drugs that inhibit a target protein in vitro, but it often fails due to inadequate activity of the selected drugs in vivo. Accurate computational methods are needed to bridge this gap. Here, we propose a novel graph-based multi-task deep learning model to identify compounds carrying both target-inhibitory and cell-active (MATIC) properties. On a carefully curated SARS-CoV-2 dataset, the proposed MATIC model shows advantages compared with a traditional method in screening for compounds effective in vivo. Next, we explored the model's interpretability and found that the learned features for the target-inhibition (in vitro) and cell-activity (in vivo) tasks differ in their molecular property correlations and atom-level functional attentions. Based on these findings, we utilized a Monte Carlo-based reinforcement learning generative model to generate novel multi-property compounds with both in vitro and in vivo efficacy, thus bridging the gap between target-based and cell-based drug discovery.
[ { "created": "Tue, 9 Aug 2022 02:35:42 GMT", "version": "v1" } ]
2022-11-22
[ [ "Hu", "Fan", "" ], [ "Wang", "Dongqi", "" ], [ "Huang", "Huazhen", "" ], [ "Hu", "Yishen", "" ], [ "Yin", "Peng", "" ] ]
Drug discovery is vitally important for protecting humans against disease. Target-based screening has been one of the most popular methods for developing new drugs over the past several decades. This method efficiently screens candidate drugs that inhibit a target protein in vitro, but it often fails due to inadequate activity of the selected drugs in vivo. Accurate computational methods are needed to bridge this gap. Here, we propose a novel graph-based multi-task deep learning model to identify compounds carrying both target-inhibitory and cell-active (MATIC) properties. On a carefully curated SARS-CoV-2 dataset, the proposed MATIC model shows advantages compared with a traditional method in screening for compounds effective in vivo. Next, we explored the model's interpretability and found that the learned features for the target-inhibition (in vitro) and cell-activity (in vivo) tasks differ in their molecular property correlations and atom-level functional attentions. Based on these findings, we utilized a Monte Carlo-based reinforcement learning generative model to generate novel multi-property compounds with both in vitro and in vivo efficacy, thus bridging the gap between target-based and cell-based drug discovery.
q-bio/0506032
Eugene Shakhnovich
Igor N. Berezovsky, William W. Chen, Paul J. Choi and Eugene I. Shakhnovich
Entropic stabilization of proteins and its proteomic consequences
null
null
10.1371/journal.pcbi.0010047
null
q-bio.BM cond-mat.soft physics.bio-ph q-bio.GN
null
We report here a new entropic mechanism of protein thermostability due to residual dynamics of rotamer isomerization in the native state. All-atom simulations show that Lysines have a much greater number of accessible rotamers than Arginines in the folded states of proteins. This finding suggests that Lysines would preferentially entropically stabilize the native state. Indeed, we show in computational experiments that Arginine-to-Lysine amino acid substitutions result in noticeable stabilization of proteins. We then hypothesize that if evolution uses this physical mechanism in its strategies of thermophilic adaptation, then hyperthermostable organisms would have a much greater content of Lysines in their proteomes than of the comparably sized and similarly charged Arginines. Consistent with that, a high-throughput comparative analysis of complete proteomes shows an extremely strong bias towards Arginine-to-Lysine replacement in hyperthermophilic organisms and an overall much greater content of Lysines than Arginines in hyperthermophiles. This finding cannot be explained by GC compositional biases. Our study provides an example of how analysis of a delicate physical mechanism of thermostability helps to resolve a puzzle in comparative genomics as to why the amino acid compositions of hyperthermophilic proteomes are significantly biased towards Lysines but not Arginines.
[ { "created": "Wed, 22 Jun 2005 01:49:50 GMT", "version": "v1" } ]
2015-06-26
[ [ "Berezovsky", "Igor N.", "" ], [ "Chen", "William W.", "" ], [ "Choi", "Paul J.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
We report here a new entropic mechanism of protein thermostability due to residual dynamics of rotamer isomerization in the native state. All-atom simulations show that Lysines have a much greater number of accessible rotamers than Arginines in the folded states of proteins. This finding suggests that Lysines would preferentially entropically stabilize the native state. Indeed, we show in computational experiments that Arginine-to-Lysine amino acid substitutions result in noticeable stabilization of proteins. We then hypothesize that if evolution uses this physical mechanism in its strategies of thermophilic adaptation, then hyperthermostable organisms would have a much greater content of Lysines in their proteomes than of the comparably sized and similarly charged Arginines. Consistent with that, a high-throughput comparative analysis of complete proteomes shows an extremely strong bias towards Arginine-to-Lysine replacement in hyperthermophilic organisms and an overall much greater content of Lysines than Arginines in hyperthermophiles. This finding cannot be explained by GC compositional biases. Our study provides an example of how analysis of a delicate physical mechanism of thermostability helps to resolve a puzzle in comparative genomics as to why the amino acid compositions of hyperthermophilic proteomes are significantly biased towards Lysines but not Arginines.
1408.0472
Tom Michoel
Eric Bonnet, Laurence Calzone, Tom Michoel
Integrative multi-omics module network inference with Lemon-Tree
minor revision; 13 pages text + 4 figures + 4 tables + 4 pages supplementary methods; supplementary tables available from the authors
PLoS Comput Biol 11(2): e1003983 (2015)
10.1371/journal.pcbi.1003983
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Module network inference is an established statistical method to reconstruct co-expression modules and their upstream regulatory programs from integrated multi-omics datasets measuring the activity levels of various cellular components across different individuals, experimental conditions or time points of a dynamic process. We have developed Lemon-Tree, an open-source, platform-independent, modular, extensible software package implementing state-of-the-art ensemble methods for module network inference. We benchmarked Lemon-Tree using large-scale tumor datasets and showed that Lemon-Tree algorithms compare favorably with state-of-the-art module network inference software. We also analyzed a large dataset of somatic copy-number alterations and gene expression levels measured in glioblastoma samples from The Cancer Genome Atlas and found that Lemon-Tree correctly identifies known glioblastoma oncogenes and tumor suppressors as master regulators in the inferred module network. Novel candidate driver genes predicted by Lemon-Tree were validated using tumor pathway and survival analyses. Lemon-Tree is available from http://lemon-tree.googlecode.com under the GNU General Public License version 2.0.
[ { "created": "Sun, 3 Aug 2014 08:39:37 GMT", "version": "v1" }, { "created": "Tue, 14 Oct 2014 15:18:30 GMT", "version": "v2" } ]
2015-05-20
[ [ "Bonnet", "Eric", "" ], [ "Calzone", "Laurence", "" ], [ "Michoel", "Tom", "" ] ]
Module network inference is an established statistical method to reconstruct co-expression modules and their upstream regulatory programs from integrated multi-omics datasets measuring the activity levels of various cellular components across different individuals, experimental conditions or time points of a dynamic process. We have developed Lemon-Tree, an open-source, platform-independent, modular, extensible software package implementing state-of-the-art ensemble methods for module network inference. We benchmarked Lemon-Tree using large-scale tumor datasets and showed that Lemon-Tree algorithms compare favorably with state-of-the-art module network inference software. We also analyzed a large dataset of somatic copy-number alterations and gene expression levels measured in glioblastoma samples from The Cancer Genome Atlas and found that Lemon-Tree correctly identifies known glioblastoma oncogenes and tumor suppressors as master regulators in the inferred module network. Novel candidate driver genes predicted by Lemon-Tree were validated using tumor pathway and survival analyses. Lemon-Tree is available from http://lemon-tree.googlecode.com under the GNU General Public License version 2.0.
2305.09884
Syeda Khadija Zaidi
Khadija F. Zaidi and Michelle Harris-Love
A Novel Procrustes Analysis Method to Quantify Multi-Joint Coordination of the Upper Extremity after Stroke
Accepted Paper - 45th Annual IEEE Engineering in Medicine and Biology Conference
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Upper extremity motor impairment affects about 80\% of persons after strokes. For stroke rehabilitation, upper limb kinematic assessments have increasingly been used as primary or secondary outcome measures. Studying the upper extremity provides a valuable tool for assessing limb coordination, mal-adaptations, and recovery. There is currently no universal standardized scale for categorizing multi-joint upper extremity movement. We propose a modified Procrustes statistical shape method as a quantitative analysis that can recognize segments of movement where multiple limb segments are coordinating movement. Generalized Procrustes methods allow data points to be compared across an array simultaneously rather than comparing them in pairs. Rather than rely solely on discrete kinematic values to contrast movement, this method allows evaluation of how movement progresses. The Procrustes analysis of able-bodied movement showed that the hand and forearm segments moved in a more coordinated manner during initiation. The shoulder and elbow become more coordinated during movement completion. In impaired movement, this coordination between the hand and forearm is disrupted. Potentially mal-adaptive compensation occurs between the upper arm and forearm after movement enters the deceleration phase. The utilization of Procrustes analysis may be a step towards developing a comprehensive and universal quantitative tool that does not require changes to existing treatments or increase patient burden. Copyright 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
[ { "created": "Wed, 17 May 2023 01:42:32 GMT", "version": "v1" } ]
2023-05-18
[ [ "Zaidi", "Khadija F.", "" ], [ "Harris-Love", "Michelle", "" ] ]
Upper extremity motor impairment affects about 80\% of persons after strokes. For stroke rehabilitation, upper limb kinematic assessments have increasingly been used as primary or secondary outcome measures. Studying the upper extremity provides a valuable tool for assessing limb coordination, mal-adaptations, and recovery. There is currently no universal standardized scale for categorizing multi-joint upper extremity movement. We propose a modified Procrustes statistical shape method as a quantitative analysis that can recognize segments of movement where multiple limb segments are coordinating movement. Generalized Procrustes methods allow data points to be compared across an array simultaneously rather than comparing them in pairs. Rather than rely solely on discrete kinematic values to contrast movement, this method allows evaluation of how movement progresses. The Procrustes analysis of able-bodied movement showed that the hand and forearm segments moved in a more coordinated manner during initiation. The shoulder and elbow become more coordinated during movement completion. In impaired movement, this coordination between the hand and forearm is disrupted. Potentially mal-adaptive compensation occurs between the upper arm and forearm after movement enters the deceleration phase. The utilization of Procrustes analysis may be a step towards developing a comprehensive and universal quantitative tool that does not require changes to existing treatments or increase patient burden. Copyright 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
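A rough sketch of the generalized Procrustes idea invoked above: iteratively align every trial's landmark configuration to the current mean shape, then update the mean, so that all configurations are compared simultaneously rather than in pairs. It uses SciPy's pairwise `procrustes` helper as the alignment step; the data layout, landmark count, and convergence choice are assumptions, and the paper's modified procedure is not reproduced.

```python
import numpy as np
from scipy.spatial import procrustes

def generalized_procrustes(configs: np.ndarray, n_iter: int = 10) -> np.ndarray:
    """Align a stack of landmark configurations (trials x points x dims)
    to a common mean shape by repeated pairwise Procrustes fits."""
    aligned = configs.astype(float)
    for _ in range(n_iter):
        mean_shape = aligned.mean(axis=0)
        for i in range(len(aligned)):
            # procrustes returns (standardized reference, transformed config, disparity)
            _, aligned[i], _ = procrustes(mean_shape, aligned[i])
    return aligned

# Hypothetical usage: 20 reaching trials, 4 upper-limb landmarks in 3D.
rng = np.random.default_rng(0)
base = rng.standard_normal((4, 3))
trials = np.stack([base * rng.uniform(0.5, 2.0) + rng.normal(0, 0.05, (4, 3))
                   for _ in range(20)])
aligned = generalized_procrustes(trials)
print("residual spread after alignment:", float(aligned.std(axis=0).mean()))
```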
1106.4783
Renato Vicente
Roberto H. Schonmann, Renato Vicente, Nestor Caticha
Two-level Fisher-Wright framework with selection and migration: An approach to studying evolution in group structured populations
Complete abstract in the paper. 71 pages, 20 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A framework for the mathematical modeling of evolution in group structured populations is introduced. The population is divided into a fixed large number of groups of fixed size. From generation to generation, new groups are formed that descend from previous groups, through a two-level Fisher-Wright process, with selection between groups and within groups and with migration between groups at rate $m$. When $m=1$, the framework reduces to the often used trait-group framework, so that our setting can be seen as an extension of that approach. Our framework allows the analysis of previously introduced models in which altruists and non-altruists compete, and provides new insights into these models. We focus on the situation in which initially there is a single altruistic allele in the population, and no further mutations occur. The main questions are conditions for the viability of that altruistic allele to spread, and the fashion in which it spreads when it does. Because our results and methods are rigorous, we see them as shedding light on various controversial issues in this field, including the role of Hamilton's rule, and of the Price equation, the relevance of linearity in fitness functions and the need to only consider pairwise interactions, or weak selection. In this paper we analyze the early stages of the evolution, during which the number of altruists is small compared to the size of the population. We show that during this stage the evolution is well described by a multitype branching process. The driving matrix for this process can be obtained, reducing the problem of determining when the altruistic gene is viable to a comparison between the leading eigenvalue of that matrix, and the fitness of the non-altruists before the altruistic gene appeared. This leads to a generalization of Hamilton's condition for the viability of a mutant gene.
[ { "created": "Thu, 23 Jun 2011 17:45:41 GMT", "version": "v1" } ]
2011-06-24
[ [ "Schonmann", "Roberto H.", "" ], [ "Vicente", "Renato", "" ], [ "Caticha", "Nestor", "" ] ]
A framework for the mathematical modeling of evolution in group structured populations is introduced. The population is divided into a fixed large number of groups of fixed size. From generation to generation, new groups are formed that descend from previous groups, through a two-level Fisher-Wright process, with selection between groups and within groups and with migration between groups at rate $m$. When $m=1$, the framework reduces to the often used trait-group framework, so that our setting can be seen as an extension of that approach. Our framework allows the analysis of previously introduced models in which altruists and non-altruists compete, and provides new insights into these models. We focus on the situation in which initially there is a single altruistic allele in the population, and no further mutations occur. The main questions are conditions for the viability of that altruistic allele to spread, and the fashion in which it spreads when it does. Because our results and methods are rigorous, we see them as shedding light on various controversial issues in this field, including the role of Hamilton's rule, and of the Price equation, the relevance of linearity in fitness functions and the need to only consider pairwise interactions, or weak selection. In this paper we analyze the early stages of the evolution, during which the number of altruists is small compared to the size of the population. We show that during this stage the evolution is well described by a multitype branching process. The driving matrix for this process can be obtained, reducing the problem of determining when the altruistic gene is viable to a comparison between the leading eigenvalue of that matrix, and the fitness of the non-altruists before the altruistic gene appeared. This leads to a generalization of Hamilton's condition for the viability of a mutant gene.
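Editor's illustration (not from the paper): the viability criterion described above compares the leading eigenvalue of the branching process's driving matrix with the fitness of non-altruists before the mutant appeared. A minimal NumPy sketch of that comparison follows; the matrix entries and the baseline fitness of 1.0 are made-up illustrative values, not quantities derived in the paper.

```python
import numpy as np

def altruist_viable(mean_offspring_matrix, baseline_fitness):
    """Return (viable, growth_rate): viable if the leading eigenvalue of the
    branching-process mean matrix exceeds the non-altruists' fitness."""
    eigenvalues = np.linalg.eigvals(mean_offspring_matrix)
    leading = eigenvalues.real.max()  # Perron root of a non-negative matrix is real
    return leading > baseline_fitness, leading

# Toy driving matrix: entry (i, j) is the expected number of type-j descendants
# of a type-i lineage per generation (illustrative numbers only).
M = np.array([[0.6, 0.5],
              [0.3, 0.9]])
viable, growth_rate = altruist_viable(M, baseline_fitness=1.0)
print(viable, growth_rate)
```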
2005.04342
Demetrius DiMucci
Demetrius DiMucci
JigSaw: A tool for discovering explanatory high-order interactions from random forests
15 pages 5 figures 7 tables
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning is revolutionizing biology by facilitating the prediction of outcomes from complex patterns found in massive data sets. Large biological data sets, like those generated by transcriptome or microbiome studies, measure many relevant components that interact in vivo with one another in modular ways. Identifying the high-order interactions that machine learning models use to make predictions would facilitate the development of hypotheses linking combinations of measured components to outcomes. By using the structure of random forests, a new algorithmic approach, termed JigSaw, was developed to aid in the discovery of patterns that could explain predictions made by the forest. By examining the patterns of individual decision trees, JigSaw identifies high-order interactions between measured features that are strongly associated with a particular outcome and identifies the relevant decision thresholds. JigSaw's effectiveness was tested in simulation studies, where it was able to recover multiple ground-truth patterns, even in the presence of significant noise. It was then used to find patterns associated with outcomes in two real-world data sets. It was first used to identify patterns of clinical measurements associated with heart disease. It was then used to find patterns associated with breast cancer using metabolites measured in the blood. In heart disease, JigSaw identified several three-way interactions that combine to explain most of the heart disease records (66%) with high precision (93%). In breast cancer, three two-way interactions were recovered that can be combined to explain almost all records (92%) with good precision (79%). JigSaw is an efficient method for exploring high-dimensional feature spaces for rules that explain statistical associations with a given outcome and can inspire the generation of testable hypotheses.
[ { "created": "Sat, 9 May 2020 01:53:45 GMT", "version": "v1" } ]
2020-05-12
[ [ "DiMucci", "Demetrius", "" ] ]
Machine learning is revolutionizing biology by facilitating the prediction of outcomes from complex patterns found in massive data sets. Large biological data sets, like those generated by transcriptome or microbiome studies, measure many relevant components that interact in vivo with one another in modular ways. Identifying the high-order interactions that machine learning models use to make predictions would facilitate the development of hypotheses linking combinations of measured components to outcomes. By using the structure of random forests, a new algorithmic approach, termed JigSaw, was developed to aid in the discovery of patterns that could explain predictions made by the forest. By examining the patterns of individual decision trees, JigSaw identifies high-order interactions between measured features that are strongly associated with a particular outcome and identifies the relevant decision thresholds. JigSaw's effectiveness was tested in simulation studies, where it was able to recover multiple ground-truth patterns, even in the presence of significant noise. It was then used to find patterns associated with outcomes in two real-world data sets. It was first used to identify patterns of clinical measurements associated with heart disease. It was then used to find patterns associated with breast cancer using metabolites measured in the blood. In heart disease, JigSaw identified several three-way interactions that combine to explain most of the heart disease records (66%) with high precision (93%). In breast cancer, three two-way interactions were recovered that can be combined to explain almost all records (92%) with good precision (79%). JigSaw is an efficient method for exploring high-dimensional feature spaces for rules that explain statistical associations with a given outcome and can inspire the generation of testable hypotheses.
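Editor's illustration (not from the paper): JigSaw's code is not reproduced here, but the underlying idea of mining feature/threshold combinations from the paths of individual decision trees can be sketched with scikit-learn's tree internals. The purity cutoff, forest settings, and dataset (scikit-learn's Wisconsin breast cancer data, not the metabolite data used in the paper) are assumptions.

```python
from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

def leaf_paths(tree, target_class=1, min_purity=0.9):
    """Yield the (feature, threshold, direction) rules along root-to-leaf paths
    whose leaves predominantly predict `target_class`."""
    t = tree.tree_
    def walk(node, path):
        if t.children_left[node] == t.children_right[node]:  # leaf node
            counts = t.value[node][0]
            if counts[target_class] / counts.sum() >= min_purity:
                yield tuple(path)
            return
        f, thr = t.feature[node], t.threshold[node]
        yield from walk(t.children_left[node], path + [(f, thr, "<=")])
        yield from walk(t.children_right[node], path + [(f, thr, ">")])
    yield from walk(0, [])

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0).fit(X, y)

# Count how often feature/direction combinations co-occur on high-purity paths,
# a crude stand-in for JigSaw's high-order interaction mining.
combos = Counter()
for est in forest.estimators_:
    for path in leaf_paths(est):
        combos[frozenset((f, d) for f, _, d in path)] += 1
print(combos.most_common(5))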
1505.03342
Drew Fudenberg
Drew Fudenberg, Philipp Strack, and Tomasz Strzalecki
Stochastic Choice and Optimal Sequential Sampling
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model the joint distribution of choice probabilities and decision times in binary choice tasks as the solution to a problem of optimal sequential sampling, where the agent is uncertain of the utility of each action and pays a constant cost per unit time for gathering information. In the resulting optimal policy, the agent's choices are more likely to be correct when the agent chooses to decide quickly, provided that the agent's prior beliefs are correct. For this reason it better matches the observed correlation between decision time and choice probability than does the classical drift-diffusion model, where the agent is uncertain which of two actions is best but knows the utility difference between them.
[ { "created": "Wed, 13 May 2015 11:54:11 GMT", "version": "v1" } ]
2015-05-14
[ [ "Fudenberg", "Drew", "" ], [ "Strack", "Philipp", "" ], [ "Strzalecki", "Tomasz", "" ] ]
We model the joint distribution of choice probabilities and decision times in binary choice tasks as the solution to a problem of optimal sequential sampling, where the agent is uncertain of the utility of each action and pays a constant cost per unit time for gathering information. In the resulting optimal policy, the agent's choices are more likely to be correct when the agent chooses to decide quickly, provided that the agent's prior beliefs are correct. For this reason it better matches the observed correlation between decision time and choice probability than does the classical drift-diffusion model, where the agent is uncertain which of two actions is best but knows the utility difference between them.
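Editor's illustration (not from the paper): under the optimal policy described above, quicker decisions tend to be more accurate, a pattern that a diffusion of evidence toward a collapsing decision boundary also produces. The NumPy sketch below simulates such a process and reports accuracy for fast versus slow trials; the exponential boundary, drift, and noise values are illustrative assumptions and do not reproduce the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift, dt=0.005, noise=1.0, b0=1.0, decay=0.5, t_max=5.0):
    """Diffuse evidence toward a collapsing boundary b(t) = b0 * exp(-decay * t).
    Returns (choice_was_correct, decision_time)."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        bound = b0 * np.exp(-decay * t)
        if abs(x) >= bound:
            return (x > 0) == (drift > 0), t
    return (x > 0) == (drift > 0), t_max

trials = [simulate_trial(drift=0.5) for _ in range(1000)]
correct = np.array([c for c, _ in trials])
times = np.array([t for _, t in trials])

# Fast decisions should be more accurate under a collapsing boundary.
median_t = np.median(times)
print("accuracy on fast trials:", correct[times <= median_t].mean())
print("accuracy on slow trials:", correct[times > median_t].mean())
```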
2307.09334
William Podlaski
William F. Podlaski, Christian K. Machens
Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks
Accepted in Neural Computation
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep feedforward and recurrent rate-based neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
[ { "created": "Tue, 18 Jul 2023 15:17:00 GMT", "version": "v1" }, { "created": "Mon, 31 Jul 2023 15:34:35 GMT", "version": "v2" }, { "created": "Thu, 28 Dec 2023 15:40:08 GMT", "version": "v3" } ]
2023-12-29
[ [ "Podlaski", "William F.", "" ], [ "Machens", "Christian K.", "" ] ]
Deep feedforward and recurrent rate-based neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
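Editor's illustration (not from the paper): the abstract states that the resulting networks compute a difference of two convex functions. The snippet below illustrates that mathematical claim only, with no spiking dynamics: a smooth nonlinearity is split into two convex parts, and each part is approximated by a max over tangent lines, loosely analogous to the latent boundaries formed by combined neuron thresholds. The target function, curvature bound, and anchor points are illustrative assumptions.

```python
import numpy as np

# Write f(x) = g(x) - h(x) with g, h convex, then approximate each convex part
# by a max over tangent lines (a piecewise-linear lower bound).
x = np.linspace(-2.0, 2.0, 401)
f = np.sin(2.0 * x)                          # target nonlinearity, |f''| <= 4
c = 4.0                                      # curvature bound makes both parts convex
g = lambda z: np.sin(2.0 * z) + 0.5 * c * z**2
gp = lambda z: 2.0 * np.cos(2.0 * z) + c * z  # derivative of g
h = lambda z: 0.5 * c * z**2
hp = lambda z: c * z

def max_affine(xgrid, fun, dfun, anchors):
    """Max over tangent lines of a convex function, evaluated on xgrid."""
    lines = fun(anchors)[:, None] + dfun(anchors)[:, None] * (xgrid[None, :] - anchors[:, None])
    return lines.max(axis=0)

anchors = np.linspace(-2.0, 2.0, 30)          # loosely plays the role of neuron thresholds
approx = max_affine(x, g, gp, anchors) - max_affine(x, h, hp, anchors)
print("max abs error:", np.abs(approx - f).max())
```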
1909.13146
Andreas Georgiou
Andreas Georgiou, Vincent Fortuin, Harun Mustafa, Gunnar R\"atsch
META$^\mathbf{2}$: Memory-efficient taxonomic classification and abundance estimation for metagenomics with deep learning
null
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomic studies have increasingly utilized sequencing technologies in order to analyze DNA fragments found in environmental samples. One important step in this analysis is the taxonomic classification of the DNA fragments. Conventional read classification methods require large databases and vast amounts of memory to run, with recent deep learning methods suffering from very large model sizes. We therefore aim to develop a more memory-efficient technique for taxonomic classification. A task of particular interest is abundance estimation in metagenomic samples. Current attempts rely on classifying single DNA reads independently from each other and are therefore agnostic to co-occurrence patterns between taxa. In this work, we also attempt to take these patterns into account. We develop a novel memory-efficient read classification technique, combining deep learning and locality-sensitive hashing. We show that this approach outperforms conventional mapping-based and other deep learning methods for single-read taxonomic classification when restricting all methods to a fixed memory footprint. Moreover, we formulate the task of abundance estimation as a Multiple Instance Learning (MIL) problem and we extend current deep learning architectures with two different types of permutation-invariant MIL pooling layers: a) deepsets and b) attention-based pooling. We illustrate that our architectures can exploit the co-occurrence of species in metagenomic read sets and outperform the single-read architectures in predicting the distribution over taxa at higher taxonomic ranks.
[ { "created": "Sat, 28 Sep 2019 20:30:40 GMT", "version": "v1" }, { "created": "Mon, 10 Feb 2020 16:05:08 GMT", "version": "v2" } ]
2020-02-11
[ [ "Georgiou", "Andreas", "" ], [ "Fortuin", "Vincent", "" ], [ "Mustafa", "Harun", "" ], [ "Rätsch", "Gunnar", "" ] ]
Metagenomic studies have increasingly utilized sequencing technologies in order to analyze DNA fragments found in environmental samples. One important step in this analysis is the taxonomic classification of the DNA fragments. Conventional read classification methods require large databases and vast amounts of memory to run, with recent deep learning methods suffering from very large model sizes. We therefore aim to develop a more memory-efficient technique for taxonomic classification. A task of particular interest is abundance estimation in metagenomic samples. Current attempts rely on classifying single DNA reads independently from each other and are therefore agnostic to co-occurrence patterns between taxa. In this work, we also attempt to take these patterns into account. We develop a novel memory-efficient read classification technique, combining deep learning and locality-sensitive hashing. We show that this approach outperforms conventional mapping-based and other deep learning methods for single-read taxonomic classification when restricting all methods to a fixed memory footprint. Moreover, we formulate the task of abundance estimation as a Multiple Instance Learning (MIL) problem and we extend current deep learning architectures with two different types of permutation-invariant MIL pooling layers: a) deepsets and b) attention-based pooling. We illustrate that our architectures can exploit the co-occurrence of species in metagenomic read sets and outperform the single-read architectures in predicting the distribution over taxa at higher taxonomic ranks.
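Editor's illustration (not from the paper): this is not the META$^2$ architecture, only a minimal sketch of two ingredients the abstract mentions, hashing a read's k-mers into a fixed-size vector (a hashing-trick stand-in for locality-sensitive hashing) and pooling per-read vectors with a permutation-invariant mean in the spirit of deepsets. The k-mer length, vector dimension, and toy reads are assumptions.

```python
import hashlib
import numpy as np

def kmer_hash_features(read, k=8, dim=1024):
    """Hash the k-mers of a DNA read into a fixed-size count vector (hashing trick)."""
    v = np.zeros(dim)
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        bucket = int(hashlib.md5(kmer.encode()).hexdigest(), 16) % dim
        v[bucket] += 1.0
    return v

def deepsets_pool(read_embeddings):
    """Permutation-invariant pooling over the reads of one metagenomic sample."""
    return np.mean(read_embeddings, axis=0)

reads = ["ACGTACGTGGTACCTT", "TTGACCGTAGCTAGCA", "ACGTACGTGGTACGAA"]
embeddings = np.stack([kmer_hash_features(r) for r in reads])
sample_vector = deepsets_pool(embeddings)   # order of the reads does not matter
print(sample_vector.shape, sample_vector.sum())
```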
2002.09640
Lingjian Ye
Yimin Zhou, Zuguo Chen, Xiangdong Wu, Zengwu Tian, Liang Cheng, Lingjian Ye
The Outbreak Evaluation of COVID-19 in Wuhan District of China
7 pages, 18 figures
Healthcare. Multidisciplinary Digital Publishing Institute, 2021, 9(1): 61
10.3390/healthcare9010061
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In December 2019, 27 cases of novel coronavirus pneumonia were found in Wuhan, China; the virus was named 2019-nCoV temporarily and COVID-19 formally by the WHO on 11 February 2020. During December 2019 and January 2020, COVID-19 spread on a large scale among the population, causing great loss of life and property in China. In this paper, we first analyze the features and patterns of the virus transmission, and discuss the key impact factors and uncontrollable factors of epidemic transmission based on public data. The virus transmission is then modelled and used to estimate the inflexion point and extinction period of the epidemic, so as to provide theoretical support for the Chinese government in decision-making on epidemic prevention and the recovery of economic production. Further, this paper demonstrates the effectiveness of the prevention measures taken by the Chinese government, such as multi-level administrative region isolation. This is of great importance and practical significance for the world in dealing with public health emergencies.
[ { "created": "Sat, 22 Feb 2020 06:22:47 GMT", "version": "v1" } ]
2021-02-23
[ [ "Zhou", "Yimin", "" ], [ "Chen", "Zuguo", "" ], [ "Wu", "Xiangdong", "" ], [ "Tian", "Zengwu", "" ], [ "Cheng", "Liang", "" ], [ "Ye", "Lingjian", "" ] ]
In December 2019, 27 cases of novel coronavirus pneumonia were found in Wuhan, China; the virus was named 2019-nCoV temporarily and COVID-19 formally by the WHO on 11 February 2020. During December 2019 and January 2020, COVID-19 spread on a large scale among the population, causing great loss of life and property in China. In this paper, we first analyze the features and patterns of the virus transmission, and discuss the key impact factors and uncontrollable factors of epidemic transmission based on public data. The virus transmission is then modelled and used to estimate the inflexion point and extinction period of the epidemic, so as to provide theoretical support for the Chinese government in decision-making on epidemic prevention and the recovery of economic production. Further, this paper demonstrates the effectiveness of the prevention measures taken by the Chinese government, such as multi-level administrative region isolation. This is of great importance and practical significance for the world in dealing with public health emergencies.
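Editor's illustration (not from the paper): the abstract does not give the model equations, so the sketch below uses a generic SEIR compartment model integrated with SciPy as a common starting point for the kind of inflexion and extinction analysis described. The population size, initial conditions, and rate parameters are illustrative assumptions, not values fitted in the paper.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, n):
    """Standard SEIR compartments; not the specific model used in the paper."""
    s, e, i, r = y
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return ds, de, di, dr

n = 11_000_000                        # rough population of Wuhan, illustrative
y0 = (n - 40, 13, 27, 0)              # 27 initial cases plus assumed exposed individuals
t = np.linspace(0, 180, 181)
beta, sigma, gamma = 0.6, 1 / 5.2, 1 / 10   # illustrative rates, not fitted values
sol = odeint(seir, y0, t, args=(beta, sigma, gamma, n))
peak_day = int(t[np.argmax(sol[:, 2])])     # crude estimate of the epidemic inflexion
print("peak of infections around day", peak_day)
```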