id: stringlengths (9–13)
submitter: stringlengths (4–48)
authors: stringlengths (4–9.62k)
title: stringlengths (4–343)
comments: stringlengths (2–480)
journal-ref: stringlengths (9–309)
doi: stringlengths (12–138)
report-no: stringclasses (277 values)
categories: stringlengths (8–87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27–3.76k)
versions: listlengths (1–15)
update_date: stringlengths (10–10)
authors_parsed: listlengths (1–147)
abstract: stringlengths (24–3.75k)
2002.03540
Wenzhuo Zhang
Wen-Zhuo Zhang
A wave-pulse neural network for quasi-quantum coding
5 pages, 2 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We design a physical wave-pulse neural network (WPNN) that supports both wave and pulse propagation, which gives more degrees of freedom for neural coding than spiking neural networks (SNNs). We define the rules and the information entropy of this kind of neural network, where the signal speed, arrival time, and the length of connections between neurons all become crucial parameters for signal coding. We call it quasi-quantum coding (QQC) since the combination of wave and pulse signals behaves like a classical mimic of quantum wave-particle duality and can be studied by borrowing some concepts from quantum mechanics. We show that quasi-quantum coding can give efficient methods for both sound and image recognition. We also discuss the possibility of the wave-pulse neural network, and the quasi-quantum coding methods running on it, operating in biological brains, where both neural oscillations and action potentials are important to cognition.
[ { "created": "Thu, 6 Feb 2020 08:20:18 GMT", "version": "v1" } ]
2020-02-11
[ [ "Zhang", "Wen-Zhuo", "" ] ]
We design a physical wave-pulse neural network (WPNN) that supports both wave and pulse propagation, which gives more degrees of freedom for neural coding than spiking neural networks (SNNs). We define the rules and the information entropy of this kind of neural network, where the signal speed, arrival time, and the length of connections between neurons all become crucial parameters for signal coding. We call it quasi-quantum coding (QQC) since the combination of wave and pulse signals behaves like a classical mimic of quantum wave-particle duality and can be studied by borrowing some concepts from quantum mechanics. We show that quasi-quantum coding can give efficient methods for both sound and image recognition. We also discuss the possibility of the wave-pulse neural network, and the quasi-quantum coding methods running on it, operating in biological brains, where both neural oscillations and action potentials are important to cognition.
q-bio/0403014
Ignacio D. Peixoto
V. M. Kenkre, R. R. Parmenter, I. D. Peixoto, L. Sadasiv
A Theoretical Framework for the Analysis of the West Nile Virus Epidemic
12 pages, 9 postscript figures
Mathematical and Computer Modelling vol. 42 (2005), issue 3/4, pages 313-324.
10.1016/j.mcm.2004.08.012
null
q-bio.PE
null
We present a model for the growth of West Nile virus in mosquito and bird populations based on observations of the initial epidemic in the U.S. Increase of bird mortality as a result of infection, which is a feature of the epidemic, is found to yield an effect which is observable in principle, viz., periodic variations in the extent of infection. The vast difference between mosquito and bird lifespans, another peculiarity of the system, is shown to lead to interesting consequences regarding delay in the onset of the steady-state infection. An outline of a framework is provided to treat mosquito diffusion and bird migration.
[ { "created": "Sat, 13 Mar 2004 03:30:11 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kenkre", "V. M.", "" ], [ "Parmenter", "R. R.", "" ], [ "Peixoto", "I. D.", "" ], [ "Sadasiv", "L.", "" ] ]
We present a model for the growth of West Nile virus in mosquito and bird populations based on observations of the initial epidemic in the U.S. Increase of bird mortality as a result of infection, which is a feature of the epidemic, is found to yield an effect which is observable in principle, viz., periodic variations in the extent of infection. The vast difference between mosquito and bird lifespans, another peculiarity of the system, is shown to lead to interesting consequences regarding delay in the onset of the steady-state infection. An outline of a framework is provided to treat mosquito diffusion and bird migration.
1406.3185
Karthik Shankar
Karthik H. Shankar
Generic construction of scale-invariantly coarse grained memory
null
Lecture Notes in Artificial Intelligence, vol: 8955, pp: 175-184, 2015
null
null
q-bio.NC cond-mat.dis-nn cs.AI
http://creativecommons.org/licenses/by/3.0/
Encoding temporal information from the recent past as spatially distributed activations is essential in order for the entire recent past to be simultaneously accessible. Any biological or synthetic agent that relies on the past to predict or plan the future would be endowed with such a spatially distributed temporal memory. Simplistically, we would expect resource limitations to demand that the memory system store only the most useful information for future prediction. For natural signals in the real world, which show scale-free temporal fluctuations, the predictive information encoded in memory is maximal if the past information is scale-invariantly coarse grained. Here we examine the general mechanism for constructing a scale-invariantly coarse grained memory system. Remarkably, the generic construction is equivalent to encoding linear combinations of the Laplace transform of the past information and their approximate inverses. This reveals a fundamental construction constraint on memory networks that attempt to maximize the storage of predictive information relevant to the natural world.
[ { "created": "Thu, 12 Jun 2014 10:32:42 GMT", "version": "v1" }, { "created": "Fri, 2 Jan 2015 16:54:13 GMT", "version": "v2" } ]
2015-01-05
[ [ "Shankar", "Karthik H.", "" ] ]
Encoding temporal information from the recent past as spatially distributed activations is essential in order for the entire recent past to be simultaneously accessible. Any biological or synthetic agent that relies on the past to predict or plan the future would be endowed with such a spatially distributed temporal memory. Simplistically, we would expect resource limitations to demand that the memory system store only the most useful information for future prediction. For natural signals in the real world, which show scale-free temporal fluctuations, the predictive information encoded in memory is maximal if the past information is scale-invariantly coarse grained. Here we examine the general mechanism for constructing a scale-invariantly coarse grained memory system. Remarkably, the generic construction is equivalent to encoding linear combinations of the Laplace transform of the past information and their approximate inverses. This reveals a fundamental construction constraint on memory networks that attempt to maximize the storage of predictive information relevant to the natural world.
2308.01828
Stav Marcus
Stav Marcus, Ari M Turner and Guy Bunin
Local and extensive fluctuations in sparsely-interacting ecological communities
null
null
null
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological communities with many species can be classified into dynamical phases. In systems with all-to-all interactions, a phase where a fixed point is always reached and a dynamically-fluctuating phase have been found. The dynamics when interactions are sparse, with each species interacting with only several others, has remained largely unexplored. Here we show that a new type of phase appears in the phase diagram, where for the same control parameters different communities may reach either a fixed point or a state where the abundances of a finite subset of species fluctuate, and calculate the probability for each outcome. These fluctuating species are organized around short cycles in the interaction graph, and their abundances undergo large non-linear fluctuations. We characterize the approach from this phase to a phase with extensively many fluctuating species, and show that the probability of fluctuations grows continuously to one as the transition is approached, and that the number of fluctuating species diverges. This is qualitatively distinct from the transition to extensive fluctuations coming from a fixed point phase, which is marked by a loss of linear stability. The differences are traced back to the emergent binary character of the dynamics when far away from short cycles in the local fluctuations phase.
[ { "created": "Thu, 3 Aug 2023 15:39:26 GMT", "version": "v1" } ]
2023-08-04
[ [ "Marcus", "Stav", "" ], [ "Turner", "Ari M", "" ], [ "Bunin", "Guy", "" ] ]
Ecological communities with many species can be classified into dynamical phases. In systems with all-to-all interactions, a phase where a fixed point is always reached and a dynamically-fluctuating phase have been found. The dynamics when interactions are sparse, with each species interacting with only several others, has remained largely unexplored. Here we show that a new type of phase appears in the phase diagram, where for the same control parameters different communities may reach either a fixed point or a state where the abundances of a finite subset of species fluctuate, and calculate the probability for each outcome. These fluctuating species are organized around short cycles in the interaction graph, and their abundances undergo large non-linear fluctuations. We characterize the approach from this phase to a phase with extensively many fluctuating species, and show that the probability of fluctuations grows continuously to one as the transition is approached, and that the number of fluctuating species diverges. This is qualitatively distinct from the transition to extensive fluctuations coming from a fixed point phase, which is marked by a loss of linear stability. The differences are traced back to the emergent binary character of the dynamics when far away from short cycles in the local fluctuations phase.
1007.1340
Philipp Altrock
Philipp M. Altrock, Chaytanya S. Gokhale, Arne Traulsen
Stochastic slowdown in evolutionary processes
8 pages, 3 figures, accepted for publication
Phys. Rev. E 82, 011925 (2010)
10.1103/PhysRevE.82.011925
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine birth-death processes with state-dependent transition probabilities and at least one absorbing boundary. In evolution, this describes selection acting on two different types in a finite population where reproductive events occur successively. If the two types have equal fitness the system performs a random walk. If one type has a fitness advantage it is favored by selection, which introduces a bias (asymmetry) in the transition probabilities. How long does it take until advantageous mutants have invaded and taken over? Surprisingly, we find that the average time of such a process can increase, even if the mutant type always has a fitness advantage. We discuss this finding for the Moran process and develop a simplified model which allows a more intuitive understanding. We show that this effect can occur for weak but non-vanishing bias (selection) in the state-dependent transition rates and infer the scaling with system size. We also address the Wright-Fisher model commonly used in population genetics, which shows that this stochastic slowdown is not restricted to birth-death processes.
[ { "created": "Thu, 8 Jul 2010 10:43:30 GMT", "version": "v1" }, { "created": "Fri, 9 Jul 2010 13:55:35 GMT", "version": "v2" } ]
2010-10-12
[ [ "Altrock", "Philipp M.", "" ], [ "Gokhale", "Chaytanya S.", "" ], [ "Traulsen", "Arne", "" ] ]
We examine birth-death processes with state-dependent transition probabilities and at least one absorbing boundary. In evolution, this describes selection acting on two different types in a finite population where reproductive events occur successively. If the two types have equal fitness the system performs a random walk. If one type has a fitness advantage it is favored by selection, which introduces a bias (asymmetry) in the transition probabilities. How long does it take until advantageous mutants have invaded and taken over? Surprisingly, we find that the average time of such a process can increase, even if the mutant type always has a fitness advantage. We discuss this finding for the Moran process and develop a simplified model which allows a more intuitive understanding. We show that this effect can occur for weak but non-vanishing bias (selection) in the state-dependent transition rates and infer the scaling with system size. We also address the Wright-Fisher model commonly used in population genetics, which shows that this stochastic slowdown is not restricted to birth-death processes.
2304.07295
Yongquan Yang
Yongquan Yang, Jie Chen, Yani Wei, Mohammad Alobaidi and Hong Bu
Experts' cognition-driven safe noisy labels learning for precise segmentation of residual tumor in breast cancer
null
null
null
null
q-bio.QM cs.AI eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precise segmentation of residual tumor in breast cancer (PSRTBC) after neoadjuvant chemotherapy is a fundamental technique in the breast cancer treatment process. However, achieving PSRTBC is still a challenge, since breast cancer tissue and tumor cells commonly undergo complex and varied morphological changes after neoadjuvant chemotherapy, which inevitably increases the difficulty of producing a predictive model that generalizes well with machine learning. To alleviate this situation, in this paper we propose an experts' cognition-driven safe noisy labels learning (ECDSNLL) approach. In the concept of safe noisy labels learning, which is a typical type of safe weakly supervised learning, ECDSNLL is constructed by integrating pathology experts' cognition about identifying residual tumor in breast cancer with artificial intelligence experts' cognition about data modeling on the provided data. We show the advantages of the proposed ECDSNLL approach and its promising potential in addressing PSRTBC. We also release a better predictive model for achieving PSRTBC, which can be leveraged to promote the development of related application software.
[ { "created": "Thu, 13 Apr 2023 03:46:40 GMT", "version": "v1" } ]
2023-04-18
[ [ "Yang", "Yongquan", "" ], [ "Chen", "Jie", "" ], [ "Wei", "Yani", "" ], [ "Alobaidi", "Mohammad", "" ], [ "Bu", "Hong", "" ] ]
Precise segmentation of residual tumor in breast cancer (PSRTBC) after neoadjuvant chemotherapy is a fundamental technique in the breast cancer treatment process. However, achieving PSRTBC is still a challenge, since breast cancer tissue and tumor cells commonly undergo complex and varied morphological changes after neoadjuvant chemotherapy, which inevitably increases the difficulty of producing a predictive model that generalizes well with machine learning. To alleviate this situation, in this paper we propose an experts' cognition-driven safe noisy labels learning (ECDSNLL) approach. In the concept of safe noisy labels learning, which is a typical type of safe weakly supervised learning, ECDSNLL is constructed by integrating pathology experts' cognition about identifying residual tumor in breast cancer with artificial intelligence experts' cognition about data modeling on the provided data. We show the advantages of the proposed ECDSNLL approach and its promising potential in addressing PSRTBC. We also release a better predictive model for achieving PSRTBC, which can be leveraged to promote the development of related application software.
1007.3311
James Edwards
James R. Edwards, Mary R. Myerscough
Intelligent Decisions from the Hive Mind: Foragers and Nectar Receivers of Apis mellifera Collaborate to Optimise Active Forager Numbers
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a differential equation-based mathematical model of nectar foraging by the honey bee Apis mellifera. The model focuses on two behavioural classes: nectar foragers and nectar receivers. Results generated from the model are used to demonstrate how different classes within a collective can collaborate to combine information and produce finely tuned decisions through simple interactions. In particular we show the importance of the `search time' - the time a returning forager takes to find an available nectar receiver - in restricting the forager population to a level consistent with colony-wide needs.
[ { "created": "Mon, 19 Jul 2010 23:54:04 GMT", "version": "v1" } ]
2010-07-21
[ [ "Edwards", "James R.", "" ], [ "Myerscough", "Mary R.", "" ] ]
We present a differential equation-based mathematical model of nectar foraging by the honey bee Apis mellifera. The model focuses on two behavioural classes: nectar foragers and nectar receivers. Results generated from the model are used to demonstrate how different classes within a collective can collaborate to combine information and produce finely tuned decisions through simple interactions. In particular we show the importance of the `search time' - the time a returning forager takes to find an available nectar receiver - in restricting the forager population to a level consistent with colony-wide needs.
2003.06933
Amirhoshang Hoseinpour Dehkordi
Amirhoshang Hoseinpour Dehkordi, Majid Alizadeh, Pegah Derakhshan, Peyman Babazadeh, Arash Jahandideh
Understanding Epidemic Data and Statistics: A case study of COVID-19
18 pages, 12 figures
null
10.1002/jmv.25885
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 2019 novel coronavirus (COVID-19) has affected 116 countries (as of March 12), with more than 118,000 confirmed cases. Understanding the transmission dynamics of the infection in each affected country on a daily basis, and evaluating the effectiveness of control policies, is critical for our further actions. To date, the statistics of reported COVID-19 cases show that more than 80 percent of infected people had a mild case of the disease, while around 14 percent experienced a severe one and about 5 percent are categorized as critical. Today's report (2020-03-12; daily updates in the prepared website) shows that the confirmed cases of COVID-19 in China, South Korea, Italy, and Iran are 80932, 7869, 12462, and 10075, respectively. Calculating the total case fatality rate (CFR) of Italy (2020-03-04), about 7.9% of confirmed cases passed away. Compared to South Korea's rate of 0.76% (10 times lower than Italy) and China's 3.8% (50% lower than Italy), the CFR of Italy is too high. Some policies yielded significant changes in the trend of cases: the lockdown policy in China and Italy (the effect observed after 11 days), the shutdown of all nonessential companies in Hubei (the effect observed after 5 days), the combined policy in South Korea, and reduced working hours in Iran.
[ { "created": "Sun, 15 Mar 2020 21:56:15 GMT", "version": "v1" } ]
2020-06-25
[ [ "Dehkordi", "Amirhoshang Hoseinpour", "" ], [ "Alizadeh", "Majid", "" ], [ "Derakhshan", "Pegah", "" ], [ "Babazadeh", "Peyman", "" ], [ "Jahandideh", "Arash", "" ] ]
The 2019 novel coronavirus (COVID-19) has affected 116 countries (as of March 12), with more than 118,000 confirmed cases. Understanding the transmission dynamics of the infection in each affected country on a daily basis, and evaluating the effectiveness of control policies, is critical for our further actions. To date, the statistics of reported COVID-19 cases show that more than 80 percent of infected people had a mild case of the disease, while around 14 percent experienced a severe one and about 5 percent are categorized as critical. Today's report (2020-03-12; daily updates in the prepared website) shows that the confirmed cases of COVID-19 in China, South Korea, Italy, and Iran are 80932, 7869, 12462, and 10075, respectively. Calculating the total case fatality rate (CFR) of Italy (2020-03-04), about 7.9% of confirmed cases passed away. Compared to South Korea's rate of 0.76% (10 times lower than Italy) and China's 3.8% (50% lower than Italy), the CFR of Italy is too high. Some policies yielded significant changes in the trend of cases: the lockdown policy in China and Italy (the effect observed after 11 days), the shutdown of all nonessential companies in Hubei (the effect observed after 5 days), the combined policy in South Korea, and reduced working hours in Iran.
1901.02068
James Hope Mr
J. Hope, K. Aristovich, C. A. R. Chapman, A. Volschenk, F. Vanholsbeeck, A. McDaid
Extracting impedance changes from a frequency multiplexed signal during neural activity in sciatic nerve of rat: preliminary study in-vitro
15 pages, 8 figures
null
10.1088/1361-6579/ab0c24
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: Establish suitable frequency spacing and demodulation steps to use when extracting impedance changes from frequency division multiplexed (FDM) carrier signals in peripheral nerve. Approach: Experiments were performed in-vitro on cadavers immediately following euthanasia. Neural activity was evoked via stimulation of nerves in the hind paw, while carrier signals were injected, and recordings obtained, with a dual ring nerve cuff implanted on the sciatic nerve. Frequency analysis of recorded compound action potentials (CAPs) and extracted impedance changes, with the latter obtained using established demodulation methods, was used to determine suitable frequency spacing of carrier signals, and bandpass filter (BPF) bandwidth and order, for a frequency multiplexed signal. Main results: CAPs and impedance changes were dominant in the frequency bands 200 to 500 Hz and 100 to 200 Hz, respectively. A Tukey window was introduced to remove ringing from Gibbs phenomena. A +/- 750 Hz BPF bandwidth was selected to encompass 99.99% of the frequency power of the impedance change. Modelling predicted that minimum BPF orders of 16 for 2 kHz spacing, and 10 for 4 kHz spacing, were required to avoid ringing from the neighbouring carrier signal, while FDM experiments verified that BPF orders of 12 and 8, respectively, were required. With a notch filter centred on the neighbouring signal, a BPF order of at least 6 or 4 was required for 2 and 4 kHz spacing, respectively. Significance: The results establish drive frequency spacing and demodulation settings for use in FDM electrical impedance tomography (EIT) experiments, as well as a framework for their selection, and, for the first time, demonstrate the viability of FDM-EIT of neural activity in peripheral nerve, which will be a central aspect of future real-time neural-EIT systems and EIT-based neural prosthetic interfaces.
[ { "created": "Mon, 7 Jan 2019 21:18:50 GMT", "version": "v1" } ]
2019-06-28
[ [ "Hope", "J.", "" ], [ "Aristovich", "K.", "" ], [ "Chapman", "C. A. R.", "" ], [ "Volschenk", "A.", "" ], [ "Vanholsbeeck", "F.", "" ], [ "McDaid", "A.", "" ] ]
Objective: Establish suitable frequency spacing and demodulation steps to use when extracting impedance changes from frequency division multiplexed (FDM) carrier signals in peripheral nerve. Approach: Experiments were performed in-vitro on cadavers immediately following euthanasia. Neural activity was evoked via stimulation of nerves in the hind paw, while carrier signals were injected, and recordings obtained, with a dual ring nerve cuff implanted on the sciatic nerve. Frequency analysis of recorded compound action potentials (CAPs) and extracted impedance changes, with the latter obtained using established demodulation methods, was used to determine suitable frequency spacing of carrier signals, and bandpass filter (BPF) bandwidth and order, for a frequency multiplexed signal. Main results: CAPs and impedance changes were dominant in the frequency bands 200 to 500 Hz and 100 to 200 Hz, respectively. A Tukey window was introduced to remove ringing from Gibbs phenomena. A +/- 750 Hz BPF bandwidth was selected to encompass 99.99% of the frequency power of the impedance change. Modelling predicted that minimum BPF orders of 16 for 2 kHz spacing, and 10 for 4 kHz spacing, were required to avoid ringing from the neighbouring carrier signal, while FDM experiments verified that BPF orders of 12 and 8, respectively, were required. With a notch filter centred on the neighbouring signal, a BPF order of at least 6 or 4 was required for 2 and 4 kHz spacing, respectively. Significance: The results establish drive frequency spacing and demodulation settings for use in FDM electrical impedance tomography (EIT) experiments, as well as a framework for their selection, and, for the first time, demonstrate the viability of FDM-EIT of neural activity in peripheral nerve, which will be a central aspect of future real-time neural-EIT systems and EIT-based neural prosthetic interfaces.
2404.04723
Brian Camley
Kurmanbek Kaiyrbekov and Brian A. Camley
Does nematic order allow groups of elongated cells to sense electric fields better?
null
null
null
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collective response to external directional cues like electric fields plays a pivotal role in processes such as tissue development, regeneration, and wound healing. In this study we focus on the impact of anisotropy in cell shape and local cell alignment on the collective response to electric fields. We model elongated cells that have a different accuracy sensing the field depending on their orientation with respect to the field. Elongated cells often line up with their long axes in the same direction - "nematic order" - does this help the group of cells sense the field more accurately? We use simulations of a simple model to show that if cells orient themselves perpendicular to their average velocity, alignment of a cell's long axis to its nearest neighbors' orientation can enhance the directional response to electric fields. However, for cells to benefit from aligning, their accuracy of sensing must be strongly dependent on cell orientation. We also show that cell-cell adhesion modulates the accuracy of cells in the group.
[ { "created": "Sat, 6 Apr 2024 20:20:24 GMT", "version": "v1" } ]
2024-04-09
[ [ "Kaiyrbekov", "Kurmanbek", "" ], [ "Camley", "Brian A.", "" ] ]
Collective response to external directional cues like electric fields plays a pivotal role in processes such as tissue development, regeneration, and wound healing. In this study we focus on the impact of anisotropy in cell shape and local cell alignment on the collective response to electric fields. We model elongated cells that have a different accuracy sensing the field depending on their orientation with respect to the field. Elongated cells often line up with their long axes in the same direction - "nematic order" - does this help the group of cells sense the field more accurately? We use simulations of a simple model to show that if cells orient themselves perpendicular to their average velocity, alignment of a cell's long axis to its nearest neighbors' orientation can enhance the directional response to electric fields. However, for cells to benefit from aligning, their accuracy of sensing must be strongly dependent on cell orientation. We also show that cell-cell adhesion modulates the accuracy of cells in the group.
2011.00297
Jantine Broek PhD
Jantine A.C. Broek and Guillaume Drion
Generalisation of neuronal excitability allows for the identification of an excitability change parameter that links to an experimentally measurable value
21 pages, 10 main figures, 3 supplementary figures
null
10.5281/zenodo.4159691
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal excitability is the phenomenon that describes action potential generation due to a stimulus input. Commonly, neuronal excitability is divided into two classes: Type I and Type II, both having different properties that affect information processing, such as thresholding and gain scaling. These properties can be mathematically studied using generalised phenomenological models, such as the FitzHugh-Nagumo (FHN) model and the mirrored FHN (mFHN). The FHN model shows that each excitability type corresponds to one specific type of bifurcation in the phase plane: Type I underlies a saddle-node on invariant circle (SNIC) bifurcation, and Type II a Hopf bifurcation. The difficulty of modelling Type I excitability is that it is not only represented by its underlying bifurcation, but also should be able to generate frequency while maintaining a small depolarising current. Using the mFHN model, we show that this situation is possible without modifying the phase portrait, due to the incorporation of a slow regenerative variable. We show that in the singular limit of the mFHN model, the time-scale separation can be chosen such that there is a configuration of a classical phase portrait that allows for a SNIC bifurcation, zero-frequency onset and a depolarising current, as observed in Type I excitability. Using the definition of slow conductance, g_s, we show that these mathematical findings for excitability change are translatable to reduced conductance-based models and also relate to an experimentally measurable quantity. This not only allows for a measure of excitability change, but also relates the mathematical parameters that indicate a physiological Type I excitability to parameters that can be tuned during experiments.
[ { "created": "Sat, 31 Oct 2020 15:58:02 GMT", "version": "v1" } ]
2020-11-03
[ [ "Broek", "Jantine A. C.", "" ], [ "Drion", "Guillaume", "" ] ]
Neuronal excitability is the phenomenon that describes action potential generation due to a stimulus input. Commonly, neuronal excitability is divided into two classes: Type I and Type II, both having different properties that affect information processing, such as thresholding and gain scaling. These properties can be mathematically studied using generalised phenomenological models, such as the FitzHugh-Nagumo (FHN) model and the mirrored FHN (mFHN). The FHN model shows that each excitability type corresponds to one specific type of bifurcation in the phase plane: Type I underlies a saddle-node on invariant circle (SNIC) bifurcation, and Type II a Hopf bifurcation. The difficulty of modelling Type I excitability is that it is not only represented by its underlying bifurcation, but also should be able to generate frequency while maintaining a small depolarising current. Using the mFHN model, we show that this situation is possible without modifying the phase portrait, due to the incorporation of a slow regenerative variable. We show that in the singular limit of the mFHN model, the time-scale separation can be chosen such that there is a configuration of a classical phase portrait that allows for a SNIC bifurcation, zero-frequency onset and a depolarising current, as observed in Type I excitability. Using the definition of slow conductance, g_s, we show that these mathematical findings for excitability change are translatable to reduced conductance-based models and also relate to an experimentally measurable quantity. This not only allows for a measure of excitability change, but also relates the mathematical parameters that indicate a physiological Type I excitability to parameters that can be tuned during experiments.
2306.01629
Forrest Sheldon
T. M. A. Fink and F. C. Sheldon
Number of attractors in the critical Kauffman model is exponential
5 pages, 3 figures
null
null
null
q-bio.MN cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
The Kauffman model is the archetypal model of genetic computation. It highlights the importance of criticality, at which many biological systems seem poised. In a series of advances, researchers have homed in on how the number of attractors in the critical regime grows with network size. But a definitive answer has proved elusive. We prove that, for the critical Kauffman model with connectivity one, the number of attractors grows at least, and at most, as $(2/\!\sqrt{e})^N$. This is the first proof that the number of attractors in a critical Kauffman model grows exponentially.
[ { "created": "Fri, 2 Jun 2023 15:47:57 GMT", "version": "v1" } ]
2023-06-05
[ [ "Fink", "T. M. A.", "" ], [ "Sheldon", "F. C.", "" ] ]
The Kauffman model is the archetypal model of genetic computation. It highlights the importance of criticality, at which many biological systems seem poised. In a series of advances, researchers have homed in on how the number of attractors in the critical regime grows with network size. But a definitive answer has proved elusive. We prove that, for the critical Kauffman model with connectivity one, the number of attractors grows at least, and at most, as $(2/\!\sqrt{e})^N$. This is the first proof that the number of attractors in a critical Kauffman model grows exponentially.
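The exponential bound in the abstract above is easy to evaluate numerically. This short sketch (the values of N are hypothetical) checks that the base 2/sqrt(e) exceeds one, so the bound indeed grows exponentially with network size:

```python
import math

# Base of the proved growth rate for the number of attractors in the
# critical connectivity-one Kauffman model: 2 / sqrt(e) ~ 1.2131 > 1.
BASE = 2.0 / math.sqrt(math.e)

def attractor_bound(n: int) -> float:
    """Evaluate (2/sqrt(e))**N for a network of N nodes."""
    return BASE ** n
```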
1409.7915
Delfim F. M. Torres
Helena Sofia Rodrigues, M. Teresa T. Monteiro, Delfim F. M. Torres, Ana Clara Silva, Carla Sousa, Cl\'audia Concei\c{c}\~ao
Dengue in Madeira Island
This is a preprint of a paper whose final and definite form will be published in the volume 'Mathematics of Planet Earth' that initiates the book series 'CIM Series in Mathematical Sciences' (CIM-MS) published by Springer. Submitted Oct/2013; Revised 16/July/2014 and 20/Sept/2014; Accepted 28/Sept/2014
Dynamics, Games and Science, CIM Series in Mathematical Sciences 1 (2015), 593--605
10.1007/978-3-319-16118-1_32
null
q-bio.PE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dengue is a vector-borne disease, and 40% of the world's population is at risk. Dengue transcends international borders and can be found in tropical and subtropical regions around the world, predominantly in urban and semi-urban areas. A model for dengue disease transmission, composed of mutually exclusive compartments representing the human and vector dynamics, is presented in this study. The data are from Madeira, a Portuguese island, where an unprecedented outbreak was detected in October 2012. The aim of this work is to simulate the repercussions of the control measures in the fight against the disease.
[ { "created": "Sun, 28 Sep 2014 14:36:28 GMT", "version": "v1" } ]
2016-08-10
[ [ "Rodrigues", "Helena Sofia", "" ], [ "Monteiro", "M. Teresa T.", "" ], [ "Torres", "Delfim F. M.", "" ], [ "Silva", "Ana Clara", "" ], [ "Sousa", "Carla", "" ], [ "Conceição", "Cláudia", "" ] ]
Dengue is a vector-borne disease, and 40% of the world's population is at risk. Dengue transcends international borders and can be found in tropical and subtropical regions around the world, predominantly in urban and semi-urban areas. A model for dengue disease transmission, composed of mutually exclusive compartments representing the human and vector dynamics, is presented in this study. The data are from Madeira, a Portuguese island, where an unprecedented outbreak was detected in October 2012. The aim of this work is to simulate the repercussions of the control measures in the fight against the disease.
1608.09009
Pabitra Pal Choudhury
Suvankar Ghosh, Shankar Kumar Ghosh, Camellia Ray, Goutam Paul, Pabitra Pal Choudhury, Raja Banerjee
Understanding the behavioural difference of PPCA among its homologs in C7 family towards recognition of DXCA
Pages-13, Figures-6, Tables-4
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Among all the proteins of the Periplasmic C-type cytochrome A (PPCA) family, obtained from cytochrome C7 found in Geobacter sulfurreducens, the PPCA protein can interact with Deoxycholate (DXCA), while its other homologs do not, as observed from the crystal structures. Utilizing the concept of the 'structure-function relationship', an effort has been initiated towards understanding the driving force for the recognition of DXCA exclusively by PPCA among its homologs. Further, a combinatorial analysis of the binding sequences (contiguous sequences of amino acid residues at the binding locations) is performed to build graph-theoretic models, which show that PPCA differs from its homologs. Analysis of the results suggests that the underlying impetus for the recognition of DXCA by PPCA is embedded in its primary sequence and 3D conformation.
[ { "created": "Thu, 17 Sep 2015 06:35:59 GMT", "version": "v1" } ]
2016-09-01
[ [ "Ghosh", "Suvankar", "" ], [ "Ghosh", "Shankar Kumar", "" ], [ "Ray", "Camellia", "" ], [ "Paul", "Goutam", "" ], [ "Choudhury", "Pabitra Pal", "" ], [ "Banerjee", "Raja", "" ] ]
Among all the proteins of the Periplasmic C-type cytochrome A (PPCA) family, obtained from cytochrome C7 found in Geobacter sulfurreducens, the PPCA protein can interact with Deoxycholate (DXCA), while its other homologs do not, as observed from the crystal structures. Utilizing the concept of the 'structure-function relationship', an effort has been initiated towards understanding the driving force for the recognition of DXCA exclusively by PPCA among its homologs. Further, a combinatorial analysis of the binding sequences (contiguous sequences of amino acid residues at the binding locations) is performed to build graph-theoretic models, which show that PPCA differs from its homologs. Analysis of the results suggests that the underlying impetus for the recognition of DXCA by PPCA is embedded in its primary sequence and 3D conformation.
1201.5531
Jose Emilio Jimenez
J. E. Jimenez-Roldan and R. B. Freedman and R. A. R\"omer and S. A. Wells
Rapid simulation of protein motion: merging flexibility, rigidity and normal mode analyses
34 pages, 22 Figures, Phys. Biol. 9 (2012)
Physical Biology 9, 016008-12 (2012)
10.1088/1478-3975/9/1/016008
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein function frequently involves conformational changes with large amplitude on timescales which are difficult and computationally expensive to access using molecular dynamics. In this paper, we report on the combination of three computationally inexpensive simulation methods (normal mode analysis using the elastic network model, rigidity analysis using the pebble game algorithm, and geometric simulation of protein motion) to explore conformational change along normal mode eigenvectors. Using a combination of ELNEMO and FIRST/FRODA software, large-amplitude motions in proteins with hundreds or thousands of residues can be rapidly explored within minutes using desktop computing resources. We apply the method to a representative set of six proteins covering a range of sizes and structural characteristics and show that the method identifies specific types of motion in each case and determines their amplitude limits.
[ { "created": "Wed, 25 Jan 2012 15:31:35 GMT", "version": "v1" } ]
2012-02-10
[ [ "Jimenez-Roldan", "J. E.", "" ], [ "Freedman", "R. B.", "" ], [ "Römer", "R. A.", "" ], [ "Wells", "S. A.", "" ] ]
Protein function frequently involves conformational changes with large amplitude on timescales which are difficult and computationally expensive to access using molecular dynamics. In this paper, we report on the combination of three computationally inexpensive simulation methods (normal mode analysis using the elastic network model, rigidity analysis using the pebble game algorithm, and geometric simulation of protein motion) to explore conformational change along normal mode eigenvectors. Using a combination of ELNEMO and FIRST/FRODA software, large-amplitude motions in proteins with hundreds or thousands of residues can be rapidly explored within minutes using desktop computing resources. We apply the method to a representative set of six proteins covering a range of sizes and structural characteristics and show that the method identifies specific types of motion in each case and determines their amplitude limits.
2002.11013
Nicholas Glykos
Dimitrios A. Mitsikas and Nicholas M. Glykos
On the propensity of Asn-Gly-containing heptapeptides to form $\beta$-turn structures : comparison between ab initio quantum mechanical calculations and Molecular Dynamics simulations
In the supplementary information file figures S5, S6 and S7 the $\beta$-turn occupancies were wrong. They have been corrected
PLoS ONE (2020), 15(12): e0243429
10.1371/journal.pone.0243429
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Both molecular mechanical and quantum mechanical calculations play an important role in describing the behavior and structure of molecules. In this work, we compare for the same peptide systems the results obtained from folding molecular dynamics simulations with previously reported results from quantum mechanical calculations. More specifically, three molecular dynamics simulations of 5 $\mu$s each in explicit water solvent were carried out for three Asn-Gly-containing heptapeptides, in order to study their folding and dynamics. Previous data, based on quantum mechanical calculations with DFT methods, have shown that these peptides adopt $\beta$-turn structures in aqueous solution, with the type I' $\beta$-turn being the most preferred motif. The results from our analyses indicate that for the given system the two methods diverge in their predictions. The possibility of a force field-dependent deficiency is examined as a possible source of the observed discrepancy.
[ { "created": "Tue, 25 Feb 2020 16:38:31 GMT", "version": "v1" }, { "created": "Thu, 27 Feb 2020 09:36:08 GMT", "version": "v2" } ]
2021-01-07
[ [ "Mitsikas", "Dimitrios A.", "" ], [ "Glykos", "Nicholas M.", "" ] ]
Both molecular mechanical and quantum mechanical calculations play an important role in describing the behavior and structure of molecules. In this work, we compare for the same peptide systems the results obtained from folding molecular dynamics simulations with previously reported results from quantum mechanical calculations. More specifically, three molecular dynamics simulations of 5 $\mu$s each in explicit water solvent were carried out for three Asn-Gly-containing heptapeptides, in order to study their folding and dynamics. Previous data, based on quantum mechanical calculations with DFT methods, have shown that these peptides adopt $\beta$-turn structures in aqueous solution, with the type I' $\beta$-turn being the most preferred motif. The results from our analyses indicate that for the given system the two methods diverge in their predictions. The possibility of a force field-dependent deficiency is examined as a possible source of the observed discrepancy.
2007.01158
Saeid Alirezazadeh
Saeid Alirezazadeh and Khadijeh Alibabaei and Stephen P. Hubbell
Species Area Relationship (SAR): Pattern Description with Geometrical Approach
This work is done within 2017-2019. On the date of publishing on ArXiv, Saeid Alirezazadeh is with C4 - Cloud Computing Competence Centre (C4-UBI), Universidade da Beira Interior, Covilh\~{a}, Portugal, and Khadijeh Alibabaei is with C-MAST Center for Mechanical and Aerospace Science and Technologies, University of Beira Interior, Covilh\~{a}, Portugal
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several formulations describe the pattern of the species-area relationship (SAR): log-log linear, semi-log linear, and others. These patterns mainly explain the species-area relationship for large areas; for small areas, they deviate significantly from real data. We consider the geometric positions of the individuals of a species and, based on these, find the probability of observing at least one individual of the species. We apply a translation of the well-studied problem of mixing salt water in a tank to derive the formula of the SAR. For a rectangular sample area, the species-area relationship follows, with some simplification, the pattern $S=c|A^{\beta}+a|^z$, where $S$ is the number of species in an area of size $A$ and $a,c,z$, and $\beta$ are constants with $z<1$ and $\beta\leq1$. We also show how the constant $z$ relates to some macroecological patterns, namely spatial aggregation, percentage of area coverage, and the core-satellite model. We exemplify our method using data on tropical tree species from a 50 ha plot on Barro Colorado Island (BCI), Panama, using all individuals.
[ { "created": "Thu, 2 Jul 2020 14:46:52 GMT", "version": "v1" }, { "created": "Tue, 7 Jul 2020 14:24:39 GMT", "version": "v2" }, { "created": "Wed, 8 Jul 2020 14:47:21 GMT", "version": "v3" }, { "created": "Mon, 11 Jan 2021 16:52:53 GMT", "version": "v4" }, { "crea...
2021-07-06
[ [ "Alirezazadeh", "Saeid", "" ], [ "Alibabaei", "Khadijeh", "" ], [ "Hubbell", "Stephen P.", "" ] ]
Several formulations describe the pattern of the species-area relationship (SAR): log-log linear, semi-log linear, and others. These patterns mainly explain the species-area relationship for large areas; for small areas, they deviate significantly from real data. We consider the geometric positions of the individuals of a species and, based on these, find the probability of observing at least one individual of the species. We apply a translation of the well-studied problem of mixing salt water in a tank to derive the formula of the SAR. For a rectangular sample area, the species-area relationship follows, with some simplification, the pattern $S=c|A^{\beta}+a|^z$, where $S$ is the number of species in an area of size $A$ and $a,c,z$, and $\beta$ are constants with $z<1$ and $\beta\leq1$. We also show how the constant $z$ relates to some macroecological patterns, namely spatial aggregation, percentage of area coverage, and the core-satellite model. We exemplify our method using data on tropical tree species from a 50 ha plot on Barro Colorado Island (BCI), Panama, using all individuals.
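The fitted form $S=c|A^{\beta}+a|^z$ from the abstract above is easy to sketch numerically. The parameter values below are hypothetical, chosen only to respect the stated constraints $z<1$ and $\beta\leq1$:

```python
def species_area(area: float, c: float = 5.0, a: float = 1.0,
                 z: float = 0.25, beta: float = 1.0) -> float:
    """Species-area relationship S = c * |A**beta + a|**z.
    Parameter values are hypothetical; the abstract only constrains
    z < 1 and beta <= 1."""
    return c * abs(area**beta + a) ** z

# With z < 1 the predicted species count grows sublinearly in area.
counts = [species_area(A) for A in (1.0, 10.0, 100.0, 1000.0)]
```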
2403.15842
Kristina Wicke
Vincent Moulton, Andreas Spillner, Kristina Wicke
Phylogenetic diversity indices from an affine and projective viewpoint
23 pages, 11 figures
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic diversity indices are commonly used to rank the elements in a collection of species or populations for conservation purposes. The derivation of these indices is typically based on some quantitative description of the evolutionary history of the species in question, which is often given in terms of a phylogenetic tree. Both rooted and unrooted phylogenetic trees can be employed, and there are close connections between the indices that are derived in these two different ways. In this paper, we introduce more general phylogenetic diversity indices that can be derived from collections of subsets (clusters) and collections of bipartitions (splits) of the given set of species. Such indices could be useful, for example, in case there is some uncertainty in the topology of the tree being used to derive a phylogenetic diversity index. As well as characterizing some of the indices that we introduce in terms of their special properties, we provide a link between cluster-based and split-based phylogenetic diversity indices that uses a discrete analogue of the classical link between affine and projective geometry. This provides a unified framework for many of the various phylogenetic diversity indices used in the literature based on rooted and unrooted phylogenetic trees, generalizations and new proofs for previous results concerning tree-based indices, and a way to define some new phylogenetic diversity indices that naturally arise as affine or projective variants of each other.
[ { "created": "Sat, 23 Mar 2024 13:47:43 GMT", "version": "v1" } ]
2024-03-26
[ [ "Moulton", "Vincent", "" ], [ "Spillner", "Andreas", "" ], [ "Wicke", "Kristina", "" ] ]
Phylogenetic diversity indices are commonly used to rank the elements in a collection of species or populations for conservation purposes. The derivation of these indices is typically based on some quantitative description of the evolutionary history of the species in question, which is often given in terms of a phylogenetic tree. Both rooted and unrooted phylogenetic trees can be employed, and there are close connections between the indices that are derived in these two different ways. In this paper, we introduce more general phylogenetic diversity indices that can be derived from collections of subsets (clusters) and collections of bipartitions (splits) of the given set of species. Such indices could be useful, for example, in case there is some uncertainty in the topology of the tree being used to derive a phylogenetic diversity index. As well as characterizing some of the indices that we introduce in terms of their special properties, we provide a link between cluster-based and split-based phylogenetic diversity indices that uses a discrete analogue of the classical link between affine and projective geometry. This provides a unified framework for many of the various phylogenetic diversity indices used in the literature based on rooted and unrooted phylogenetic trees, generalizations and new proofs for previous results concerning tree-based indices, and a way to define some new phylogenetic diversity indices that naturally arise as affine or projective variants of each other.
2103.11230
Pedro Pessoa
Pedro Pessoa
Legendre transformation and information geometry for the maximum entropy theory of ecology
Typos fixed
Phys. Sci. Forum 2021, 3(1), 1
10.3390/psf2021003001
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here I investigate some mathematical aspects of the maximum entropy theory of ecology (METE). In particular, I address the geometric structure endowed on METE by information geometry. As novel results, the macrostate entropy is calculated analytically by the Legendre transformation of the log-normalizer in METE. This result allows for the calculation of the metric terms in the information geometry arising from METE and, consequently, of the covariance matrix between METE variables.
[ { "created": "Sat, 20 Mar 2021 19:50:04 GMT", "version": "v1" }, { "created": "Fri, 26 Mar 2021 14:08:37 GMT", "version": "v2" }, { "created": "Mon, 12 Apr 2021 18:43:54 GMT", "version": "v3" }, { "created": "Sat, 21 Aug 2021 22:41:34 GMT", "version": "v4" } ]
2021-11-09
[ [ "Pessoa", "Pedro", "" ] ]
Here I investigate some mathematical aspects of the maximum entropy theory of ecology (METE). In particular, I address the geometric structure endowed on METE by information geometry. As novel results, the macrostate entropy is calculated analytically by the Legendre transformation of the log-normalizer in METE. This result allows for the calculation of the metric terms in the information geometry arising from METE and, consequently, of the covariance matrix between METE variables.
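The Legendre-transform machinery invoked in the abstract above can be illustrated generically: for a log-normalizer A(theta), the dual potential is A*(mu) = sup_theta (theta*mu - A(theta)). The sketch below uses a toy quadratic A (not METE's actual log-normalizer) and checks the well-known self-duality of the quadratic case on a grid:

```python
def legendre(A, thetas, mu):
    """Numerical Legendre transform A*(mu) = max over a grid of
    natural parameters theta of (theta * mu - A(theta))."""
    return max(t * mu - A(t) for t in thetas)

# Toy example: A(theta) = theta**2 / 2 is self-dual, so A*(mu) = mu**2 / 2.
grid = [i * 1e-3 - 10.0 for i in range(20001)]
dual_at_3 = legendre(lambda t: t * t / 2.0, grid, 3.0)
```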
2407.15220
Yuliya Burankova
Yuliya Burankova, Miriam Abele, Mohammad Bakhtiari, Christine von T\"orne, Teresa Barth, Lisa Schweizer, Pieter Giesbertz, Johannes R. Schmidt, Stefan Kalkhof, Janina M\"uller-Deile, Peter A van Veelen, Yassene Mohammed, Elke Hammer, Lis Arend, Klaudia Adamowicz, Tanja Laske, Anne Hartebrodt, Tobias Frisch, Chen Meng, Julian Matschinske, Julian Sp\"ath, Richard R\"ottger, Veit Schw\"ammle, Stefanie M. Hauck, Stefan Lichtenthaler, Axel Imhof, Matthias Mann, Christina Ludwig, Bernhard Kuster, Jan Baumbach, Olga Zolotareva
Privacy-Preserving Multi-Center Differential Protein Abundance Analysis with FedProt
52 pages, 16 figures, 12 tables. Last two authors listed are joint last authors
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Quantitative mass spectrometry has revolutionized proteomics by enabling the simultaneous quantification of thousands of proteins. Pooling patient-derived data from multiple institutions enhances statistical power but raises significant privacy concerns. Here we introduce FedProt, the first privacy-preserving tool for collaborative differential protein abundance analysis of distributed data, which utilizes federated learning and additive secret sharing. In the absence of a multicenter patient-derived dataset for evaluation, we created two: one across five centers from LFQ E. coli experiments and one across three centers from TMT human serum. Evaluations using these datasets confirm that FedProt achieves accuracy equivalent to DEqMS applied to pooled data, with completely negligible absolute differences no greater than $4 \times 10^{-12}$. In contrast, -log10(p-values) computed by the most accurate meta-analysis methods diverged from the centralized analysis results by up to 25-27. FedProt is available as a web tool with detailed documentation as a FeatureCloud App.
[ { "created": "Sun, 21 Jul 2024 17:09:20 GMT", "version": "v1" } ]
2024-07-23
[ [ "Burankova", "Yuliya", "" ], [ "Abele", "Miriam", "" ], [ "Bakhtiari", "Mohammad", "" ], [ "von Törne", "Christine", "" ], [ "Barth", "Teresa", "" ], [ "Schweizer", "Lisa", "" ], [ "Giesbertz", "Pieter", ""...
Quantitative mass spectrometry has revolutionized proteomics by enabling the simultaneous quantification of thousands of proteins. Pooling patient-derived data from multiple institutions enhances statistical power but raises significant privacy concerns. Here we introduce FedProt, the first privacy-preserving tool for collaborative differential protein abundance analysis of distributed data, which utilizes federated learning and additive secret sharing. In the absence of a multicenter patient-derived dataset for evaluation, we created two: one across five centers from LFQ E. coli experiments and one across three centers from TMT human serum. Evaluations using these datasets confirm that FedProt achieves accuracy equivalent to DEqMS applied to pooled data, with completely negligible absolute differences no greater than $4 \times 10^{-12}$. In contrast, -log10(p-values) computed by the most accurate meta-analysis methods diverged from the centralized analysis results by up to 25-27. FedProt is available as a web tool with detailed documentation as a FeatureCloud App.
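Additive secret sharing, one of the two ingredients the FedProt abstract names, can be sketched in a few lines. This toy version works over real numbers with hypothetical per-center statistics; a production protocol would operate over a finite field or fixed-point encoding:

```python
import random

def additive_shares(value: float, n_parties: int, rng: random.Random):
    """Split `value` into n_parties random shares that sum back to `value`.
    No single share reveals anything about the input on its own."""
    shares = [rng.uniform(-1.0, 1.0) for _ in range(n_parties - 1)]
    shares.append(value - sum(shares))
    return shares

rng = random.Random(0)
local_stats = [2.5, -1.0, 4.0]  # hypothetical per-center statistics
all_shares = [additive_shares(v, 3, rng) for v in local_stats]
# Summing every share recovers the pooled statistic without any center
# ever disclosing its individual value.
total = sum(sum(s) for s in all_shares)
```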
2311.02594
Chenyu Liu
Chenyu Liu, Yong Jin Kweon and Jun Ding
scBeacon: single-cell biomarker extraction via identifying paired cell clusters across biological conditions with contrastive siamese networks
null
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the breakthroughs in biomarker discovery facilitated by differential gene analysis, challenges remain, particularly at the single-cell level. Traditional methodologies heavily rely on user-supplied cell annotations, focusing on individually expressed data, often neglecting the critical interactions between biological conditions, such as healthy versus diseased states. In response, here we introduce scBeacon, an innovative framework built upon a deep contrastive siamese network. scBeacon pioneers an unsupervised approach, adeptly identifying matched cell populations across varied conditions, enabling a refined differential gene analysis. By utilizing a VQ-VAE framework, a contrastive siamese network, and a greedy iterative strategy, scBeacon effectively pinpoints differential genes that hold potential as key biomarkers. Comprehensive evaluations on a diverse array of datasets validate scBeacon's superiority over existing single-cell differential gene analysis tools. Its precision and adaptability underscore its significant role in enhancing diagnostic accuracy in biomarker discovery. With the emphasis on the importance of biomarkers in diagnosis, scBeacon is positioned to be a pivotal asset in the evolution of personalized medicine and targeted treatments.
[ { "created": "Sun, 5 Nov 2023 08:27:24 GMT", "version": "v1" }, { "created": "Thu, 28 Dec 2023 02:16:32 GMT", "version": "v2" } ]
2023-12-29
[ [ "Liu", "Chenyu", "" ], [ "Kweon", "Yong Jin", "" ], [ "Ding", "Jun", "" ] ]
Despite the breakthroughs in biomarker discovery facilitated by differential gene analysis, challenges remain, particularly at the single-cell level. Traditional methodologies heavily rely on user-supplied cell annotations, focusing on individually expressed data, often neglecting the critical interactions between biological conditions, such as healthy versus diseased states. In response, here we introduce scBeacon, an innovative framework built upon a deep contrastive siamese network. scBeacon pioneers an unsupervised approach, adeptly identifying matched cell populations across varied conditions, enabling a refined differential gene analysis. By utilizing a VQ-VAE framework, a contrastive siamese network, and a greedy iterative strategy, scBeacon effectively pinpoints differential genes that hold potential as key biomarkers. Comprehensive evaluations on a diverse array of datasets validate scBeacon's superiority over existing single-cell differential gene analysis tools. Its precision and adaptability underscore its significant role in enhancing diagnostic accuracy in biomarker discovery. With the emphasis on the importance of biomarkers in diagnosis, scBeacon is positioned to be a pivotal asset in the evolution of personalized medicine and targeted treatments.
2403.13569
Guido Tiana
Francesco Borando and Guido Tiana
Effective model of protein--mediated interactions in chromatin
null
null
null
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-mediated interactions are ubiquitous in the cellular environment, and particularly in the nucleus, where they are responsible for the structuring of chromatin. We show through molecular-dynamics simulations of a polymer surrounded by binders that the strength of the binder-polymer interaction separates an equilibrium from a non-equilibrium regime. In the equilibrium regime, the system can be efficiently described by an effective model in which the binders are traced out. Even in this case, the polymer displays features that are different from those of a standard homopolymer with two-body interactions. We then extend the effective model to the case where the binders cannot be regarded as in equilibrium, and a new phenomenology appears, including local blobs in the polymer. Providing an effective description of the system can be useful in clarifying the fundamental mechanisms governing chromatin structuring.
[ { "created": "Wed, 20 Mar 2024 13:09:10 GMT", "version": "v1" } ]
2024-03-21
[ [ "Borando", "Francesco", "" ], [ "Tiana", "Guido", "" ] ]
Protein-mediated interactions are ubiquitous in the cellular environment, and particularly in the nucleus, where they are responsible for the structuring of chromatin. We show through molecular-dynamics simulations of a polymer surrounded by binders that the strength of the binder-polymer interaction separates an equilibrium from a non-equilibrium regime. In the equilibrium regime, the system can be efficiently described by an effective model in which the binders are traced out. Even in this case, the polymer displays features that are different from those of a standard homopolymer with two-body interactions. We then extend the effective model to the case where the binders cannot be regarded as in equilibrium, and a new phenomenology appears, including local blobs in the polymer. Providing an effective description of the system can be useful in clarifying the fundamental mechanisms governing chromatin structuring.
q-bio/0409006
Kerwyn Huang
Rahul V. Kulkarni, Kerwyn Casey Huang, Morten Kloster, and Ned S. Wingreen
Pattern formation within Escherichia coli: diffusion, membrane attachment, and self-interaction of MinD molecules
4 pages, 3 figures, submitted to PRL
null
10.1103/PhysRevLett.93.228103
null
q-bio.SC
null
In E. coli, accurate cell division depends upon the oscillation of Min proteins from pole to pole. We provide a model for the polar localization of MinD based only on diffusion, a delay for nucleotide exchange, and different rates of attachment to the bare membrane and the occupied membrane. We derive analytically the probability density, and correspondingly the length scale, for MinD attachment zones. Our simple analytical model illustrates the processes giving rise to the observed localization of cellular MinD zones.
[ { "created": "Wed, 1 Sep 2004 21:01:03 GMT", "version": "v1" } ]
2009-11-10
[ [ "Kulkarni", "Rahul V.", "" ], [ "Huang", "Kerwyn Casey", "" ], [ "Kloster", "Morten", "" ], [ "Wingreen", "Ned S.", "" ] ]
In E. coli, accurate cell division depends upon the oscillation of Min proteins from pole to pole. We provide a model for the polar localization of MinD based only on diffusion, a delay for nucleotide exchange, and different rates of attachment to the bare membrane and the occupied membrane. We derive analytically the probability density, and correspondingly the length scale, for MinD attachment zones. Our simple analytical model illustrates the processes giving rise to the observed localization of cellular MinD zones.
2111.04338
Adam Weisser
Adam Weisser
Treatise on Hearing: The Temporal Auditory Imaging Theory Inspired by Optics and Communication
626 pages, 124 figures, 13 tables, 1633 references
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
A new theory of mammalian hearing is presented, which accounts for the auditory image in the midbrain (inferior colliculus) of objects in the acoustical environment of the listener. It is shown that the ear is a temporal imaging system that comprises three transformations of the envelope functions: cochlear group-delay dispersion, cochlear time lensing, and neural group-delay dispersion. These elements are analogous to the optical transformations in vision of diffraction between the object and the eye, spatial lensing by the lens, and second diffraction between the lens and the retina. Unlike the eye, it is established that the human auditory system is naturally defocused, so that coherent stimuli do not react to the defocus, whereas completely incoherent stimuli are impacted by it and may be blurred by design. It is argued that the auditory system can use this differential focusing to enhance or degrade the images of real-world acoustical objects that are partially coherent. The theory is founded on coherence and temporal imaging theories that were adopted from optics. In addition to the imaging transformations, the corresponding inverse-domain modulation transfer functions are derived and interpreted with consideration to the nonuniform neural sampling operation of the auditory nerve. These ideas are used to rigorously initiate the concepts of sharpness and blur in auditory imaging, auditory aberrations, and auditory depth of field. In parallel, ideas from communication theory are used to show that the organ of Corti functions as a multichannel phase-locked loop (PLL) that constitutes the point of entry for auditory phase locking and hence conserves the signal coherence. It provides an anchor for a dual coherent and noncoherent auditory detection in the auditory brain that culminates in auditory accommodation. Implications on hearing impairments are discussed as well.
[ { "created": "Mon, 8 Nov 2021 08:54:39 GMT", "version": "v1" }, { "created": "Tue, 9 Nov 2021 08:18:51 GMT", "version": "v2" }, { "created": "Fri, 12 Nov 2021 18:46:39 GMT", "version": "v3" }, { "created": "Mon, 18 Apr 2022 17:00:58 GMT", "version": "v4" }, { "cre...
2024-06-04
[ [ "Weisser", "Adam", "" ] ]
A new theory of mammalian hearing is presented, which accounts for the auditory image in the midbrain (inferior colliculus) of objects in the acoustical environment of the listener. It is shown that the ear is a temporal imaging system that comprises three transformations of the envelope functions: cochlear group-delay dispersion, cochlear time lensing, and neural group-delay dispersion. These elements are analogous to the optical transformations in vision of diffraction between the object and the eye, spatial lensing by the lens, and second diffraction between the lens and the retina. Unlike the eye, it is established that the human auditory system is naturally defocused, so that coherent stimuli do not react to the defocus, whereas completely incoherent stimuli are impacted by it and may be blurred by design. It is argued that the auditory system can use this differential focusing to enhance or degrade the images of real-world acoustical objects that are partially coherent. The theory is founded on coherence and temporal imaging theories that were adopted from optics. In addition to the imaging transformations, the corresponding inverse-domain modulation transfer functions are derived and interpreted with consideration to the nonuniform neural sampling operation of the auditory nerve. These ideas are used to rigorously initiate the concepts of sharpness and blur in auditory imaging, auditory aberrations, and auditory depth of field. In parallel, ideas from communication theory are used to show that the organ of Corti functions as a multichannel phase-locked loop (PLL) that constitutes the point of entry for auditory phase locking and hence conserves the signal coherence. It provides an anchor for a dual coherent and noncoherent auditory detection in the auditory brain that culminates in auditory accommodation. Implications on hearing impairments are discussed as well.
1912.10641
Hiroki Ohta
Azusa Tanaka, Yasuhiro Ishitsuka, Hiroki Ohta, Akihiro Fujimoto, Jun-ichirou Yasunaga, Masao Matsuoka
Systematic clustering algorithm for chromatin accessibility data and its application to hematopoietic cells
24 pages, 17 figures
PLOS Comput. Biol. 16(11), e1008422 (2020)
10.1371/journal.pcbi.1008422
null
q-bio.GN cond-mat.stat-mech q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The huge amount of data acquired by high-throughput sequencing requires data reduction for effective analysis. Here we give a clustering algorithm for genome-wide open chromatin data using a new data reduction method. This method regards the genome as a string of $1$s and $0$s based on a set of peaks and calculates the Hamming distances between the strings. This algorithm with the systematically optimized set of peaks enables us to quantitatively evaluate differences between samples of hematopoietic cells and classify cell types, potentially leading to a better understanding of leukemia pathogenesis.
[ { "created": "Mon, 23 Dec 2019 06:34:36 GMT", "version": "v1" }, { "created": "Thu, 26 Nov 2020 19:00:27 GMT", "version": "v2" } ]
2021-01-27
[ [ "Tanaka", "Azusa", "" ], [ "Ishitsuka", "Yasuhiro", "" ], [ "Ohta", "Hiroki", "" ], [ "Fujimoto", "Akihiro", "" ], [ "Yasunaga", "Jun-ichirou", "" ], [ "Matsuoka", "Masao", "" ] ]
The huge amount of data acquired by high-throughput sequencing requires data reduction for effective analysis. Here we give a clustering algorithm for genome-wide open chromatin data using a new data reduction method. This method regards the genome as a string of $1$s and $0$s based on a set of peaks and calculates the Hamming distances between the strings. This algorithm with the systematically optimized set of peaks enables us to quantitatively evaluate differences between samples of hematopoietic cells and classify cell types, potentially leading to a better understanding of leukemia pathogenesis.
1603.08386
Chantriolnt-Andreas Kapourani
Chantriolnt-Andreas Kapourani and Guido Sanguinetti
Higher order methylation features for clustering and prediction in epigenomic studies
12 pages, 5 figures
Bioinformatics (2016) 32 (17): i405-i412
10.1093/bioinformatics/btw432
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: DNA methylation is an intensely studied epigenetic mark, yet its functional role is incompletely understood. Attempts to quantitatively associate average DNA methylation to gene expression yield poor correlations outside of the well-understood methylation-switch at CpG islands. Results: Here we use probabilistic machine learning to extract higher order features associated with the methylation profile across a defined region. These features precisely quantitate notions of the shape of a methylation profile, capturing spatial correlations in DNA methylation across genomic regions. Using these higher order features across promoter-proximal regions, we are able to construct a powerful machine learning predictor of gene expression, significantly improving upon the predictive power of average DNA methylation levels. Furthermore, we can use higher order features to cluster promoter-proximal regions, showing that five major patterns of methylation occur at promoters across different cell lines, and we provide evidence that methylation beyond CpG islands may be related to regulation of gene expression. Our results support previous reports of a functional role of spatial correlations in methylation patterns, and provide a means to quantitate such features for downstream analyses. Availability: https://github.com/andreaskapou/BPRMeth
[ { "created": "Mon, 28 Mar 2016 14:24:13 GMT", "version": "v1" } ]
2016-11-17
[ [ "Kapourani", "Chantriolnt-Andreas", "" ], [ "Sanguinetti", "Guido", "" ] ]
Motivation: DNA methylation is an intensely studied epigenetic mark, yet its functional role is incompletely understood. Attempts to quantitatively associate average DNA methylation to gene expression yield poor correlations outside of the well-understood methylation-switch at CpG islands. Results: Here we use probabilistic machine learning to extract higher order features associated with the methylation profile across a defined region. These features precisely quantitate notions of the shape of a methylation profile, capturing spatial correlations in DNA methylation across genomic regions. Using these higher order features across promoter-proximal regions, we are able to construct a powerful machine learning predictor of gene expression, significantly improving upon the predictive power of average DNA methylation levels. Furthermore, we can use higher order features to cluster promoter-proximal regions, showing that five major patterns of methylation occur at promoters across different cell lines, and we provide evidence that methylation beyond CpG islands may be related to regulation of gene expression. Our results support previous reports of a functional role of spatial correlations in methylation patterns, and provide a means to quantitate such features for downstream analyses. Availability: https://github.com/andreaskapou/BPRMeth
0911.4393
Andrea Cavagna
Andrea Cavagna, Alessio Cimarelli, Irene Giardina, Giorgio Parisi, Raffaele Santagati, Fabio Stefanini, Massimiliano Viale
Scale-free correlations in bird flocks
Submitted to PNAS
Proceedings of the National Academy of Sciences 107 (26), 11865-11870 (2010)
10.1073/pnas.1005766107
null
q-bio.PE cond-mat.stat-mech nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From bird flocks to fish schools, animal groups often seem to react to environmental perturbations as if of one mind. Most studies in collective animal behaviour have aimed to understand how a globally ordered state may emerge from simple behavioural rules. Less effort has been devoted to understanding the origin of collective response, namely the way the group as a whole reacts to its environment. Yet collective response is the adaptive key to survival, especially when strong predatory pressure is present. Here we argue that collective response in animal groups is achieved through scale-free behavioural correlations. By reconstructing the three-dimensional position and velocity of individual birds in large flocks of starlings, we measured to what extent the velocity fluctuations of different birds are correlated to each other. We found that the range of such spatial correlation does not have a constant value, but it scales with the linear size of the flock. This result indicates that behavioural correlations are scale-free: the change in the behavioural state of one animal affects and is affected by that of all other animals in the group, no matter how large the group is. Scale-free correlations extend maximally the effective perception range of the individuals, thus compensating for the short-range nature of the direct inter-individual interaction and enhancing global response to perturbations. Our results suggest that flocks behave as critical systems, poised to respond maximally to environmental perturbations.
[ { "created": "Mon, 23 Nov 2009 13:00:16 GMT", "version": "v1" } ]
2014-10-10
[ [ "Cavagna", "Andrea", "" ], [ "Cimarelli", "Alessio", "" ], [ "Giardina", "Irene", "" ], [ "Parisi", "Giorgio", "" ], [ "Santagati", "Raffaele", "" ], [ "Stefanini", "Fabio", "" ], [ "Viale", "Massimiliano", ...
From bird flocks to fish schools, animal groups often seem to react to environmental perturbations as if of one mind. Most studies in collective animal behaviour have aimed to understand how a globally ordered state may emerge from simple behavioural rules. Less effort has been devoted to understanding the origin of collective response, namely the way the group as a whole reacts to its environment. Yet collective response is the adaptive key to survival, especially when strong predatory pressure is present. Here we argue that collective response in animal groups is achieved through scale-free behavioural correlations. By reconstructing the three-dimensional position and velocity of individual birds in large flocks of starlings, we measured to what extent the velocity fluctuations of different birds are correlated to each other. We found that the range of such spatial correlation does not have a constant value, but it scales with the linear size of the flock. This result indicates that behavioural correlations are scale-free: the change in the behavioural state of one animal affects and is affected by that of all other animals in the group, no matter how large the group is. Scale-free correlations extend maximally the effective perception range of the individuals, thus compensating for the short-range nature of the direct inter-individual interaction and enhancing global response to perturbations. Our results suggest that flocks behave as critical systems, poised to respond maximally to environmental perturbations.
0705.4062
Eugene Shakhnovich
Konstantin Zeldovich, Peiqiu Chen, Eugene Shakhnovich
The Hypercube of Life: How Protein Stability Imposes Limits on Organism Complexity and Speed of Molecular Evolution
null
null
null
null
q-bio.BM q-bio.PE
null
Classical population genetics a priori assigns fitness to alleles without considering molecular or functional properties of proteins that these alleles encode. Here we study population dynamics in a model where fitness can be inferred from physical properties of proteins under a physiological assumption that loss of stability of any protein encoded by an essential gene confers a lethal phenotype. Accumulation of mutations in organisms containing Gamma genes can then be represented as diffusion within the Gamma dimensional hypercube with adsorbing boundaries which are determined, in each dimension, by loss of protein stability and, at higher stability, by lack of protein sequences. Solving the diffusion equation whose parameters are derived from the data on point mutations in proteins, we determine a universal distribution of protein stabilities, in agreement with existing data. The theory provides a fundamental relation between mutation rate, maximal genome size and thermodynamic response of proteins to point mutations. It establishes a universal speed limit on the rate of molecular evolution by predicting that populations go extinct (via lethal mutagenesis) when mutation rate exceeds approximately 6 mutations per essential part of genome per replication for mesophilic organisms and 1 to 2 mutations per genome per replication for thermophilic ones. Further, our results suggest that in the absence of error correction, modern RNA viruses and primordial genomes must necessarily be very short. Several RNA viruses function close to the evolutionary speed limit while error correction mechanisms used by DNA viruses and non-mutant strains of bacteria featuring various genome lengths and mutation rates have brought these organisms universally about 1000 fold below the natural speed limit.
[ { "created": "Mon, 28 May 2007 16:00:37 GMT", "version": "v1" } ]
2007-05-29
[ [ "Zeldovich", "Konstantin", "" ], [ "Chen", "Peiqiu", "" ], [ "Shakhnovich", "Eugene", "" ] ]
Classical population genetics a priori assigns fitness to alleles without considering molecular or functional properties of proteins that these alleles encode. Here we study population dynamics in a model where fitness can be inferred from physical properties of proteins under a physiological assumption that loss of stability of any protein encoded by an essential gene confers a lethal phenotype. Accumulation of mutations in organisms containing Gamma genes can then be represented as diffusion within the Gamma dimensional hypercube with adsorbing boundaries which are determined, in each dimension, by loss of protein stability and, at higher stability, by lack of protein sequences. Solving the diffusion equation whose parameters are derived from the data on point mutations in proteins, we determine a universal distribution of protein stabilities, in agreement with existing data. The theory provides a fundamental relation between mutation rate, maximal genome size and thermodynamic response of proteins to point mutations. It establishes a universal speed limit on the rate of molecular evolution by predicting that populations go extinct (via lethal mutagenesis) when mutation rate exceeds approximately 6 mutations per essential part of genome per replication for mesophilic organisms and 1 to 2 mutations per genome per replication for thermophilic ones. Further, our results suggest that in the absence of error correction, modern RNA viruses and primordial genomes must necessarily be very short. Several RNA viruses function close to the evolutionary speed limit while error correction mechanisms used by DNA viruses and non-mutant strains of bacteria featuring various genome lengths and mutation rates have brought these organisms universally about 1000 fold below the natural speed limit.
2112.05437
Pratyush Kollepara
Pratyush K. Kollepara and Joel C. Miller
Questioning the use of global estimates of reproduction numbers, with implications for policy
5 pages, 3 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The basic reproduction number, $R_0$, is an important and widely used concept in the study of infectious diseases. We briefly review the recent trend of calculating the average of various $R_0$ estimates in systematic reviews aimed at estimating the basic reproduction number of SARS-CoV-2, and discuss the drawbacks and implications of using such averaging methods. Additionally, we argue that even a theoretically grounded approach such as the next generation matrix could have practical impediments in its use. More generally, the practice of associating an infectious disease with a single value of $R_0$ is problematic, when the disease can, in fact, have different reproduction numbers in various populations.
[ { "created": "Fri, 10 Dec 2021 10:38:57 GMT", "version": "v1" } ]
2021-12-13
[ [ "Kollepara", "Pratyush K.", "" ], [ "Miller", "Joel C.", "" ] ]
The basic reproduction number, $R_0$, is an important and widely used concept in the study of infectious diseases. We briefly review the recent trend of calculating the average of various $R_0$ estimates in systematic reviews aimed at estimating the basic reproduction number of SARS-CoV-2, and discuss the drawbacks and implications of using such averaging methods. Additionally, we argue that even a theoretically grounded approach such as the next generation matrix could have practical impediments in its use. More generally, the practice of associating an infectious disease with a single value of $R_0$ is problematic, when the disease can, in fact, have different reproduction numbers in various populations.
2112.13021
Reza Sameni
Reza Sameni
Noninvasive Fetal Electrocardiography: Models, Technologies and Algorithms
null
In Innovative Technologies and Signal Processing in Perinatal Medicine (pp. 99-146). Springer International Publishing (2020)
10.1007/978-3-030-54403-4_5
null
q-bio.QM cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fetal electrocardiogram (fECG) was first recorded from the maternal abdominal surface in the early 1900s. During the past fifty years, the most advanced electronics technologies and signal processing algorithms have been used to convert noninvasive fetal electrocardiography into a reliable technology for fetal cardiac monitoring. In this chapter, the major signal processing techniques, which have been developed for the modeling, extraction and analysis of the fECG from noninvasive maternal abdominal recordings are reviewed and compared with one another in detail. The major topics of the chapter include: 1) the electrophysiology of the fECG from the signal processing viewpoint, 2) the mathematical model of the maternal volume conduction media and the waveform models of the fECG acquired from body surface leads, 3) the signal acquisition requirements, 4) model-based techniques for fECG noise and interference cancellation, including adaptive filters and semi-blind source separation techniques, and 5) recent algorithmic advances for fetal motion tracking and online fECG extraction from a small number of channels.
[ { "created": "Fri, 24 Dec 2021 10:16:23 GMT", "version": "v1" } ]
2021-12-28
[ [ "Sameni", "Reza", "" ] ]
The fetal electrocardiogram (fECG) was first recorded from the maternal abdominal surface in the early 1900s. During the past fifty years, the most advanced electronics technologies and signal processing algorithms have been used to convert noninvasive fetal electrocardiography into a reliable technology for fetal cardiac monitoring. In this chapter, the major signal processing techniques, which have been developed for the modeling, extraction and analysis of the fECG from noninvasive maternal abdominal recordings are reviewed and compared with one another in detail. The major topics of the chapter include: 1) the electrophysiology of the fECG from the signal processing viewpoint, 2) the mathematical model of the maternal volume conduction media and the waveform models of the fECG acquired from body surface leads, 3) the signal acquisition requirements, 4) model-based techniques for fECG noise and interference cancellation, including adaptive filters and semi-blind source separation techniques, and 5) recent algorithmic advances for fetal motion tracking and online fECG extraction from a small number of channels.
q-bio/0604007
Dmitry Kondrashov
Dmitry A. Kondrashov, Qiang Cui, and George N. Phillips Jr
Optimization and evaluation of a coarse-grained model of protein motion using X-ray crystal data
18 pages, 4 figures, 1 supplemental file (cnm_si.tex)
Biophysical Journal, vol 91 (8), 2006
10.1529/biophysj.106.085894
null
q-bio.BM
null
Simple coarse-grained models, such as the Gaussian Network Model, have been shown to capture some of the features of equilibrium protein dynamics. We extend this model by using atomic contacts to define residue interactions and introducing more than one interaction parameter between residues. We use B-factors from 98 ultra-high resolution X-ray crystal structures to optimize the interaction parameters. The average correlation between GNM fluctuation predictions and the B-factors is 0.64 for the data set, consistent with a previous large-scale study. By separating residue interactions into covalent and noncovalent, we achieve an average correlation of 0.74, and addition of ligands and cofactors further improves the correlation to 0.75. However, further separating the noncovalent interactions into nonpolar, polar, and mixed yields no significant improvement. The addition of simple chemical information results in better prediction quality without increasing the size of the coarse-grained model.
[ { "created": "Thu, 6 Apr 2006 18:56:36 GMT", "version": "v1" } ]
2009-11-13
[ [ "Kondrashov", "Dmitry A.", "" ], [ "Cui", "Qiang", "" ], [ "Phillips", "George N.", "Jr" ] ]
Simple coarse-grained models, such as the Gaussian Network Model, have been shown to capture some of the features of equilibrium protein dynamics. We extend this model by using atomic contacts to define residue interactions and introducing more than one interaction parameter between residues. We use B-factors from 98 ultra-high resolution X-ray crystal structures to optimize the interaction parameters. The average correlation between GNM fluctuation predictions and the B-factors is 0.64 for the data set, consistent with a previous large-scale study. By separating residue interactions into covalent and noncovalent, we achieve an average correlation of 0.74, and addition of ligands and cofactors further improves the correlation to 0.75. However, further separating the noncovalent interactions into nonpolar, polar, and mixed yields no significant improvement. The addition of simple chemical information results in better prediction quality without increasing the size of the coarse-grained model.
2102.03682
Mateusz Chwastyk
Mateusz Chwastyk, Marek Cieplak
Conformational Biases of {\alpha}-Synuclein and Formation of Transient Knots
28 pages, 9 figures, 1 table
J. Phys. Chem. B 2020, 124, 1, 11-19
10.1021/acs.jpcb.9b08481
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
We study local conformational biases in the dynamics of {\alpha}-synuclein by using all-atom simulations with explicit and implicit solvents. The biases are related to the frequency of the specific contact formation. In both approaches, the protein is intrinsically disordered, and its strongest bias is to make bend and turn local structures. The explicit-solvent conformations can be substantially more extended which allows for formation of transient trefoil knots, both deep and shallow, that may last for up to 5 {\mu}s. The two-chain self-association events, both short- and long-lived, are dominated by formation of contacts in the central part of the sequence. This part tends to form helices when bound to a micelle.
[ { "created": "Sat, 6 Feb 2021 23:21:28 GMT", "version": "v1" } ]
2021-02-09
[ [ "Chwastyk", "Mateusz", "" ], [ "Cieplak", "Marek", "" ] ]
We study local conformational biases in the dynamics of {\alpha}-synuclein by using all-atom simulations with explicit and implicit solvents. The biases are related to the frequency of the specific contact formation. In both approaches, the protein is intrinsically disordered, and its strongest bias is to make bend and turn local structures. The explicit-solvent conformations can be substantially more extended which allows for formation of transient trefoil knots, both deep and shallow, that may last for up to 5 {\mu}s. The two-chain self-association events, both short- and long-lived, are dominated by formation of contacts in the central part of the sequence. This part tends to form helices when bound to a micelle.
1504.00817
Alexander Iomin
A. Iomin
Continuous Time Random Walk and Migration Proliferation Dichotomy
null
null
10.1142/9789814730266_0004
null
q-bio.CB cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A theory of fractional kinetics of glial cancer cells is presented. A role of the migration-proliferation dichotomy in the fractional cancer cell dynamics in the outer-invasive zone is discussed and explained in the framework of a continuous time random walk. The main suggested model is based on the construction of a 3D comb model, where the migration-proliferation dichotomy becomes naturally apparent and the outer-invasive zone of glioma cancer is considered as a fractal composite with a fractal dimension $\frD<3$.
[ { "created": "Fri, 3 Apr 2015 11:37:35 GMT", "version": "v1" } ]
2016-03-23
[ [ "Iomin", "A.", "" ] ]
A theory of fractional kinetics of glial cancer cells is presented. A role of the migration-proliferation dichotomy in the fractional cancer cell dynamics in the outer-invasive zone is discussed and explained in the framework of a continuous time random walk. The main suggested model is based on the construction of a 3D comb model, where the migration-proliferation dichotomy becomes naturally apparent and the outer-invasive zone of glioma cancer is considered as a fractal composite with a fractal dimension $\frD<3$.
2405.09595
Noel Malod-Dognin
Natasa Przulj and Noel Malod-Dognin
Simplicity within biological complexity
29 pages, 4 figures
null
null
null
q-bio.OT cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Heterogeneous, interconnected, systems-level, molecular data have become increasingly available and key in precision medicine. We need to utilize them to better stratify patients into risk groups, discover new biomarkers and targets, repurpose known and discover new drugs to personalize medical treatment. Existing methodologies are limited and a paradigm shift is needed to achieve quantitative and qualitative breakthroughs. In this perspective paper, we survey the literature and argue for the development of a comprehensive, general framework for embedding of multi-scale molecular network data that would enable their explainable exploitation in precision medicine in linear time. Network embedding methods map nodes to points in low-dimensional space, so that proximity in the learned space reflects the network's topology-function relationships. They have recently achieved unprecedented performance on hard problems of utilizing few omic data in various biomedical applications. However, research thus far has been limited to special variants of the problems and data, with the performance depending on the underlying topology-function network biology hypotheses, the biomedical applications and evaluation metrics. The availability of multi-omic data, modern graph embedding paradigms and compute power call for the creation and training of efficient, explainable and controllable models, having no potentially dangerous, unexpected behaviour, that make a qualitative breakthrough. We propose to develop a general, comprehensive embedding framework for multi-omic network data, from models to efficient and scalable software implementation, and to apply it to biomedical informatics. It will lead to a paradigm shift in computational and biomedical understanding of data and diseases that will open up ways to solving some of the major bottlenecks in precision medicine and other domains.
[ { "created": "Wed, 15 May 2024 13:32:45 GMT", "version": "v1" } ]
2024-05-17
[ [ "Przulj", "Natasa", "" ], [ "Malod-Dognin", "Noel", "" ] ]
Heterogeneous, interconnected, systems-level, molecular data have become increasingly available and key in precision medicine. We need to utilize them to better stratify patients into risk groups, discover new biomarkers and targets, repurpose known and discover new drugs to personalize medical treatment. Existing methodologies are limited and a paradigm shift is needed to achieve quantitative and qualitative breakthroughs. In this perspective paper, we survey the literature and argue for the development of a comprehensive, general framework for embedding of multi-scale molecular network data that would enable their explainable exploitation in precision medicine in linear time. Network embedding methods map nodes to points in low-dimensional space, so that proximity in the learned space reflects the network's topology-function relationships. They have recently achieved unprecedented performance on hard problems of utilizing few omic data in various biomedical applications. However, research thus far has been limited to special variants of the problems and data, with the performance depending on the underlying topology-function network biology hypotheses, the biomedical applications and evaluation metrics. The availability of multi-omic data, modern graph embedding paradigms and compute power call for the creation and training of efficient, explainable and controllable models, having no potentially dangerous, unexpected behaviour, that make a qualitative breakthrough. We propose to develop a general, comprehensive embedding framework for multi-omic network data, from models to efficient and scalable software implementation, and to apply it to biomedical informatics. It will lead to a paradigm shift in computational and biomedical understanding of data and diseases that will open up ways to solving some of the major bottlenecks in precision medicine and other domains.
1110.5225
Tomas Tokar
Tom\'a\v{s} Tok\'ar and Jozef Uli\v{c}n\'y
Computational study of the mechanism of Bcl-2 apoptotic switch
null
Tokar T., Ulicny J., Computational study of the mechanism of Bcl-2 apoptotic switch, Physica A: Statistical Mechanics and its Applications, Volume 391, Issue 23, 1 December 2012, Pages 6212-6225
10.1016/j.physa.2012.07.006
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Programmed cell death - apoptosis - is one of the most studied biological phenomena of recent years. The apoptotic regulatory network contains several significant control points, including probably the most important one - the Bcl-2 apoptotic switch. There are two proposed hypotheses regarding its internal working - the indirect activation and direct activation models. Since these hypotheses form the extreme poles of a full continuum of intermediate models, we have constructed a more general model with these two models as extreme cases. By studying the relationship between model parameters and steady-state response ultrasensitivity, we have found an optimal interaction pattern which reproduces the behavior of the Bcl-2 apoptotic switch. Our results show that stimulus-response ultrasensitivity is negatively related to spontaneous activation of Bcl-2 effectors, a subgroup of Bcl-2 proteins. We found that ultrasensitivity requires effector activation mediated by another subgroup of Bcl-2 proteins, the activators. We have shown that the auto-activation of effectors forms an ultrasensitivity-enhancing feedback loop only if mediated by monomers, not by oligomers. Robustness analysis revealed that the interaction pattern proposed by the direct activation hypothesis is able to conserve the stimulus-response dependence and preserve ultrasensitivity despite large changes of its internal parameters. This ability is strongly reduced for the intermediate to indirect side of the models. Computer simulation of the more general model presented here suggests that stimulus-response ultrasensitivity is an emergent property of the direct activation model that cannot originate within the indirect activation model. Introduction of indirect-model-specific interactions does not provide a better explanation of Bcl-2 functioning compared to the direct model.
[ { "created": "Mon, 24 Oct 2011 13:15:07 GMT", "version": "v1" }, { "created": "Thu, 20 Dec 2012 12:32:33 GMT", "version": "v2" } ]
2015-05-30
[ [ "Tokár", "Tomáš", "" ], [ "Uličný", "Jozef", "" ] ]
Programmed cell death - apoptosis - is one of the most studied biological phenomena of recent years. The apoptotic regulatory network contains several significant control points, including probably the most important one - the Bcl-2 apoptotic switch. There are two proposed hypotheses regarding its internal working - the indirect activation and direct activation models. Since these hypotheses form the extreme poles of a full continuum of intermediate models, we have constructed a more general model with these two models as extreme cases. By studying the relationship between model parameters and steady-state response ultrasensitivity, we have found an optimal interaction pattern which reproduces the behavior of the Bcl-2 apoptotic switch. Our results show that stimulus-response ultrasensitivity is negatively related to spontaneous activation of Bcl-2 effectors, a subgroup of Bcl-2 proteins. We found that ultrasensitivity requires effector activation mediated by another subgroup of Bcl-2 proteins, the activators. We have shown that the auto-activation of effectors forms an ultrasensitivity-enhancing feedback loop only if mediated by monomers, not by oligomers. Robustness analysis revealed that the interaction pattern proposed by the direct activation hypothesis is able to conserve the stimulus-response dependence and preserve ultrasensitivity despite large changes of its internal parameters. This ability is strongly reduced for the intermediate to indirect side of the models. Computer simulation of the more general model presented here suggests that stimulus-response ultrasensitivity is an emergent property of the direct activation model that cannot originate within the indirect activation model. Introduction of indirect-model-specific interactions does not provide a better explanation of Bcl-2 functioning compared to the direct model.
q-bio/0412011
Hernan Garcia
Lacramioara Bintu, Nicolas E. Buchler, Hernan G. Garcia, Ulrich Gerland, Terence Hwa, Jane' Kondev, Thomas Kuhlman and Rob Phillips
Transcriptional Regulation by the Numbers 2: Applications
15 pages and 9 figures in PDF format
null
null
null
q-bio.MN q-bio.QM
null
With the increasing amount of experimental data on gene expression and regulation, there is a growing need for quantitative models to describe the data and relate them to the different contexts. The thermodynamic models reviewed in the preceding paper provide a useful framework for the quantitative analysis of bacterial transcription regulation. We review a number of well-characterized bacterial promoters that are regulated by one or two species of transcription factors, and apply the thermodynamic framework to these promoters. We show that the framework allows one to quantify vastly different forms of gene expression using a few parameters. As such, it provides a compact description useful for higher-level studies, e.g., of genetic networks, without the need to invoke the biochemical details of every component. Moreover, it can be used to generate hypotheses on the likely mechanisms of transcriptional control.
[ { "created": "Sun, 5 Dec 2004 22:47:57 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bintu", "Lacramioara", "" ], [ "Buchler", "Nicolas E.", "" ], [ "Garcia", "Hernan G.", "" ], [ "Gerland", "Ulrich", "" ], [ "Hwa", "Terence", "" ], [ "Kondev", "Jane'", "" ], [ "Kuhlman", "Thomas", "" ],...
With the increasing amount of experimental data on gene expression and regulation, there is a growing need for quantitative models to describe the data and relate them to the different contexts. The thermodynamic models reviewed in the preceding paper provide a useful framework for the quantitative analysis of bacterial transcription regulation. We review a number of well-characterized bacterial promoters that are regulated by one or two species of transcription factors, and apply the thermodynamic framework to these promoters. We show that the framework allows one to quantify vastly different forms of gene expression using a few parameters. As such, it provides a compact description useful for higher-level studies, e.g., of genetic networks, without the need to invoke the biochemical details of every component. Moreover, it can be used to generate hypotheses on the likely mechanisms of transcriptional control.
1911.05621
Weikaixin Kong
Miaomiao Gao, Weikaixin Kong, Zhuo Huang and Zhengwei Xie
Identification of key genes related to the mechanism and prognosis of lung squamous cell carcinoma using bioinformatics analysis
This work was supported by National key research and development program of China (2018YFA0900200), National Natural Science Foundation of China Grants (31771519, 31871083) and Beijing Natural Science Foundation (5182012, 7182087), the Ministry of Science and Technology of China Grant 2015CB559200. Funding for open access charge: National key research and development program of China
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives Lung squamous cell carcinoma (LUSC) is often diagnosed at an advanced stage and has a poor prognosis. The mechanisms of its pathogenesis and prognosis require urgent elucidation. This study was performed to screen potential biomarkers related to the occurrence, development and prognosis of LUSC to reveal unknown physiological and pathological processes. Materials and Methods Using bioinformatics analysis, the lung squamous cell carcinoma microarray datasets from the GEO and TCGA databases were analyzed to identify differentially expressed genes (DEGs). Furthermore, PPI and WGCNA network analysis were integrated to identify the key genes closely related to the process of LUSC development. In addition, survival analysis was performed to achieve a prognostic model that accomplished a high level of prediction accuracy. Results and Conclusion Eighty-five up-regulated and 39 down-regulated genes were identified, on which functional and pathway enrichment analysis was conducted. GO analysis demonstrated that up-regulated genes were principally enriched in epidermal development and DNA unwinding in DNA replication. Down-regulated genes were mainly involved in cell adhesion, signal transduction and positive regulation of inflammatory response. After PPI and WGCNA network analysis, eight genes, including AURKA, RAD51, TTK, AURKB, CCNA2, TPX2, KPNA2 and KIF23, were found to play a vital role in LUSC development. The prognostic model contained 20 genes, 18 of which were detrimental to prognosis. The AUC of the established prognostic model for predicting the survival of patients at 1, 3, and 5 years was 0.828, 0.826 and 0.824, respectively. To conclude, this study identified a number of biomarkers of significant interest for additional investigation of the therapies and methods of prognosis of lung squamous cell carcinoma.
[ { "created": "Wed, 13 Nov 2019 17:01:55 GMT", "version": "v1" } ]
2019-11-14
[ [ "Gao", "Miaomiao", "" ], [ "Kong", "Weikaixin", "" ], [ "Huang", "Zhuo", "" ], [ "Xie", "Zhengwei", "" ] ]
Objectives Lung squamous cell carcinoma (LUSC) is often diagnosed at an advanced stage and has a poor prognosis. The mechanisms of its pathogenesis and prognosis require urgent elucidation. This study was performed to screen potential biomarkers related to the occurrence, development and prognosis of LUSC to reveal unknown physiological and pathological processes. Materials and Methods Using bioinformatics analysis, the lung squamous cell carcinoma microarray datasets from the GEO and TCGA databases were analyzed to identify differentially expressed genes (DEGs). Furthermore, PPI and WGCNA network analysis were integrated to identify the key genes closely related to the process of LUSC development. In addition, survival analysis was performed to achieve a prognostic model that accomplished a high level of prediction accuracy. Results and Conclusion Eighty-five up-regulated and 39 down-regulated genes were identified, on which functional and pathway enrichment analysis was conducted. GO analysis demonstrated that up-regulated genes were principally enriched in epidermal development and DNA unwinding in DNA replication. Down-regulated genes were mainly involved in cell adhesion, signal transduction and positive regulation of inflammatory response. After PPI and WGCNA network analysis, eight genes, including AURKA, RAD51, TTK, AURKB, CCNA2, TPX2, KPNA2 and KIF23, were found to play a vital role in LUSC development. The prognostic model contained 20 genes, 18 of which were detrimental to prognosis. The AUC of the established prognostic model for predicting the survival of patients at 1, 3, and 5 years was 0.828, 0.826 and 0.824, respectively. To conclude, this study identified a number of biomarkers of significant interest for additional investigation of the therapies and methods of prognosis of lung squamous cell carcinoma.
2307.12505
Kanika Bansal
ItaloIvo Lima Dias Pinto, Javier Omar Garcia, Kanika Bansal
Optimizing parameter search for community detection in time evolving networks of complex systems
28 pages, 7 figures
null
null
null
q-bio.NC nlin.AO physics.data-an
http://creativecommons.org/licenses/by/4.0/
Network representations have been effectively employed to analyze complex systems across various areas and applications, leading to the development of network science as a core tool to study systems with multiple components and complex interactions. There is a growing interest in understanding the temporal dynamics of complex networks to decode the underlying dynamic processes through the temporal changes in network structure. Community detection algorithms, which are specialized clustering algorithms, have been instrumental in studying these temporal changes. They work by grouping nodes into communities based on the structure and intensity of network connections over time aiming to maximize modularity of the network partition. However, the performance of these algorithms is highly influenced by the selection of resolution parameters of the modularity function used, which dictate the scale of the represented network, both in size of communities and the temporal resolution of dynamic structure. The selection of these parameters has often been subjective and heavily reliant on the characteristics of the data used to create the network structure. Here, we introduce a method to objectively determine the values of the resolution parameters based on the elements of self-organization. We propose two key approaches: (1) minimization of the biases in spatial scale network characterization and (2) maximization of temporal scale-freeness. We demonstrate the effectiveness of these approaches using benchmark network structures as well as real-world datasets. To implement our method, we also provide an automated parameter selection software package that can be applied to a wide range of complex systems.
[ { "created": "Mon, 24 Jul 2023 03:38:34 GMT", "version": "v1" } ]
2023-07-25
[ [ "Pinto", "ItaloIvo Lima Dias", "" ], [ "Garcia", "Javier Omar", "" ], [ "Bansal", "Kanika", "" ] ]
Network representations have been effectively employed to analyze complex systems across various areas and applications, leading to the development of network science as a core tool to study systems with multiple components and complex interactions. There is a growing interest in understanding the temporal dynamics of complex networks to decode the underlying dynamic processes through the temporal changes in network structure. Community detection algorithms, which are specialized clustering algorithms, have been instrumental in studying these temporal changes. They work by grouping nodes into communities based on the structure and intensity of network connections over time aiming to maximize modularity of the network partition. However, the performance of these algorithms is highly influenced by the selection of resolution parameters of the modularity function used, which dictate the scale of the represented network, both in size of communities and the temporal resolution of dynamic structure. The selection of these parameters has often been subjective and heavily reliant on the characteristics of the data used to create the network structure. Here, we introduce a method to objectively determine the values of the resolution parameters based on the elements of self-organization. We propose two key approaches: (1) minimization of the biases in spatial scale network characterization and (2) maximization of temporal scale-freeness. We demonstrate the effectiveness of these approaches using benchmark network structures as well as real-world datasets. To implement our method, we also provide an automated parameter selection software package that can be applied to a wide range of complex systems.
1909.02093
John Halloran
John T. Halloran and David M. Rocke
Gradients of Generative Models for Improved Discriminative Analysis of Tandem Mass Spectra
13 pages. A partitioned version of this appeared in NIPS 2017
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tandem mass spectrometry (MS/MS) is a high-throughput technology used to identify the proteins in a complex biological sample, such as a drop of blood. A collection of spectra is generated at the output of the process, each spectrum of which is representative of a peptide (protein subsequence) present in the original complex sample. In this work, we leverage the log-likelihood gradients of generative models to improve the identification of such spectra. In particular, we show that the gradient of a recently proposed dynamic Bayesian network (DBN) may be naturally employed by a kernel-based discriminative classifier. The resulting Fisher kernel substantially improves upon recent attempts to combine generative and discriminative models for post-processing analysis, outperforming all other methods on the evaluated datasets. We extend the improved accuracy offered by the Fisher kernel framework to other search algorithms by introducing Theseus, a DBN representing a large number of widely used MS/MS scoring functions. Furthermore, with gradient ascent and max-product inference at hand, we use Theseus to learn model parameters without any supervision.
[ { "created": "Wed, 4 Sep 2019 20:29:04 GMT", "version": "v1" } ]
2019-09-06
[ [ "Halloran", "John T.", "" ], [ "Rocke", "David M.", "" ] ]
Tandem mass spectrometry (MS/MS) is a high-throughput technology used to identify the proteins in a complex biological sample, such as a drop of blood. A collection of spectra is generated at the output of the process, each spectrum of which is representative of a peptide (protein subsequence) present in the original complex sample. In this work, we leverage the log-likelihood gradients of generative models to improve the identification of such spectra. In particular, we show that the gradient of a recently proposed dynamic Bayesian network (DBN) may be naturally employed by a kernel-based discriminative classifier. The resulting Fisher kernel substantially improves upon recent attempts to combine generative and discriminative models for post-processing analysis, outperforming all other methods on the evaluated datasets. We extend the improved accuracy offered by the Fisher kernel framework to other search algorithms by introducing Theseus, a DBN representing a large number of widely used MS/MS scoring functions. Furthermore, with gradient ascent and max-product inference at hand, we use Theseus to learn model parameters without any supervision.
1209.2975
Emily Wall
Emily Wall, Frederic Guichard, and Antony R. Humphries
Synchronization in ecological systems by weak dispersal coupling with time delay
Submitted to Theoretical Ecology on 06/09/2012, accepted for publication on 21/01/2013
Theor Ecol (2013) 6: 405
10.1007/s12080-013-0176-6
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most salient spatio-temporal patterns in population ecology is the synchronization of fluctuating local populations across vast spatial extent. Synchronization of abundance has been widely observed across a range of spatial scales in relation to rate of dispersal among discrete populations. However, the dependence of synchrony on patterns of among-patch movement across heterogeneous landscapes has been largely ignored. Here we consider the duration of movement between two predator-prey communities connected by weak dispersal, and its effect on population synchrony. More specifically, we introduce time delayed dispersal to incorporate the finite transmission time between discrete populations across a continuous landscape. Reducing the system to a phase model using weakly connected network theory, it is found that the time delay is an important factor determining the nature and stability of phase-locked states. Our analysis predicts enhanced convergence to stable synchronous fluctuations in general, and a decreased ability of systems to produce in-phase synchronization dynamics in the presence of delayed dispersal. These results introduce delayed dispersal as a tool for understanding the importance of dispersal time across a landscape matrix in affecting metacommunity dynamics. They further highlight the importance of landscape and dispersal patterns for predicting the onset of synchrony between weakly-coupled populations.
[ { "created": "Thu, 13 Sep 2012 17:48:50 GMT", "version": "v1" }, { "created": "Sun, 27 Jan 2013 14:56:08 GMT", "version": "v2" } ]
2017-06-01
[ [ "Wall", "Emily", "" ], [ "Guichard", "Frederic", "" ], [ "Humphries", "Antony R.", "" ] ]
One of the most salient spatio-temporal patterns in population ecology is the synchronization of fluctuating local populations across vast spatial extent. Synchronization of abundance has been widely observed across a range of spatial scales in relation to rate of dispersal among discrete populations. However, the dependence of synchrony on patterns of among-patch movement across heterogeneous landscapes has been largely ignored. Here we consider the duration of movement between two predator-prey communities connected by weak dispersal, and its effect on population synchrony. More specifically, we introduce time delayed dispersal to incorporate the finite transmission time between discrete populations across a continuous landscape. Reducing the system to a phase model using weakly connected network theory, it is found that the time delay is an important factor determining the nature and stability of phase-locked states. Our analysis predicts enhanced convergence to stable synchronous fluctuations in general, and a decreased ability of systems to produce in-phase synchronization dynamics in the presence of delayed dispersal. These results introduce delayed dispersal as a tool for understanding the importance of dispersal time across a landscape matrix in affecting metacommunity dynamics. They further highlight the importance of landscape and dispersal patterns for predicting the onset of synchrony between weakly-coupled populations.
2006.02949
Dale Zhou
Dale Zhou, David M. Lydon-Staley, Perry Zurn, Danielle S. Bassett
The growth and form of knowledge networks by kinesthetic curiosity
null
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Throughout life, we might seek a calling, companions, skills, entertainment, truth, self-knowledge, beauty, and edification. The practice of curiosity can be viewed as an extended and open-ended search for valuable information with hidden identity and location in a complex space of interconnected information. Despite its importance, curiosity has been challenging to computationally model because the practice of curiosity often flourishes without specific goals, external reward, or immediate feedback. Here, we show how network science, statistical physics, and philosophy can be integrated into an approach that coheres with and expands the psychological taxonomies of specific-diversive and perceptual-epistemic curiosity. Using this interdisciplinary approach, we distill functional modes of curious information seeking as searching movements in information space. The kinesthetic model of curiosity offers a vibrant counterpart to the deliberative predictions of model-based reinforcement learning. In doing so, this model unearths new computational opportunities for identifying what makes curiosity curious.
[ { "created": "Thu, 4 Jun 2020 15:30:41 GMT", "version": "v1" } ]
2020-06-05
[ [ "Zhou", "Dale", "" ], [ "Lydon-Staley", "David M.", "" ], [ "Zurn", "Perry", "" ], [ "Bassett", "Danielle S.", "" ] ]
Throughout life, we might seek a calling, companions, skills, entertainment, truth, self-knowledge, beauty, and edification. The practice of curiosity can be viewed as an extended and open-ended search for valuable information with hidden identity and location in a complex space of interconnected information. Despite its importance, curiosity has been challenging to computationally model because the practice of curiosity often flourishes without specific goals, external reward, or immediate feedback. Here, we show how network science, statistical physics, and philosophy can be integrated into an approach that coheres with and expands the psychological taxonomies of specific-diversive and perceptual-epistemic curiosity. Using this interdisciplinary approach, we distill functional modes of curious information seeking as searching movements in information space. The kinesthetic model of curiosity offers a vibrant counterpart to the deliberative predictions of model-based reinforcement learning. In doing so, this model unearths new computational opportunities for identifying what makes curiosity curious.
2004.03495
Marcelo Savi
Pedro V. Savi, Marcelo A. Savi, Beatriz Borges
A Mathematical Description of the Dynamics of Coronavirus Disease (COVID-19): A Case Study of Brazil
17 pages, 13 Figures
null
null
null
q-bio.PE nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the mathematical modeling and numerical simulations related to coronavirus dynamics. A description is developed based on the framework of the susceptible-exposed-infectious-recovered (SEIR) model. Initially, a model verification is carried out by calibrating system parameters with data from China, Italy, Iran and Brazil. Afterward, numerical simulations are performed to analyze different scenarios of COVID-19 in Brazil. Results show the importance of governmental and individual actions to control the number and the period of critical situations related to the pandemic.
[ { "created": "Tue, 7 Apr 2020 15:48:16 GMT", "version": "v1" }, { "created": "Thu, 23 Apr 2020 19:18:28 GMT", "version": "v2" } ]
2020-04-27
[ [ "Savi", "Pedro V.", "" ], [ "Savi", "Marcelo A.", "" ], [ "Borges", "Beatriz", "" ] ]
This paper deals with the mathematical modeling and numerical simulations related to coronavirus dynamics. A description is developed based on the framework of the susceptible-exposed-infectious-recovered (SEIR) model. Initially, a model verification is carried out by calibrating system parameters with data from China, Italy, Iran and Brazil. Afterward, numerical simulations are performed to analyze different scenarios of COVID-19 in Brazil. Results show the importance of governmental and individual actions to control the number and the period of critical situations related to the pandemic.
2111.05315
Cheng Shen
Cheng Shen, Adiyant Lamba, Meng Zhu, Ray Zhang, Changhuei Yang and Magdalena Zernicka-Goetz
Stain-free Detection of Embryo Polarization using Deep Learning
null
null
null
null
q-bio.QM cs.CV eess.IV physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Polarization of the mammalian embryo at the right developmental time is critical for its development to term and would be valuable in assessing the potential of human embryos. However, tracking polarization requires invasive fluorescence staining, impermissible in the in vitro fertilization clinic. Here, we report the use of artificial intelligence to detect polarization from unstained time-lapse movies of mouse embryos. We assembled a dataset of bright-field movie frames from 8-cell-stage embryos, side-by-side with corresponding images of fluorescent markers of cell polarization. We then used an ensemble learning model to detect whether any bright-field frame showed an embryo before or after onset of polarization. Our resulting model has an accuracy of 85% for detecting polarization, significantly outperforming human volunteers trained on the same data (61% accuracy). We discovered that our self-learning model focuses upon the angle between cells as one known cue for compaction, which precedes polarization, but it outperforms the use of this cue alone. By compressing three-dimensional time-lapse image data into two dimensions, we are able to reduce the data to an easily manageable size for deep learning processing. In conclusion, we describe a method for detecting a key developmental feature of embryo development that avoids clinically impermissible fluorescence staining.
[ { "created": "Mon, 8 Nov 2021 17:54:25 GMT", "version": "v1" } ]
2021-11-10
[ [ "Shen", "Cheng", "" ], [ "Lamba", "Adiyant", "" ], [ "Zhu", "Meng", "" ], [ "Zhang", "Ray", "" ], [ "Yang", "Changhuei", "" ], [ "Goetz", "Magdalena Zernicka", "" ] ]
Polarization of the mammalian embryo at the right developmental time is critical for its development to term and would be valuable in assessing the potential of human embryos. However, tracking polarization requires invasive fluorescence staining, impermissible in the in vitro fertilization clinic. Here, we report the use of artificial intelligence to detect polarization from unstained time-lapse movies of mouse embryos. We assembled a dataset of bright-field movie frames from 8-cell-stage embryos, side-by-side with corresponding images of fluorescent markers of cell polarization. We then used an ensemble learning model to detect whether any bright-field frame showed an embryo before or after onset of polarization. Our resulting model has an accuracy of 85% for detecting polarization, significantly outperforming human volunteers trained on the same data (61% accuracy). We discovered that our self-learning model focuses upon the angle between cells as one known cue for compaction, which precedes polarization, but it outperforms the use of this cue alone. By compressing three-dimensional time-lapse image data into two dimensions, we are able to reduce the data to an easily manageable size for deep learning processing. In conclusion, we describe a method for detecting a key developmental feature of embryo development that avoids clinically impermissible fluorescence staining.
0801.4395
Baruch Vainas
Baruch Vainas
Transition from 12 to near-24 hours glucose circadian rhythm on relaxation of a hyperglycemic condition
10 pages, 3 figures
null
null
null
q-bio.QM
null
A composite exponential relaxation function, modulated by a periodic component, was used to fit an experimental time series of blood glucose levels. The 11-parameter function, which allows for the detection of a possible rhythm transition, was fitted to the experimental time series using a genetic algorithm. It was found that the relaxation from a hyperglycemic condition following a change in the anti-diabetic treatment can be characterized by a change from an initial 12-hour ultradian rhythm to a near-24-hour circadian rhythm.
[ { "created": "Mon, 28 Jan 2008 22:27:12 GMT", "version": "v1" } ]
2008-01-30
[ [ "Vainas", "Baruch", "" ] ]
A composite exponential relaxation function, modulated by a periodic component, was used to fit an experimental time series of blood glucose levels. The 11-parameter function, which allows for the detection of a possible rhythm transition, was fitted to the experimental time series using a genetic algorithm. It was found that the relaxation from a hyperglycemic condition following a change in the anti-diabetic treatment can be characterized by a change from an initial 12-hour ultradian rhythm to a near-24-hour circadian rhythm.
2212.00430
Robert Worden
Robert Worden
A Speed Limit for Evolution: Postscript
Unpublished paper
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In 1995 I wrote a paper, "A Speed Limit for Evolution", whose main result was that evolution must proceed rather slowly, in accordance with the earlier views and intuitions of many authors. The abstract of the paper said: "The genetic information expressed in some part of the phenotype of a species cannot increase faster than a given rate, determined by the selection pressure on that part. This rate is typically a small fraction of a bit per generation". This result was derived in the presence of sexual reproduction and other effects such as temporarily isolated sub-populations. In 1999 David MacKay published a paper which apparently contradicted this result. In the abstract, he wrote "We find striking differences between populations that have recombination and populations that do not. If variation is produced by mutation alone, then the entire population gains up to roughly 1 bit per generation. If variation is created by recombination, the population can gain of the order of sqrt(G) bits per generation." MacKay proposed that there were outstanding evolutionary benefits to sexual reproduction, and that my result was too low by a very large factor. He later repeated this result in a textbook he wrote in 2003. The purpose of this note is to show that the key assumption of MacKay's model, that "fitness is a strictly additive trait", is so unrealistic as to render his results irrelevant to any actual life form. In consequence, the speed limit I derived is still valid, and has important consequences for human cognitive evolution.
[ { "created": "Thu, 1 Dec 2022 11:00:00 GMT", "version": "v1" } ]
2022-12-02
[ [ "Worden", "Robert", "" ] ]
In 1995 I wrote a paper, "A Speed Limit for Evolution", whose main result was that evolution must proceed rather slowly, in accordance with the earlier views and intuitions of many authors. The abstract of the paper said: "The genetic information expressed in some part of the phenotype of a species cannot increase faster than a given rate, determined by the selection pressure on that part. This rate is typically a small fraction of a bit per generation". This result was derived in the presence of sexual reproduction and other effects such as temporarily isolated sub-populations. In 1999 David MacKay published a paper which apparently contradicted this result. In the abstract, he wrote "We find striking differences between populations that have recombination and populations that do not. If variation is produced by mutation alone, then the entire population gains up to roughly 1 bit per generation. If variation is created by recombination, the population can gain of the order of sqrt(G) bits per generation." MacKay proposed that there were outstanding evolutionary benefits to sexual reproduction, and that my result was too low by a very large factor. He later repeated this result in a textbook he wrote in 2003. The purpose of this note is to show that the key assumption of MacKay's model, that "fitness is a strictly additive trait", is so unrealistic as to render his results irrelevant to any actual life form. In consequence, the speed limit I derived is still valid, and has important consequences for human cognitive evolution.
2008.01470
Thomas R. Weikl
Batuhan Kav, Andrea Grafmüller, Emanuel Schneck, and Thomas R. Weikl
Weak carbohydrate-carbohydrate interactions in membrane adhesion are fuzzy and generic
12 pages, 9 figures
Nanoscale, 2020, DOI: 10.1039/D0NR03696J
10.1039/D0NR03696J
null
q-bio.BM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Carbohydrates such as the trisaccharide motif LeX are key constituents of cell surfaces. Despite intense research, the interactions between carbohydrates of apposing cells or membranes are not well understood. In this article, we investigate carbohydrate-carbohydrate interactions in membrane adhesion as well as in solution with extensive atomistic molecular dynamics simulations that exceed the simulation times of previous studies by orders of magnitude. For LeX, we obtain association constants of soluble carbohydrates, adhesion energies of lipid-anchored carbohydrates, and maximally sustained forces of carbohydrate complexes in membrane adhesion that are in good agreement with experimental results in the literature. Our simulations thus appear to provide a realistic, detailed picture of LeX-LeX interactions in solution and during membrane adhesion. In this picture, the LeX-LeX interactions are fuzzy, i.e. LeX pairs interact in a large variety of short-lived, bound conformations. For the synthetic tetrasaccharide Lac 2, which is composed of two lactose units, we observe similarly fuzzy interactions and obtain association constants of both soluble and lipid-anchored variants that are comparable to the corresponding association constants of LeX. The fuzzy, weak carbohydrate-carbohydrate interactions quantified in our simulations thus appear to be a generic feature of small, neutral carbohydrates such as LeX and Lac 2.
[ { "created": "Tue, 4 Aug 2020 11:35:19 GMT", "version": "v1" } ]
2020-08-05
[ [ "Kav", "Batuhan", "" ], [ "Grafmüller", "Andrea", "" ], [ "Schneck", "Emanuel", "" ], [ "Weikl", "Thomas R.", "" ] ]
Carbohydrates such as the trisaccharide motif LeX are key constituents of cell surfaces. Despite intense research, the interactions between carbohydrates of apposing cells or membranes are not well understood. In this article, we investigate carbohydrate-carbohydrate interactions in membrane adhesion as well as in solution with extensive atomistic molecular dynamics simulations that exceed the simulation times of previous studies by orders of magnitude. For LeX, we obtain association constants of soluble carbohydrates, adhesion energies of lipid-anchored carbohydrates, and maximally sustained forces of carbohydrate complexes in membrane adhesion that are in good agreement with experimental results in the literature. Our simulations thus appear to provide a realistic, detailed picture of LeX-LeX interactions in solution and during membrane adhesion. In this picture, the LeX-LeX interactions are fuzzy, i.e. LeX pairs interact in a large variety of short-lived, bound conformations. For the synthetic tetrasaccharide Lac 2, which is composed of two lactose units, we observe similarly fuzzy interactions and obtain association constants of both soluble and lipid-anchored variants that are comparable to the corresponding association constants of LeX. The fuzzy, weak carbohydrate-carbohydrate interactions quantified in our simulations thus appear to be a generic feature of small, neutral carbohydrates such as LeX and Lac 2.
1302.6422
Ruriko Yoshida
Grady Weyenberg and Peter Huggins and Christopher Schardl and Daniel K Howe and Ruriko Yoshida
kdetrees: Nonparametric Estimation of Phylogenetic Tree Distributions
3 figures
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: While the majority of gene histories found in a clade of organisms are expected to be generated by a common process (e.g. the coalescent process), it is well-known that numerous other coexisting processes (e.g. horizontal gene transfers, gene duplication and subsequent neofunctionalization) will cause some genes to exhibit a history quite distinct from those of the majority of genes. Such "outlying" gene trees are considered to be biologically interesting and identifying these genes has become an important problem in phylogenetics. Results: We propose and implement KDETREES, a nonparametric method of estimating distributions of phylogenetic trees, with the goal of identifying trees which are significantly different from the rest of the trees in the sample. Our method compares favorably with a similar recently-published method, featuring an improvement of one polynomial order of computational complexity (to quadratic in the number of trees analyzed), with simulation studies suggesting only a small penalty to classification accuracy. Application of KDETREES to a set of Apicomplexa genes identified several unreliable sequence alignments which had escaped previous detection, as well as a gene independently reported as a possible case of horizontal gene transfer. We also analyze a set of genes from Epichloe, fungi symbiotic with grasses, successfully identifying a contrived instance of paralogy. Availability: Our method for estimating tree distributions and identifying outlying trees is implemented as the R package KDETREES, and is available for download from CRAN.
[ { "created": "Tue, 26 Feb 2013 13:03:06 GMT", "version": "v1" }, { "created": "Wed, 21 Aug 2013 01:22:34 GMT", "version": "v2" }, { "created": "Tue, 22 Apr 2014 17:24:04 GMT", "version": "v3" } ]
2014-04-23
[ [ "Weyenberg", "Grady", "" ], [ "Huggins", "Peter", "" ], [ "Schardl", "Christopher", "" ], [ "Howe", "Daniel K", "" ], [ "Yoshida", "Ruriko", "" ] ]
1412.1235
Niviere Vincent
Vincent Nivi\`ere (LCBM - UMR 5249), Marc Fontecave (LCBM - UMR 5249)
Discovery of superoxide reductase: an historical perspective
null
Journal of Biological Inorganic Chemistry, 2004, 9, pp.119-23
10.1007/s00775-003-0519-7
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For more than 30 years, the only enzymatic system known to catalyze the elimination of superoxide was superoxide dismutase, SOD. SOD has been found in almost all organisms living in the presence of oxygen, including some anaerobic bacteria, supporting the notion that superoxide is a key and general component of oxidative stress. Recently, a new concept in the field of the mechanisms of cellular defense against superoxide has emerged. It was discovered that elimination of superoxide in some anaerobic and microaerophilic bacteria could occur by reduction, a reaction catalyzed by a small metalloenzyme thus named superoxide reductase, SOR. Having played a major role in this discovery, we describe here how the concept of superoxide reduction emerged and how it was experimentally substantiated independently in our laboratory.
[ { "created": "Wed, 3 Dec 2014 08:45:12 GMT", "version": "v1" } ]
2014-12-04
[ [ "Nivière", "Vincent", "", "LCBM - UMR 5249" ], [ "Fontecave", "Marc", "", "LCBM - UMR 5249" ] ]
2005.12191
Junhua Li
Junhua Li, Anastasios Bezerianos, Nitish Thakor
Cognitive State Analysis, Understanding, and Decoding from the Perspective of Brain Connectivity
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cognitive states are involved in our daily life, which motivates us to explore and understand them from a wide variety of perspectives. Among these perspectives, brain connectivity has received increasing attention in recent years. It is the right time to summarize the past achievements, serving as a cornerstone for upcoming progress in the field. In this chapter, the definition of the cognitive state is first given and the cognitive states that are frequently investigated are then outlined. This is followed by an introduction of the methods for estimating connectivity strength and graph-theoretical metrics. Subsequently, each cognitive state is separately described and the progress in cognitive state investigation is summarized, including analysis, understanding, and decoding. We concentrate on the literature ascertaining macro-scale representations of cognitive states from the perspective of brain connectivity and give an overview of achievements related to cognitive states to date, especially within the past ten years. The discussions and future prospects are stated at the end of the chapter.
[ { "created": "Wed, 13 May 2020 11:08:42 GMT", "version": "v1" }, { "created": "Mon, 14 Sep 2020 23:05:23 GMT", "version": "v2" } ]
2020-09-16
[ [ "Li", "Junhua", "" ], [ "Bezerianos", "Anastasios", "" ], [ "Thakor", "Nitish", "" ] ]
1512.08074
Nathan Baker
Suzette A. Pabit and Andrea M. Katz and Igor S. Tolokh and Aleksander Drozdetski and Nathan Baker and Alexey V. Onufriev and Lois Pollack
Understanding Nucleic Acid Structural Changes by Comparing Wide-Angle X-ray Scattering (WAXS) Experiments to Molecular Dynamics Simulations
null
null
10.1063/1.4950814
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wide-angle x-ray scattering (WAXS) is emerging as a powerful tool for increasing the resolution of solution structure measurements of biomolecules. Compared to its better known complement, small angle x-ray scattering (SAXS), WAXS targets higher scattering angles and can enhance structural studies of molecules by accessing finer details of solution structures. Although the extension from SAXS to WAXS is easy to implement experimentally, the computational tools required to fully harness the power of WAXS are still under development. Currently, WAXS is employed to study structural changes and ligand binding in proteins; however the methods are not as fully developed for nucleic acids. Here, we show how WAXS can qualitatively characterize nucleic acid structures as well as the small but significant structural changes driven by the addition of multivalent ions. We show the potential of WAXS to test all-atom molecular dynamics (MD) simulations and to provide insight into how the trivalent ion cobalt(III) hexammine (CoHex) affects the structure of RNA and DNA helices. We find that MD simulations capture the RNA structural change that occurs due to addition of CoHex.
[ { "created": "Sat, 26 Dec 2015 05:20:46 GMT", "version": "v1" }, { "created": "Sun, 15 May 2016 04:00:55 GMT", "version": "v2" } ]
2016-06-22
[ [ "Pabit", "Suzette A.", "" ], [ "Katz", "Andrea M.", "" ], [ "Tolokh", "Igor S.", "" ], [ "Drozdetski", "Aleksander", "" ], [ "Baker", "Nathan", "" ], [ "Onufriev", "Alexey V.", "" ], [ "Pollack", "Lois", ""...
1206.5811
Andrey Dovzhenok
Andrey Dovzhenok, Leonid L. Rubchinsky
On the Origin of Tremor in Parkinson's Disease
21 pages, 8 figures, submitted to PLoS One
(2012) PLoS ONE 7(7): e41598
10.1371/journal.pone.0041598
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The exact origin of tremor in Parkinson's disease remains unknown. We explain why the existing data converge on the basal ganglia-thalamo-cortical loop as a tremor generator and consider a conductance-based model of subthalamo-pallidal circuits embedded into a simplified representation of the basal ganglia-thalamo-cortical circuit to investigate the dynamics of this loop. We show how variation of the strength of dopamine-modulated connections in the basal ganglia-thalamo-cortical loop (representing the decreasing dopamine level in Parkinson's disease) leads to the occurrence of tremor-like burst firing. These tremor-like oscillations are suppressed when the connections are modulated back to represent a higher dopamine level (as it would be the case in dopaminergic therapy), as well as when the basal ganglia-thalamo-cortical loop is broken (as would be the case for ablative anti-parkinsonian surgeries). Thus, the proposed model provides an explanation for the basal ganglia-thalamo-cortical loop mechanism of tremor generation. The strengthening of the loop leads to tremor oscillations, while the weakening or disconnection of the loop suppresses them. The loop origin of parkinsonian tremor also suggests that new tremor-suppression therapies may have anatomical targets in different cortical and subcortical areas as long as they are within the basal ganglia-thalamo-cortical loop.
[ { "created": "Mon, 25 Jun 2012 20:00:33 GMT", "version": "v1" } ]
2012-08-13
[ [ "Dovzhenok", "Andrey", "" ], [ "Rubchinsky", "Leonid L.", "" ] ]
2003.11614
Maria Luisa Chiusano
Maria Luisa Chiusano
The modelling of COVID19 pathways sheds light on mechanisms, opportunities and on controversial interpretations of medical treatments. v2
null
null
null
null
q-bio.MN q-bio.PE
http://creativecommons.org/publicdomain/zero/1.0/
The new coronavirus (2019-nCoV or SARS-CoV2), inducing the current pandemic disease (COVID-19) and causing pneumonia in humans, is dramatically increasing in epidemic scale since its first appearance in Wuhan, China, in December 2019. The first infection from epidemic coronaviruses in 2003 fostered the spread of an overwhelming amount of related scientific efforts. The manifold aspects that have been raised, as well as their redundancy, offer precious information that has been underexploited and needs to be critically re-evaluated, appropriately used and offered to the whole community, from scientists, to medical doctors, stakeholders and common people. These efforts will favour a holistic view on the comprehension, prevention and development of strategies (pharmacological, clinical, etc.) as well as common intervention against the new coronavirus spreading. Here we describe a model that emerged from our analysis, which was focused on the Renin Angiotensin System (RAS) and the possible routes linking it to the viral infection, because the infection is mediated by the viral receptor on human cell membranes, Angiotensin Converting Enzyme 2 (ACE2), which is a key component in RAS signalling. The model depicts the main pathways determining the disease and the molecular framework for its establishment, and can help to shed light on mechanisms involved in the infection. It promptly gives an answer to some of the controversial, and still open, issues concerning predisposing conditions and medical treatments that protect from or favour the severity of the disease (such as the use of ACE inhibitors or ARBs/sartans), or to the sex-related biases in the affected population. The model highlights novel opportunities for further investigations, diagnosis and appropriate intervention to understand and fight COVID-19.
[ { "created": "Wed, 25 Mar 2020 20:17:51 GMT", "version": "v1" } ]
2020-03-28
[ [ "Chiusano", "Maria Luisa", "" ] ]
2201.07320
Lucia Peixoto
Elizabeth Medina, Sarah Peterson, Kristan Singletary, and Lucia Peixoto
Critical periods and Autism Spectrum Disorders, a role for sleep
14 pages, 2 figures, 1 Table
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Brain development relies on both experience and genetically defined programs. Time windows where certain brain circuits are particularly receptive to external stimuli, resulting in heightened plasticity, are referred to as critical periods. Sleep is thought to be essential for normal brain development. Importantly, studies have shown that sleep enhances critical period plasticity and promotes experience-dependent synaptic pruning in the developing mammalian brain. Therefore, normal plasticity during critical periods depends on proper sleep. Problems falling and staying asleep occur at a higher rate in Autism Spectrum Disorder (ASD) relative to typical development. In this review, we explore the potential link between sleep, critical period plasticity, and ASD. First, we review the importance of critical period plasticity in typical development and the role of sleep in this process. Next, we summarize the evidence linking ASD with deficits in synaptic plasticity in rodent models of high-confidence ASD gene candidates. We then show that almost all the high-confidence rodent models of ASD that show sleep deficits also display plasticity deficits. Given how important sleep is for critical period plasticity, it is essential to understand the connections between synaptic plasticity, sleep, and brain development in ASD. However, studies investigating sleep or plasticity during critical periods in ASD mouse models are lacking. Therefore, we highlight an urgent need to consider developmental trajectory in studies of sleep and plasticity in neurodevelopmental disorders.
[ { "created": "Tue, 18 Jan 2022 21:29:18 GMT", "version": "v1" } ]
2022-01-20
[ [ "Medina", "Elizabeth", "" ], [ "Peterson", "Sarah", "" ], [ "Singletary", "Kristan", "" ], [ "Peixoto", "Lucia", "" ] ]
1207.2484
Alan Lapedes
Alan Lapedes, Bertrand Giraud, Christopher Jarzynski
Using Sequence Alignments to Predict Protein Structure and Stability With High Accuracy
This manuscript was originally written in 2002 and available from http://library.lanl.gov/cgi-bin/getfile?01038177.pdf It's being deposited here for greater ease of access
null
null
LA-UR-02-4481
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a sequence-based probabilistic formalism that directly addresses co-operative effects in networks of interacting positions in proteins, providing significantly improved contact prediction, as well as accurate quantitative prediction of free energy changes due to non-additive effects of multiple mutations. In addition to these practical considerations, the agreement of our sequence-based calculations with experimental data for both structure and stability demonstrates a strong relation between the statistical distribution of protein sequences produced by natural evolutionary processes, and the thermodynamic stability of the structures to which these sequences fold.
[ { "created": "Tue, 10 Jul 2012 20:24:42 GMT", "version": "v1" } ]
2012-07-12
[ [ "Lapedes", "Alan", "" ], [ "Giraud", "Bertrand", "" ], [ "Jarzynski", "Christopher", "" ] ]
1805.06002
Xin Wang
Xin Wang and Yang-Yu Liu
Overcome Competitive Exclusion in Ecosystems
Manuscript 13 pages, 10 figures; SI 15 pages, 8 figures
iScience 2020;23(4):101009
10.1016/j.isci.2020.101009
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explaining biodiversity in nature is a fundamental problem in ecology. An outstanding challenge is embodied in the so-called Competitive Exclusion Principle: two species competing for one limiting resource cannot coexist at constant population densities, or more generally, the number of consumer species in steady coexistence cannot exceed that of resources. Why competitive exclusion is rarely observed in natural ecosystems is still not fully understood. Here we show that by forming chasing triplets among the consumers and resources in the consumption process, the Competitive Exclusion Principle can be naturally violated. The modeling framework developed here is broadly applicable and can be used to explain the biodiversity of many consumer-resource ecosystems, and hence deepens our understanding of biodiversity in nature.
[ { "created": "Tue, 15 May 2018 19:23:35 GMT", "version": "v1" }, { "created": "Fri, 8 Jun 2018 03:33:16 GMT", "version": "v2" }, { "created": "Tue, 28 Aug 2018 22:06:25 GMT", "version": "v3" }, { "created": "Wed, 12 Sep 2018 03:42:02 GMT", "version": "v4" }, { "cr...
2020-04-13
[ [ "Wang", "Xin", "" ], [ "Liu", "Yang-Yu", "" ] ]
0704.1908
Radek Erban
Radek Erban, Jonathan Chapman and Philip Maini
A practical guide to stochastic simulations of reaction-diffusion processes
35 pages
null
null
null
q-bio.SC physics.ed-ph q-bio.QM
null
A practical introduction to stochastic modelling of reaction-diffusion processes is presented. No prior knowledge of stochastic simulations is assumed. The methods are explained using illustrative examples. The article starts with the classical Gillespie algorithm for the stochastic modelling of chemical reactions. Then stochastic algorithms for modelling molecular diffusion are given. Finally, basic stochastic reaction-diffusion methods are presented. The connections between stochastic simulations and deterministic models are explained and basic mathematical tools (e.g. chemical master equation) are presented. The article concludes with an overview of more advanced methods and problems.
[ { "created": "Sun, 15 Apr 2007 17:50:38 GMT", "version": "v1" }, { "created": "Mon, 19 Nov 2007 03:47:24 GMT", "version": "v2" } ]
2007-11-19
[ [ "Erban", "Radek", "" ], [ "Chapman", "Jonathan", "" ], [ "Maini", "Philip", "" ] ]
2310.08336
Shesha Gopal Marehalli Srinivas
Shesha Gopal Marehalli Srinivas, Francesco Avanzini, Massimiliano Esposito
Thermodynamics of Growth in Open Chemical Reaction Networks
null
null
null
null
q-bio.MN cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
We identify the thermodynamic conditions necessary to observe indefinite growth in homogeneous open chemical reaction networks (CRNs) satisfying mass action kinetics. We also characterize the thermodynamic efficiency of growth by considering the fraction of the chemical work supplied from the surroundings that is converted into CRN free energy. We find that indefinite growth cannot arise in CRNs chemostatted by fixing the concentration of some species at constant values, or in continuous-flow stirred tank reactors. Indefinite growth requires a constant net influx from the surroundings of at least one species. In this case, unimolecular CRNs always generate equilibrium linear growth, i.e., a continuous linear accumulation of species with equilibrium concentrations and efficiency one. Multimolecular CRNs are necessary to generate nonequilibrium growth, i.e., the continuous accumulation of species with nonequilibrium concentrations. Pseudo-unimolecular CRNs - a subclass of multimolecular CRNs - always generate asymptotic linear growth with zero efficiency. Our findings demonstrate the importance of the CRN topology and the chemostatting procedure in determining the dynamics and thermodynamics of growth.
[ { "created": "Thu, 12 Oct 2023 13:48:55 GMT", "version": "v1" } ]
2023-10-13
[ [ "Srinivas", "Shesha Gopal Marehalli", "" ], [ "Avanzini", "Francesco", "" ], [ "Esposito", "Massimiliano", "" ] ]
1904.06667
Johannes M\"uller
Thibaut Sellinger (1), Johannes M\"uller (2 and 3), Volker H\"osel (2), Aur\'elien Tellier (1) ((1) Section of Population Genetics, Center of Life and Food Sciences Weihenstephan, Technische Universit\"at M\"unchen, Germany, (2) Center for Mathematics, Technische Universit\"at M\"unchen, Germany, (3) Institute for Computational Biology, Helmholtz Center Munich, Germany)
Are the better cooperators dormant or quiescent?
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the wealth of empirical and theoretical studies, the origin and maintenance of cooperation is still an evolutionary riddle. In this context, ecological life-history traits which affect the efficiency of selection may play a role, though these are often ignored. We consider here species such as bacteria, fungi, invertebrates and plants which exhibit resting stages in the form of a quiescent state or a seedbank. When quiescent, individuals are inactive and reproduce upon activation, while under a seed bank, parents produce offspring that remain dormant for different amounts of time. We assume weak frequency-dependent selection modeled using game theory and the prisoner's dilemma (cooperate/defect) as payoff matrix. The cooperators and defectors are allowed to evolve different quiescence or dormancy times. By means of singular perturbation theory we reduce the model to a one-dimensional equation resembling the well-known replicator equation, where the gain functions are scaled with lumped parameters reflecting the time scale of the resting state of the cooperators and defectors. If both time scales are identical, cooperation cannot persist in a homogeneous population. If, however, the time scale of the cooperator is distinctively different from that of the defector, cooperation may become a locally asymptotically stable strategy. Interestingly enough, in the seedbank case the cooperator needs to be faster than the defector, while in the quiescent case the cooperator has to be slower. We use adaptive dynamics to identify situations where cooperation may evolve and form a convergent stable ESS. We conclude by highlighting the relevance of these results for many non-model species and the maintenance of cooperation in microbial, invertebrate or plant populations.
[ { "created": "Sun, 14 Apr 2019 09:58:29 GMT", "version": "v1" } ]
2019-04-16
[ [ "Sellinger", "Thibaut", "", "2 and 3" ], [ "Müller", "Johannes", "", "2 and 3" ], [ "Hösel", "Volker", "" ], [ "Tellier", "Aurélien", "" ] ]
Despite the wealth of empirical and theoretical studies, the origin and maintenance of cooperation is still an evolutionary riddle. In this context, ecological life-history traits which affect the efficiency of selection may play a role, though these are often ignored. We consider here species such as bacteria, fungi, invertebrates and plants which exhibit resting stages in the form of a quiescent state or a seedbank. When quiescent, individuals are inactive and reproduce upon activation, while under a seed bank, parents produce offspring that remain dormant for different amounts of time. We assume weak frequency-dependent selection modeled using game theory and the prisoner's dilemma (cooperate/defect) as payoff matrix. The cooperators and defectors are allowed to evolve different quiescence or dormancy times. By means of singular perturbation theory we reduce the model to a one-dimensional equation resembling the well-known replicator equation, where the gain functions are scaled with lumped parameters reflecting the time scale of the resting state of the cooperators and defectors. If both time scales are identical, cooperation cannot persist in a homogeneous population. If, however, the time scale of the cooperator is distinctively different from that of the defector, cooperation may become a locally asymptotically stable strategy. Interestingly enough, in the seedbank case the cooperator needs to be faster than the defector, while in the quiescent case the cooperator has to be slower. We use adaptive dynamics to identify situations where cooperation may evolve and form a convergent stable ESS. We conclude by highlighting the relevance of these results for many non-model species and the maintenance of cooperation in microbial, invertebrate or plant populations.
2305.03136
David Brookes
David H. Brookes, Jakub Otwinowski, and Sam Sinai
Contrastive losses as generalized models of global epistasis
null
null
null
null
q-bio.PE cs.LG
http://creativecommons.org/licenses/by/4.0/
Fitness functions map large combinatorial spaces of biological sequences to properties of interest. Inferring these multimodal functions from experimental data is a central task in modern protein engineering. Global epistasis models are an effective and physically-grounded class of models for estimating fitness functions from observed data. These models assume that a sparse latent function is transformed by a monotonic nonlinearity to emit measurable fitness. Here we demonstrate that minimizing contrastive loss functions, such as the Bradley-Terry loss, is a simple and flexible technique for extracting the sparse latent function implied by global epistasis. We argue by way of a fitness-epistasis uncertainty principle that the nonlinearities in global epistasis models can produce observed fitness functions that do not admit sparse representations, and thus may be inefficient to learn from observations when using a Mean Squared Error (MSE) loss (a common practice). We show that contrastive losses are able to accurately estimate a ranking function from limited data even in regimes where MSE is ineffective. We validate the practical utility of this insight by showing contrastive loss functions result in consistently improved performance on benchmark tasks.
[ { "created": "Thu, 4 May 2023 20:33:05 GMT", "version": "v1" }, { "created": "Mon, 8 May 2023 00:59:43 GMT", "version": "v2" }, { "created": "Fri, 1 Dec 2023 18:09:00 GMT", "version": "v3" } ]
2023-12-04
[ [ "Brookes", "David H.", "" ], [ "Otwinowski", "Jakub", "" ], [ "Sinai", "Sam", "" ] ]
Fitness functions map large combinatorial spaces of biological sequences to properties of interest. Inferring these multimodal functions from experimental data is a central task in modern protein engineering. Global epistasis models are an effective and physically-grounded class of models for estimating fitness functions from observed data. These models assume that a sparse latent function is transformed by a monotonic nonlinearity to emit measurable fitness. Here we demonstrate that minimizing contrastive loss functions, such as the Bradley-Terry loss, is a simple and flexible technique for extracting the sparse latent function implied by global epistasis. We argue by way of a fitness-epistasis uncertainty principle that the nonlinearities in global epistasis models can produce observed fitness functions that do not admit sparse representations, and thus may be inefficient to learn from observations when using a Mean Squared Error (MSE) loss (a common practice). We show that contrastive losses are able to accurately estimate a ranking function from limited data even in regimes where MSE is ineffective. We validate the practical utility of this insight by showing contrastive loss functions result in consistently improved performance on benchmark tasks.
1511.02976
James Shine
James M. Shine, Patrick G. Bissett, Peter T. Bell, Oluwasanmi Koyejo, Joshua H. Balsters, Krzysztof J. Gorgolewski, Craig A. Moodie, Russell A. Poldrack
The Dynamics of Functional Brain Networks: Integrated Network States during Cognitive Function
38 pages, 4 figures
null
10.1016/j.neuron.2016.09.018
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Higher brain function relies upon the ability to flexibly integrate information across specialized communities of brain regions; however, it is unclear how this mechanism manifests over time. In this study, we use time-resolved network analysis of functional magnetic resonance imaging data to demonstrate that the human brain traverses between two functional states that maximize either segregation into tight-knit communities or integration across otherwise disparate neural regions. The integrated state enables faster and more accurate performance on a cognitive task, and is associated with dilations in pupil diameter, suggesting that ascending neuromodulatory systems may govern the transition between these alternative modes of brain function. Our data confirm a direct link between cognitive performance and the dynamic reorganization of the network structure of the brain.
[ { "created": "Tue, 10 Nov 2015 02:57:08 GMT", "version": "v1" }, { "created": "Wed, 3 Feb 2016 18:50:08 GMT", "version": "v2" }, { "created": "Mon, 1 Aug 2016 03:46:56 GMT", "version": "v3" } ]
2017-05-30
[ [ "Shine", "James M.", "" ], [ "Bissett", "Patrick G.", "" ], [ "Bell", "Peter T.", "" ], [ "Koyejo", "Oluwasanmi", "" ], [ "Balsters", "Joshua H.", "" ], [ "Gorgolewski", "Krzysztof J.", "" ], [ "Moodie", "Craig...
Higher brain function relies upon the ability to flexibly integrate information across specialized communities of brain regions; however, it is unclear how this mechanism manifests over time. In this study, we use time-resolved network analysis of functional magnetic resonance imaging data to demonstrate that the human brain traverses between two functional states that maximize either segregation into tight-knit communities or integration across otherwise disparate neural regions. The integrated state enables faster and more accurate performance on a cognitive task, and is associated with dilations in pupil diameter, suggesting that ascending neuromodulatory systems may govern the transition between these alternative modes of brain function. Our data confirm a direct link between cognitive performance and the dynamic reorganization of the network structure of the brain.
1301.1077
Christos Skiadas H
Christos H Skiadas and Charilaos Skiadas
A Quantitative Method for Estimating the Human Development Stages Based on the Health State Function Theory and the Resulting Deterioration Process
19 pages, 9 figures, 8 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Health State Function theory is applied to find a quantitative estimate of the Human Development Stages by defining and calculating the specific age groups and subgroups. Early and late adolescence stages, first, second and third stages of adult development are estimated along with the early, middle and old age groups and subgroups. We briefly present the first exit time theory used to find the health state function of a population and then we give the details of the new theoretical approach with the appropriate applications to support and validate the theoretical assumptions. Our approach is useful for people working in several scientific fields and especially in medicine, biology, anthropology, psychology, gerontology, probability and statistics. The results are connected with the speed and acceleration of the deterioration of the human organism during age as a consequence of the changes in the first, second and third differences of the Health State Function and of the Deterioration Function. Keywords: Human development stages, Deterioration, Deterioration function, Human Mortality Database, HMD, World Health Organization, WHO, Quantitative methods, Health State Function, Erikson's stages of psychosocial development, Piaget method, Sullivan method, Disability stages, Light disability, Moderate disability, Severe disability stage, Old ages, Critical ages.
[ { "created": "Sun, 6 Jan 2013 23:42:59 GMT", "version": "v1" } ]
2013-01-08
[ [ "Skiadas", "Christos H", "" ], [ "Skiadas", "Charilaos", "" ] ]
The Health State Function theory is applied to find a quantitative estimate of the Human Development Stages by defining and calculating the specific age groups and subgroups. Early and late adolescence stages, first, second and third stages of adult development are estimated along with the early, middle and old age groups and subgroups. We briefly present the first exit time theory used to find the health state function of a population and then we give the details of the new theoretical approach with the appropriate applications to support and validate the theoretical assumptions. Our approach is useful for people working in several scientific fields and especially in medicine, biology, anthropology, psychology, gerontology, probability and statistics. The results are connected with the speed and acceleration of the deterioration of the human organism during age as a consequence of the changes in the first, second and third differences of the Health State Function and of the Deterioration Function. Keywords: Human development stages, Deterioration, Deterioration function, Human Mortality Database, HMD, World Health Organization, WHO, Quantitative methods, Health State Function, Erikson's stages of psychosocial development, Piaget method, Sullivan method, Disability stages, Light disability, Moderate disability, Severe disability stage, Old ages, Critical ages.
2301.12888
Olivier Thouvenin
Tual Monfort, Salvatore Azzollini, Jeremy Brogard, Marilou Cl\'emen\c{c}on, Am\'elie Slembrouck-Brec, Valerie Forster, Serge Picaud, Olivier Goureau, Sacha Reichman, Olivier Thouvenin, and Kate Grieve
Dynamic Full-Field Optical Coherence Tomography module adapted to commercial microscopes for longitudinal in vitro cell culture study
null
null
null
null
q-bio.QM physics.med-ph physics.optics
http://creativecommons.org/licenses/by-nc-sa/4.0/
Dynamic full-field optical coherence tomography (D-FFOCT) has recently emerged as a label-free imaging tool, capable of resolving cell types and organelles within 3D live samples, whilst monitoring their activity at tens of milliseconds resolution. Here, a D-FFOCT module design is presented which can be coupled to a commercial microscope with a stage-top incubator, allowing non-invasive label-free longitudinal imaging over periods of minutes to weeks on the same sample. Long-term volumetric imaging on human induced pluripotent stem cell-derived retinal organoids is demonstrated, highlighting tissue and cell organisation as well as cell shape, motility and division. Imaging on retinal explants highlights single 3D cone and rod structures. An optimal workflow for data acquisition, postprocessing and saving is demonstrated, resulting in a time gain factor of 10 compared to prior state of the art. Finally, a method to increase D-FFOCT signal-to-noise ratio is demonstrated, allowing rapid organoid screening.
[ { "created": "Mon, 30 Jan 2023 13:47:40 GMT", "version": "v1" } ]
2023-01-31
[ [ "Monfort", "Tual", "" ], [ "Azzollini", "Salvatore", "" ], [ "Brogard", "Jeremy", "" ], [ "Clémençon", "Marilou", "" ], [ "Slembrouck-Brec", "Amélie", "" ], [ "Forster", "Valerie", "" ], [ "Picaud", "Serge", ...
Dynamic full-field optical coherence tomography (D-FFOCT) has recently emerged as a label-free imaging tool, capable of resolving cell types and organelles within 3D live samples, whilst monitoring their activity at tens of milliseconds resolution. Here, a D-FFOCT module design is presented which can be coupled to a commercial microscope with a stage-top incubator, allowing non-invasive label-free longitudinal imaging over periods of minutes to weeks on the same sample. Long-term volumetric imaging on human induced pluripotent stem cell-derived retinal organoids is demonstrated, highlighting tissue and cell organisation as well as cell shape, motility and division. Imaging on retinal explants highlights single 3D cone and rod structures. An optimal workflow for data acquisition, postprocessing and saving is demonstrated, resulting in a time gain factor of 10 compared to prior state of the art. Finally, a method to increase D-FFOCT signal-to-noise ratio is demonstrated, allowing rapid organoid screening.
2201.04231
Xin Tu
V. DeGruttola, M. Nakazawa, J. Liu, X. Tu, S. Little, S. Mehta
Modeling Homophily in Dynamic Networks with Application to HIV Molecular Surveillance
null
null
null
null
q-bio.GN stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a novel approach to modeling homophily, i.e., the tendency of nodes that share (or differ in) certain attributes to be linked; we consider dynamic networks in which nodes can be added over time but not removed. Our application is to HIV genetic linkage analysis, which has been used to investigate HIV transmission dynamics. In this setting, two HIV sequences from different persons with HIV (PWH) are said to be linked if the genetic distance between these sequences is less than a given threshold. Such linkage suggests that the nodes representing the two infected PWH are close to each other in a transmission network; such proximity would imply that either one of the infected people directly transmitted the virus to the other or indirectly transmitted it through a small number of intermediaries. These viral genetic linkage networks are dynamic in the sense that, over time, a group or cluster of genetically linked viral sequences may increase in size as new people are infected by those in the cluster either directly or through intermediaries. Our approach makes use of a logistic model to describe homophily with regard to demographic and behavioral characteristics; that is, we investigate whether similarities (or differences) between PWH in these characteristics affect the probability that their sequences are linked. Such analyses provide information about HIV transmission dynamics within a population.
[ { "created": "Wed, 29 Dec 2021 23:14:04 GMT", "version": "v1" } ]
2022-01-13
[ [ "DeGruttola", "V.", "" ], [ "Nakazawa", "M.", "" ], [ "Liu", "J.", "" ], [ "Tu", "X.", "" ], [ "Little", "S.", "" ], [ "Mehta", "S.", "" ] ]
This paper describes a novel approach to modeling homophily, i.e., the tendency of nodes that share (or differ in) certain attributes to be linked; we consider dynamic networks in which nodes can be added over time but not removed. Our application is to HIV genetic linkage analysis, which has been used to investigate HIV transmission dynamics. In this setting, two HIV sequences from different persons with HIV (PWH) are said to be linked if the genetic distance between these sequences is less than a given threshold. Such linkage suggests that the nodes representing the two infected PWH are close to each other in a transmission network; such proximity would imply that either one of the infected people directly transmitted the virus to the other or indirectly transmitted it through a small number of intermediaries. These viral genetic linkage networks are dynamic in the sense that, over time, a group or cluster of genetically linked viral sequences may increase in size as new people are infected by those in the cluster either directly or through intermediaries. Our approach makes use of a logistic model to describe homophily with regard to demographic and behavioral characteristics; that is, we investigate whether similarities (or differences) between PWH in these characteristics affect the probability that their sequences are linked. Such analyses provide information about HIV transmission dynamics within a population.
1802.09272
Kelin Xia
Kelin Xia
Persistent homology analysis of ion aggregation and hydrogen-bonding network
21 pages, 11 figures, 2 tables
Physical Chemistry Chemical Physics, 20, 13448-13460, 2018
10.1039/C8CP01552J
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the great advancement of experimental tools and theoretical models, a quantitative characterization of the microscopic structures of ion aggregates and their associated water hydrogen-bonding networks still remains a challenging problem. In this paper, a newly-invented mathematical method called persistent homology is introduced, for the first time, to quantitatively analyze the intrinsic topological properties of ion aggregation systems and hydrogen-bonding networks. The two most distinguishable properties of persistent homology analysis of assembly systems are as follows. First, it does not require a predefined bond length to construct the ion or hydrogen network. Persistent homology results are determined by the morphological structure of the data only. Second, it can directly measure the size of circles or holes in ion aggregates and hydrogen-bonding networks. To validate our model, we consider two well-studied systems, i.e., NaCl and KSCN solutions, generated from molecular dynamics simulations. They are believed to represent two morphological types of aggregation, i.e., local clusters and extended ion networks. It has been found that the two aggregation types have distinguishable topological features and can be characterized by our topological model very well. For hydrogen-bonding networks, KSCN systems demonstrate much more dramatic variations in their local circle structures with the concentration increase. A consistent increase of large-sized local circle structures is observed and the sizes of these circles become more and more diverse. In contrast, NaCl systems show no obvious increase of large-sized circles. Instead, a consistent decline of the average size of circle structures is observed and the sizes of these circles become more and more uniform with the concentration increase.
[ { "created": "Mon, 26 Feb 2018 12:31:03 GMT", "version": "v1" } ]
2019-03-08
[ [ "Xia", "Kelin", "" ] ]
Despite the great advancement of experimental tools and theoretical models, a quantitative characterization of the microscopic structures of ion aggregates and their associated water hydrogen-bonding networks still remains a challenging problem. In this paper, a newly-invented mathematical method called persistent homology is introduced, for the first time, to quantitatively analyze the intrinsic topological properties of ion aggregation systems and hydrogen-bonding networks. The two most distinguishable properties of persistent homology analysis of assembly systems are as follows. First, it does not require a predefined bond length to construct the ion or hydrogen network. Persistent homology results are determined by the morphological structure of the data only. Second, it can directly measure the size of circles or holes in ion aggregates and hydrogen-bonding networks. To validate our model, we consider two well-studied systems, i.e., NaCl and KSCN solutions, generated from molecular dynamics simulations. They are believed to represent two morphological types of aggregation, i.e., local clusters and extended ion networks. It has been found that the two aggregation types have distinguishable topological features and can be characterized by our topological model very well. For hydrogen-bonding networks, KSCN systems demonstrate much more dramatic variations in their local circle structures with the concentration increase. A consistent increase of large-sized local circle structures is observed and the sizes of these circles become more and more diverse. In contrast, NaCl systems show no obvious increase of large-sized circles. Instead, a consistent decline of the average size of circle structures is observed and the sizes of these circles become more and more uniform with the concentration increase.
2104.04604
Jonathan Karr
Michael L. Blinov, John H. Gennari, Jonathan R. Karr, Ion I. Moraru, David P. Nickerson and Herbert M. Sauro
Practical Resources for Enhancing the Reproducibility of Mechanistic Modeling in Systems Biology
11 pages, 1 figure
null
null
null
q-bio.QM q-bio.CB q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Although reproducibility is a core tenet of the scientific method, it remains challenging to reproduce many results. Surprisingly, this also holds true for computational results in domains such as systems biology where there have been extensive standardization efforts. For example, Tiwari et al. recently found that they could only repeat 50% of published simulation results in systems biology. Toward improving the reproducibility of computational systems research, we identified several resources that investigators can leverage to make their research more accessible, executable, and comprehensible by others. In particular, we identified several domain standards and curation services, as well as powerful approaches pioneered by the software engineering industry that we believe many investigators could adopt. Together, we believe these approaches could substantially enhance the reproducibility of systems biology research. In turn, we believe enhanced reproducibility would accelerate the development of more sophisticated models that could inform precision medicine and synthetic biology.
[ { "created": "Fri, 9 Apr 2021 21:09:27 GMT", "version": "v1" } ]
2021-04-13
[ [ "Blinov", "Michael L.", "" ], [ "Gennari", "John H.", "" ], [ "Karr", "Jonathan R.", "" ], [ "Moraru", "Ion I.", "" ], [ "Nickerson", "David P.", "" ], [ "Sauro", "Herbert M.", "" ] ]
Although reproducibility is a core tenet of the scientific method, it remains challenging to reproduce many results. Surprisingly, this also holds true for computational results in domains such as systems biology where there have been extensive standardization efforts. For example, Tiwari et al. recently found that they could only repeat 50% of published simulation results in systems biology. Toward improving the reproducibility of computational systems research, we identified several resources that investigators can leverage to make their research more accessible, executable, and comprehensible by others. In particular, we identified several domain standards and curation services, as well as powerful approaches pioneered by the software engineering industry that we believe many investigators could adopt. Together, we believe these approaches could substantially enhance the reproducibility of systems biology research. In turn, we believe enhanced reproducibility would accelerate the development of more sophisticated models that could inform precision medicine and synthetic biology.
2207.11174
Jarek Duda Dr
Jarek Duda, Sabina Podlewska
Low cost prediction of probability distributions of molecular properties for early virtual screening
5 pages, 6 figures
null
null
null
q-bio.BM cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While there is a general focus on predictions of values, mathematically more appropriate is the prediction of probability distributions: with additional possibilities like prediction of uncertainty, higher moments and quantiles. For the purpose of the computer-aided drug design field, this article applies the Hierarchical Correlation Reconstruction approach, previously applied in the analysis of demographic, financial and astronomical data. Instead of a single linear regression to predict values, it uses multiple linear regressions to independently predict multiple moments, finally combining them into a predicted probability distribution, here of several ADMET properties based on the substructural fingerprint developed by Klekota\&Roth. The discussed application example is inexpensive selection of a percentage of molecules with properties nearly certain to be in a predicted or chosen range during virtual screening. Such an approach can facilitate the interpretation of the results, as predictions characterized by a high rate of uncertainty are automatically detected. In addition, for each of the investigated predictive problems, we detected crucial structural features, which should be carefully considered when optimizing compounds towards a particular property. The whole methodology developed in the study therefore constitutes great support for medicinal chemists, as it enables fast rejection of compounds with the lowest potential of the desired physicochemical/ADMET characteristics and guides the compound optimization process.
[ { "created": "Thu, 21 Jul 2022 13:29:26 GMT", "version": "v1" } ]
2022-07-25
[ [ "Duda", "Jarek", "" ], [ "Podlewska", "Sabina", "" ] ]
While there is a general focus on predictions of values, mathematically more appropriate is the prediction of probability distributions: with additional possibilities like prediction of uncertainty, higher moments and quantiles. For the purpose of the computer-aided drug design field, this article applies the Hierarchical Correlation Reconstruction approach, previously applied in the analysis of demographic, financial and astronomical data. Instead of a single linear regression to predict values, it uses multiple linear regressions to independently predict multiple moments, finally combining them into a predicted probability distribution, here of several ADMET properties based on the substructural fingerprint developed by Klekota\&Roth. The discussed application example is inexpensive selection of a percentage of molecules with properties nearly certain to be in a predicted or chosen range during virtual screening. Such an approach can facilitate the interpretation of the results, as predictions characterized by a high rate of uncertainty are automatically detected. In addition, for each of the investigated predictive problems, we detected crucial structural features, which should be carefully considered when optimizing compounds towards a particular property. The whole methodology developed in the study therefore constitutes great support for medicinal chemists, as it enables fast rejection of compounds with the lowest potential of the desired physicochemical/ADMET characteristics and guides the compound optimization process.
q-bio/0611005
Hernan Garcia
Hernan G. Garcia (1), Paul Grayson (1), Lin Han (2), Mandar Inamdar (2), Jane Kondev (3), Philip C. Nelson (4), Rob Phillips (2), Jonathan Widom (5), Paul A. Wiggins (6) ((1) Department of Physics, California Institute of Technology (2) Division of Engineering and Applied Science, California Institute of Technology, (3) Department of Physics, Brandeis University, (4) Department of Physics and Astronomy, University of Pennsylvania, (5) Department of Biochemistry, Molecular Biology, and Cell Biology, Northwestern University, (6) Whitehead Institute, Cambridge MA)
Biological Consequences of Tightly Bent DNA: The Other Life of a Macromolecular Celebrity
24 pages, 9 figures
null
null
null
q-bio.BM q-bio.QM
null
The mechanical properties of DNA play a critical role in many biological functions. For example, DNA packing in viruses involves confining the viral genome in a volume (the viral capsid) with dimensions that are comparable to the DNA persistence length. Similarly, eukaryotic DNA is packed in DNA-protein complexes (nucleosomes) in which DNA is tightly bent around protein spools. DNA is also tightly bent by many proteins that regulate transcription, resulting in a variation in gene expression that is amenable to quantitative analysis. In these cases, DNA loops are formed with lengths that are comparable to or smaller than the DNA persistence length. The aim of this review is to describe the physical forces associated with tightly bent DNA in all of these settings and to explore the biological consequences of such bending, as increasingly accessible by single-molecule techniques.
[ { "created": "Wed, 1 Nov 2006 22:43:34 GMT", "version": "v1" } ]
2007-05-23
[ [ "Garcia", "Hernan G.", "" ], [ "Grayson", "Paul", "" ], [ "Han", "Lin", "" ], [ "Inamdar", "Mandar", "" ], [ "Kondev", "Jane", "" ], [ "Nelson", "Philip C.", "" ], [ "Phillips", "Rob", "" ], [ "Wido...
The mechanical properties of DNA play a critical role in many biological functions. For example, DNA packing in viruses involves confining the viral genome in a volume (the viral capsid) with dimensions that are comparable to the DNA persistence length. Similarly, eukaryotic DNA is packed in DNA-protein complexes (nucleosomes) in which DNA is tightly bent around protein spools. DNA is also tightly bent by many proteins that regulate transcription, resulting in a variation in gene expression that is amenable to quantitative analysis. In these cases, DNA loops are formed with lengths that are comparable to or smaller than the DNA persistence length. The aim of this review is to describe the physical forces associated with tightly bent DNA in all of these settings and to explore the biological consequences of such bending, as increasingly accessible by single-molecule techniques.
1610.05512
Gianni D'Angelo
Gianni D'Angelo, Salvatore Rampone
Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications
14 pages, BMC Bioinformatics 2014, 15(Suppl 5):S2; http://www.biomedcentral.com/1471-2105/15/S5/S2
BMC Bioinformatics201415(Suppl 5):S2, 6 May 2014
10.1186/1471-2105-15-S5-S2
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The huge quantity of data produced in Biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), that form a probability distribution, allowing the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and the execution time required by the running are of O(n^3) and O(n^5) order, respectively, and so the algorithm is unaffordable for huge data sets.
[ { "created": "Tue, 18 Oct 2016 09:54:14 GMT", "version": "v1" } ]
2016-10-19
[ [ "D'Angelo", "Gianni", "" ], [ "Rampone", "Salvatore", "" ] ]
Background: The huge quantity of data produced in Biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), that form a probability distribution, allowing the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and the execution time required by the running are of O(n^3) and O(n^5) order, respectively, and so the algorithm is unaffordable for huge data sets.
q-bio/0511021
Eytan Domany
Joseph Lotem, Dvir Netanely, Eytan Domany and Leo Sachs
Human cancers over express genes that are specific to a variety of normal human tissues
To appear in PNAS
null
10.1073/pnas.0509360102
null
q-bio.TO q-bio.QM
null
We have analyzed gene expression data from 3 different kinds of samples: normal human tissues, human cancer cell lines and leukemic cells from lymphoid and myeloid leukemia pediatric patients. We have searched for genes that are over expressed in human cancer and also show specific patterns of tissue-dependent expression in normal tissues. Using the expression data of the normal tissues we identified 4346 genes with a high variability of expression, and clustered these genes according to their relative expression level. Of 91 stable clusters obtained, 24 clusters included genes preferentially expressed either only in hematopoietic tissues or in hematopoietic and 1-2 other tissues; 28 clusters included genes preferentially expressed in various non-hematopoietic tissues such as neuronal, testis, liver, kidney, muscle, lung, pancreas and placenta. Analysis of the expression levels of these 2 groups of genes in the human cancer cell lines and leukemias, identified genes that were highly expressed in cancer cells but not in their normal counterparts, and were thus over expressed in the cancers. The different cancer cell lines and leukemias varied in the number and identity of these over expressed genes. The results indicate that many genes that are over expressed in human cancer cells are specific to a variety of normal tissues, including normal tissues other than those from which the cancer originated. It is suggested that this general property of cancer cells plays a major role in determining the behavior of the cancers, including their metastatic potential.
[ { "created": "Tue, 15 Nov 2005 08:19:58 GMT", "version": "v1" } ]
2009-11-11
[ [ "Lotem", "Joseph", "" ], [ "Netanely", "Dvir", "" ], [ "Domany", "Eytan", "" ], [ "Sachs", "Leo", "" ] ]
We have analyzed gene expression data from 3 different kinds of samples: normal human tissues, human cancer cell lines and leukemic cells from lymphoid and myeloid leukemia pediatric patients. We have searched for genes that are over expressed in human cancer and also show specific patterns of tissue-dependent expression in normal tissues. Using the expression data of the normal tissues we identified 4346 genes with a high variability of expression, and clustered these genes according to their relative expression level. Of 91 stable clusters obtained, 24 clusters included genes preferentially expressed either only in hematopoietic tissues or in hematopoietic and 1-2 other tissues; 28 clusters included genes preferentially expressed in various non-hematopoietic tissues such as neuronal, testis, liver, kidney, muscle, lung, pancreas and placenta. Analysis of the expression levels of these 2 groups of genes in the human cancer cell lines and leukemias, identified genes that were highly expressed in cancer cells but not in their normal counterparts, and were thus over expressed in the cancers. The different cancer cell lines and leukemias varied in the number and identity of these over expressed genes. The results indicate that many genes that are over expressed in human cancer cells are specific to a variety of normal tissues, including normal tissues other than those from which the cancer originated. It is suggested that this general property of cancer cells plays a major role in determining the behavior of the cancers, including their metastatic potential.
2004.10642
Annika Hagemann
Annika Hagemann, Jens Wilting, Bita Samimizad, Florian Mormann, Viola Priesemann
Assessing criticality in pre-seizure single-neuron activity of human epileptic cortex
19 pages, 8 Figures
PLOS Computational Biology, 17(3), e1008773 (2021)
10.1371/journal.pcbi.1008773
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epileptic seizures are characterized by abnormal and excessive neural activity, where cortical network dynamics seem to become unstable. However, most of the time, during seizure-free periods, the cortex of epilepsy patients shows perfectly stable dynamics. This raises the question of how recurring instability can arise in light of this stable default state. In this work, we examine two potential scenarios of seizure generation: (i) epileptic cortical areas might generally operate closer to instability, which would make epilepsy patients generally more susceptible to seizures, or (ii) epileptic cortical areas might drift systematically towards instability before seizure onset. We analyzed single-unit spike recordings from both the epileptogenic (focal) and the nonfocal cortical hemispheres of 20 epilepsy patients. We quantified the distance to instability in the framework of criticality, using a novel estimator, which enables an unbiased inference from a small set of recorded neurons. Surprisingly, we found no evidence for either scenario: Neither did focal areas generally operate closer to instability, nor were seizures preceded by a drift towards instability. In fact, our results from both pre-seizure and seizure-free intervals suggest that despite epilepsy, human cortex operates in the stable, slightly subcritical regime, just like the cortex of other healthy mammals.
[ { "created": "Wed, 22 Apr 2020 15:38:08 GMT", "version": "v1" }, { "created": "Mon, 5 Apr 2021 14:53:58 GMT", "version": "v2" } ]
2021-04-06
[ [ "Hagemann", "Annika", "" ], [ "Wilting", "Jens", "" ], [ "Samimizad", "Bita", "" ], [ "Mormann", "Florian", "" ], [ "Priesemann", "Viola", "" ] ]
Epileptic seizures are characterized by abnormal and excessive neural activity, where cortical network dynamics seem to become unstable. However, most of the time, during seizure-free periods, the cortex of epilepsy patients shows perfectly stable dynamics. This raises the question of how recurring instability can arise in light of this stable default state. In this work, we examine two potential scenarios of seizure generation: (i) epileptic cortical areas might generally operate closer to instability, which would make epilepsy patients generally more susceptible to seizures, or (ii) epileptic cortical areas might drift systematically towards instability before seizure onset. We analyzed single-unit spike recordings from both the epileptogenic (focal) and the nonfocal cortical hemispheres of 20 epilepsy patients. We quantified the distance to instability in the framework of criticality, using a novel estimator, which enables an unbiased inference from a small set of recorded neurons. Surprisingly, we found no evidence for either scenario: Neither did focal areas generally operate closer to instability, nor were seizures preceded by a drift towards instability. In fact, our results from both pre-seizure and seizure-free intervals suggest that despite epilepsy, human cortex operates in the stable, slightly subcritical regime, just like the cortex of other healthy mammals.
2312.03016
Shuo Zhang
Shuo Zhang, Lei Xie
Protein Language Model-Powered 3D Ligand Binding Site Prediction from Protein Sequence
Accepted by the AI for Science (AI4Science) Workshop and the New Frontiers of AI for Drug Discovery and Development (AI4D3) Workshop at NeurIPS 2023
null
null
null
q-bio.QM cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prediction of ligand binding sites of proteins is a fundamental and important task for understanding the function of proteins and screening potential drugs. Most existing methods require experimentally determined protein holo-structures as input. However, such structures can be unavailable on novel or less-studied proteins. To tackle this limitation, we propose LaMPSite, which only takes protein sequences and ligand molecular graphs as input for ligand binding site predictions. The protein sequences are used to retrieve residue-level embeddings and contact maps from the pre-trained ESM-2 protein language model. The ligand molecular graphs are fed into a graph neural network to compute atom-level embeddings. Then we compute and update the protein-ligand interaction embedding based on the protein residue-level embeddings and ligand atom-level embeddings, and the geometric constraints in the inferred protein contact map and ligand distance map. A final pooling on protein-ligand interaction embedding would indicate which residues belong to the binding sites. Without any 3D coordinate information of proteins, our proposed model achieves competitive performance compared to baseline methods that require 3D protein structures when predicting binding sites. Given that less than 50% of proteins have reliable structure information in the current stage, LaMPSite will provide new opportunities for drug discovery.
[ { "created": "Tue, 5 Dec 2023 01:47:38 GMT", "version": "v1" } ]
2023-12-07
[ [ "Zhang", "Shuo", "" ], [ "Xie", "Lei", "" ] ]
Prediction of ligand binding sites of proteins is a fundamental and important task for understanding the function of proteins and screening potential drugs. Most existing methods require experimentally determined protein holo-structures as input. However, such structures can be unavailable on novel or less-studied proteins. To tackle this limitation, we propose LaMPSite, which only takes protein sequences and ligand molecular graphs as input for ligand binding site predictions. The protein sequences are used to retrieve residue-level embeddings and contact maps from the pre-trained ESM-2 protein language model. The ligand molecular graphs are fed into a graph neural network to compute atom-level embeddings. Then we compute and update the protein-ligand interaction embedding based on the protein residue-level embeddings and ligand atom-level embeddings, and the geometric constraints in the inferred protein contact map and ligand distance map. A final pooling on protein-ligand interaction embedding would indicate which residues belong to the binding sites. Without any 3D coordinate information of proteins, our proposed model achieves competitive performance compared to baseline methods that require 3D protein structures when predicting binding sites. Given that less than 50% of proteins have reliable structure information in the current stage, LaMPSite will provide new opportunities for drug discovery.
1607.01452
Michael Hagan
Michael F. Hagan and Roya Zandi
Recent advances in coarse-grained modeling of virus assembly
9 pages, 3 figures
Curr Opin Virol, 18, 36-43 (2016)
10.1016/j.coviro.2016.02.012
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many virus families, tens to thousands of proteins assemble spontaneously into a capsid (protein shell) while packaging the genomic nucleic acid. This review summarizes recent advances in computational modeling of these dynamical processes. We present an overview of recent technological and algorithmic developments, which are enabling simulations to describe the large ranges of length- and time-scales relevant to assembly, under conditions more closely matched to experiments than in earlier work. We then describe two examples in which computational modeling has recently provided an important complement to experiments.
[ { "created": "Wed, 6 Jul 2016 01:13:37 GMT", "version": "v1" } ]
2016-07-07
[ [ "Hagan", "Michael F.", "" ], [ "Zandi", "Roya", "" ] ]
In many virus families, tens to thousands of proteins assemble spontaneously into a capsid (protein shell) while packaging the genomic nucleic acid. This review summarizes recent advances in computational modeling of these dynamical processes. We present an overview of recent technological and algorithmic developments, which are enabling simulations to describe the large ranges of length- and time-scales relevant to assembly, under conditions more closely matched to experiments than in earlier work. We then describe two examples in which computational modeling has recently provided an important complement to experiments.
1708.01242
Marcus de Aguiar
Marcus A. M. de Aguiar, Erica A. Newman, Mathias M. Pires, Justin D. Yeakel, David H. Hembry, Laura Burkle, Dominique Gravel, Paulo R. Guimaraes Jr, Jimmy O'Donnell, Timothee Poisot, Marie-Josee Fortin
Revealing biases in the sampling of ecological interaction networks
35 pages, 4 figures
PeerJ 7:e7566 2019
10.7717/peerj.7566
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structure of ecological interactions is commonly understood through analyses of interaction networks. However, these analyses may be sensitive to sampling biases in both the interactors (the nodes of the network) and interactions (the links between nodes). These issues may affect the accuracy of empirically constructed ecological networks. We explore the properties of sampled ecological networks by simulating large-scale ecological networks with predetermined topologies, and sampling them with different mathematical procedures. Several types of modular networks were generated, intended to represent a wide variety of communities that vary in size and types of ecological interactions. We sampled these networks with different sampling designs that may be encountered in field experiments. The observed networks generated by each sampling process were analyzed with respect to number and size of components. We show that the sampling effort needed to estimate underlying network properties depends both on the sampling design and on network topology. Networks with random or scale-free modules require more complete sampling compared to networks whose modules are nested or bipartite. Overall, the structure of nested modules was the easiest to detect, regardless of sampling design. Sampling according to species degree was consistently found to be the most accurate strategy to estimate network structure. Conversely, sampling according to module results in an accurate view of certain modules, but fails to provide a global picture of the underlying network. We recommend that these findings are incorporated into the design of projects aiming to characterize large networks of species interactions in the field, to reduce sampling biases. The software scripts developed to construct and sample networks are provided for further explorations of network structure and comparisons to real interaction data.
[ { "created": "Thu, 3 Aug 2017 17:36:09 GMT", "version": "v1" } ]
2020-03-02
[ [ "de Aguiar", "Marcus A. M.", "" ], [ "Newman", "Erica A.", "" ], [ "Pires", "Mathias M.", "" ], [ "Yeakel", "Justin D.", "" ], [ "Hembry", "David H.", "" ], [ "Burkle", "Laura", "" ], [ "Gravel", "Dominique", ...
The structure of ecological interactions is commonly understood through analyses of interaction networks. However, these analyses may be sensitive to sampling biases in both the interactors (the nodes of the network) and interactions (the links between nodes). These issues may affect the accuracy of empirically constructed ecological networks. We explore the properties of sampled ecological networks by simulating large-scale ecological networks with predetermined topologies, and sampling them with different mathematical procedures. Several types of modular networks were generated, intended to represent a wide variety of communities that vary in size and types of ecological interactions. We sampled these networks with different sampling designs that may be encountered in field experiments. The observed networks generated by each sampling process were analyzed with respect to number and size of components. We show that the sampling effort needed to estimate underlying network properties depends both on the sampling design and on network topology. Networks with random or scale-free modules require more complete sampling compared to networks whose modules are nested or bipartite. Overall, the structure of nested modules was the easiest to detect, regardless of sampling design. Sampling according to species degree was consistently found to be the most accurate strategy to estimate network structure. Conversely, sampling according to module results in an accurate view of certain modules, but fails to provide a global picture of the underlying network. We recommend that these findings are incorporated into the design of projects aiming to characterize large networks of species interactions in the field, to reduce sampling biases. The software scripts developed to construct and sample networks are provided for further explorations of network structure and comparisons to real interaction data.
1206.1865
Tobias Reichenbach
Tobias Reichenbach and A. J. Hudspeth
Frequency decoding of periodically timed action potentials through distinct activity patterns in a random neural network
16 pages, 5 figures, and supplementary information
null
10.1088/1367-2630/14/11/113022
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Frequency discrimination is a fundamental task of the auditory system. The mammalian inner ear, or cochlea, provides a place code in which different frequencies are detected at different spatial locations. However, a temporal code based on spike timing is also available: action potentials evoked in an auditory-nerve fiber by a low-frequency tone occur at a preferred phase of the stimulus-they exhibit phase locking-and thus provide temporal information about the tone's frequency. In an accompanying psychoacoustic study, and in agreement with previous experiments, we show that humans employ this temporal information for discrimination of low frequencies. How might such temporal information be read out in the brain? Here we demonstrate that recurrent random neural networks in which connections between neurons introduce characteristic time delays, and in which neurons require temporally coinciding inputs for spike initiation, can perform sharp frequency discrimination when stimulated with phase-locked inputs. Although the frequency resolution achieved by such networks is limited by the noise in phase locking, the resolution for realistic values reaches the tiny frequency difference of 0.2% that has been measured in humans.
[ { "created": "Fri, 8 Jun 2012 20:09:44 GMT", "version": "v1" } ]
2015-06-05
[ [ "Reichenbach", "Tobias", "" ], [ "Hudspeth", "A. J.", "" ] ]
Frequency discrimination is a fundamental task of the auditory system. The mammalian inner ear, or cochlea, provides a place code in which different frequencies are detected at different spatial locations. However, a temporal code based on spike timing is also available: action potentials evoked in an auditory-nerve fiber by a low-frequency tone occur at a preferred phase of the stimulus-they exhibit phase locking-and thus provide temporal information about the tone's frequency. In an accompanying psychoacoustic study, and in agreement with previous experiments, we show that humans employ this temporal information for discrimination of low frequencies. How might such temporal information be read out in the brain? Here we demonstrate that recurrent random neural networks in which connections between neurons introduce characteristic time delays, and in which neurons require temporally coinciding inputs for spike initiation, can perform sharp frequency discrimination when stimulated with phase-locked inputs. Although the frequency resolution achieved by such networks is limited by the noise in phase locking, the resolution for realistic values reaches the tiny frequency difference of 0.2% that has been measured in humans.
2006.03175
Philip Gressman
Philip T. Gressman and Jennifer R. Peck
Simulating COVID-19 in a University Environment
30 pages, 9 figures; fixed minor typos and rephrased some unclear points
Mathematical Biosciences Volume 328, October 2020, 108436
10.1016/j.mbs.2020.108436
null
q-bio.PE cs.MA cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Residential colleges and universities face unique challenges in providing in-person instruction during the COVID-19 pandemic. Administrators are currently faced with decisions about whether to open during the pandemic and what modifications of their normal operations might be necessary to protect students, faculty and staff. There is little information, however, on what measures are likely to be most effective and whether existing interventions could contain the spread of an outbreak on campus. We develop a full-scale stochastic agent-based model to determine whether in-person instruction could safely continue during the pandemic and evaluate the necessity of various interventions. Simulation results indicate that large scale randomized testing, contact-tracing, and quarantining are important components of a successful strategy for containing campus outbreaks. High test specificity is critical for keeping the size of the quarantine population manageable. Moving the largest classes online is also crucial for controlling both the size of outbreaks and the number of students in quarantine. Increased residential exposure can significantly impact the size of an outbreak, but it is likely more important to control non-residential social exposure among students. Finally, necessarily high quarantine rates even in controlled outbreaks imply significant absenteeism, indicating a need to plan for remote instruction of quarantined students.
[ { "created": "Fri, 5 Jun 2020 00:04:03 GMT", "version": "v1" }, { "created": "Sun, 28 Jun 2020 16:46:02 GMT", "version": "v2" } ]
2020-12-22
[ [ "Gressman", "Philip T.", "" ], [ "Peck", "Jennifer R.", "" ] ]
Residential colleges and universities face unique challenges in providing in-person instruction during the COVID-19 pandemic. Administrators are currently faced with decisions about whether to open during the pandemic and what modifications of their normal operations might be necessary to protect students, faculty and staff. There is little information, however, on what measures are likely to be most effective and whether existing interventions could contain the spread of an outbreak on campus. We develop a full-scale stochastic agent-based model to determine whether in-person instruction could safely continue during the pandemic and evaluate the necessity of various interventions. Simulation results indicate that large scale randomized testing, contact-tracing, and quarantining are important components of a successful strategy for containing campus outbreaks. High test specificity is critical for keeping the size of the quarantine population manageable. Moving the largest classes online is also crucial for controlling both the size of outbreaks and the number of students in quarantine. Increased residential exposure can significantly impact the size of an outbreak, but it is likely more important to control non-residential social exposure among students. Finally, necessarily high quarantine rates even in controlled outbreaks imply significant absenteeism, indicating a need to plan for remote instruction of quarantined students.
1612.02243
Ruggero G. Bettinardi
Ruggero G. Bettinardi, Gustavo Deco, Vasilis M. Karlaftis, Timothy J. Van Hartevelt, Henrique M. Fernandes, Zoe Kourtzi, Morten L. Kringelbach and Gorka Zamora-L\'opez
How structure sculpts function: unveiling the contribution of anatomical connectivity to the brain's spontaneous correlation structure
null
Chaos 27, 047409 (2017)
10.1063/1.4980099
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Intrinsic brain activity is characterized by highly structured co-activations between different regions, whose origin is still under debate. In this paper, we address the question of whether it is possible to unveil how the underlying anatomical connectivity shapes the brain's spontaneous correlation structure. We start from the assumption that in order for two nodes to exhibit large covariation, they must be exposed to similar input patterns from the entire network. We then acknowledge that information rarely spreads only along a unique route, but rather travels along all possible paths. In real networks the strength of local perturbations tends to decay as they propagate away from the sources, leading to a progressive attenuation of the original information content and, thus, of their influence. We use these notions to derive a novel analytical measure, $\mathcal{T}$, which quantifies the similarity of the whole-network input patterns arriving at any two nodes only due to the underlying topology, in what is a generalization of the matching index. We show that this measure of topological similarity can indeed be used to predict the contribution of network topology to the expected correlation structure, thus unveiling the mechanism behind the tight but elusive relationship between structure and function in complex networks. Finally, we use this measure to investigate brain connectivity, showing that information about the topology defined by the complex fabric of brain axonal pathways specifies to a large extent the time-average functional connectivity observed at rest.
[ { "created": "Wed, 7 Dec 2016 13:47:58 GMT", "version": "v1" } ]
2018-11-01
[ [ "Bettinardi", "Ruggero G.", "" ], [ "Deco", "Gustavo", "" ], [ "Karlaftis", "Vasilis M.", "" ], [ "Van Hartevelt", "Timothy J.", "" ], [ "Fernandes", "Henrique M.", "" ], [ "Kourtzi", "Zoe", "" ], [ "Kringelbach", ...
Intrinsic brain activity is characterized by highly structured co-activations between different regions, whose origin is still under debate. In this paper, we address the question of whether it is possible to unveil how the underlying anatomical connectivity shapes the brain's spontaneous correlation structure. We start from the assumption that in order for two nodes to exhibit large covariation, they must be exposed to similar input patterns from the entire network. We then acknowledge that information rarely spreads only along a unique route, but rather travels along all possible paths. In real networks the strength of local perturbations tends to decay as they propagate away from the sources, leading to a progressive attenuation of the original information content and, thus, of their influence. We use these notions to derive a novel analytical measure, $\mathcal{T}$, which quantifies the similarity of the whole-network input patterns arriving at any two nodes only due to the underlying topology, in what is a generalization of the matching index. We show that this measure of topological similarity can indeed be used to predict the contribution of network topology to the expected correlation structure, thus unveiling the mechanism behind the tight but elusive relationship between structure and function in complex networks. Finally, we use this measure to investigate brain connectivity, showing that information about the topology defined by the complex fabric of brain axonal pathways specifies to a large extent the time-average functional connectivity observed at rest.
1603.04687
Kanaka Rajan PhD
Kanaka Rajan, Christopher D Harvey, David W Tank
Recurrent Network Models Of Sequence Generation And Memory
60 pages, 6 figures
Neuron 90, 1-15, April 6, 2016 Elsevier Inc, (2016)
10.1016/j.neuron.2016.02.009
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.
[ { "created": "Mon, 14 Mar 2016 15:00:12 GMT", "version": "v1" } ]
2016-03-16
[ [ "Rajan", "Kanaka", "" ], [ "Harvey", "Christopher D", "" ], [ "Tank", "David W", "" ] ]
Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.
q-bio/0408013
Guido Tiana
R. A. Broglia, G. Tiana, D. Provasi, F. Simona, L. Sutto, F. Vasile and M. Zanotti
Design of a folding inhibitor of the HIV-1 Protease
null
null
null
null
q-bio.BM
null
Since HIV-1-PR is an essential enzyme in the viral life cycle, its inhibition can control AIDS. Because the folding of single-domain proteins like HIV-1-PR is controlled by local elementary structures (LES, folding units stabilized by strongly interacting, highly conserved amino acids) which have evolved over myriads of generations to recognize and strongly attract each other so as to make the protein fold fast, we suggest a novel type of HIV-1-PR inhibitor which interferes with the folding of the protein: short peptides displaying the same amino acid sequence as that of the LES. Theoretical and experimental evidence for the specificity and efficiency of such inhibitors is presented.
[ { "created": "Mon, 16 Aug 2004 15:36:44 GMT", "version": "v1" } ]
2007-05-23
[ [ "Broglia", "R. A.", "" ], [ "Tiana", "G.", "" ], [ "Provasi", "D.", "" ], [ "Simona", "F.", "" ], [ "Sutto", "L.", "" ], [ "Vasile", "F.", "" ], [ "Zanotti", "M.", "" ] ]
Since HIV-1-PR is an essential enzyme in the viral life cycle, its inhibition can control AIDS. Because the folding of single-domain proteins like HIV-1-PR is controlled by local elementary structures (LES, folding units stabilized by strongly interacting, highly conserved amino acids) which have evolved over myriads of generations to recognize and strongly attract each other so as to make the protein fold fast, we suggest a novel type of HIV-1-PR inhibitor which interferes with the folding of the protein: short peptides displaying the same amino acid sequence as that of the LES. Theoretical and experimental evidence for the specificity and efficiency of such inhibitors is presented.
1502.02692
Kristina Crona
Kristina Crona
Epistasis and Entropy
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epistasis is a key concept in the theory of adaptation. Indicators of epistasis are of interest for large systems where systematic fitness measurements may not be possible. Some recent approaches depend on information theory. We show that considering shared entropy for pairs of loci can be misleading. The reason is that shared entropy does not imply epistasis for the pair. This observation holds true even in the absence of higher order epistasis. We discuss a refined approach for identifying pairwise interactions using entropy.
[ { "created": "Sun, 1 Feb 2015 16:27:32 GMT", "version": "v1" }, { "created": "Wed, 11 Feb 2015 16:30:56 GMT", "version": "v2" }, { "created": "Tue, 3 Mar 2015 20:24:25 GMT", "version": "v3" } ]
2015-03-04
[ [ "Crona", "Kristina", "" ] ]
Epistasis is a key concept in the theory of adaptation. Indicators of epistasis are of interest for large systems where systematic fitness measurements may not be possible. Some recent approaches depend on information theory. We show that considering shared entropy for pairs of loci can be misleading. The reason is that shared entropy does not imply epistasis for the pair. This observation holds true even in the absence of higher order epistasis. We discuss a refined approach for identifying pairwise interactions using entropy.
2206.05307
Sage Malingen
Sage Malingen and Padmini Rangamani
Modeling membrane curvature generation using mechanics and machine learning
null
null
10.1101/2022.06.06.495017
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The deformation of cellular membranes regulates trafficking processes, such as exocytosis and endocytosis. Classically, the Helfrich continuum model is used to characterize the forces and mechanical parameters that cells tune to accomplish membrane shape changes. While this classical model effectively captures curvature generation, one of the core challenges in using it to approximate a biological process is selecting a set of mechanical parameters (including bending modulus and membrane tension) from a large set of reasonable values. We used the Helfrich model to generate a large synthetic dataset from a random sampling of realistic mechanical parameters and used this dataset to train machine learning models. These models produced promising results, accurately classifying model behavior and predicting membrane shape from mechanical parameters. We also note emerging methods in machine learning that can leverage the physical insight of the Helfrich model to improve performance and draw greater insight into how cells control membrane shape change.
[ { "created": "Fri, 10 Jun 2022 18:08:12 GMT", "version": "v1" } ]
2022-06-14
[ [ "Malingen", "Sage", "" ], [ "Rangamani", "Padmini", "" ] ]
The deformation of cellular membranes regulates trafficking processes, such as exocytosis and endocytosis. Classically, the Helfrich continuum model is used to characterize the forces and mechanical parameters that cells tune to accomplish membrane shape changes. While this classical model effectively captures curvature generation, one of the core challenges in using it to approximate a biological process is selecting a set of mechanical parameters (including bending modulus and membrane tension) from a large set of reasonable values. We used the Helfrich model to generate a large synthetic dataset from a random sampling of realistic mechanical parameters and used this dataset to train machine learning models. These models produced promising results, accurately classifying model behavior and predicting membrane shape from mechanical parameters. We also note emerging methods in machine learning that can leverage the physical insight of the Helfrich model to improve performance and draw greater insight into how cells control membrane shape change.
2207.02328
Carlo Amodeo
Carlo Amodeo, Igor Fortel, Olusola Ajilore, Liang Zhan, Alex Leow, Theja Tulabandhula
Unified Embeddings of Structural and Functional Connectome via a Function-Constrained Structural Graph Variational Auto-Encoder
null
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Graph theoretical analyses have become standard tools in modeling functional and anatomical connectivity in the brain. With the advent of connectomics, the primary graphs or networks of interest are structural connectome (derived from DTI tractography) and functional connectome (derived from resting-state fMRI). However, most published connectome studies have focused on either structural or functional connectome, yet complementary information between them, when available in the same dataset, can be jointly leveraged to improve our understanding of the brain. To this end, we propose a function-constrained structural graph variational autoencoder (FCS-GVAE) capable of incorporating information from both functional and structural connectome in an unsupervised fashion. This leads to a joint low-dimensional embedding that establishes a unified spatial coordinate system for comparing across different subjects. We evaluate our approach using the publicly available OASIS-3 Alzheimer's disease (AD) dataset and show that a variational formulation is necessary to optimally encode functional brain dynamics. Further, the proposed joint embedding approach can more accurately distinguish different patient sub-populations than approaches that do not use complementary connectome information.
[ { "created": "Tue, 5 Jul 2022 21:39:13 GMT", "version": "v1" } ]
2022-07-07
[ [ "Amodeo", "Carlo", "" ], [ "Fortel", "Igor", "" ], [ "Ajilore", "Olusola", "" ], [ "Zhan", "Liang", "" ], [ "Leow", "Alex", "" ], [ "Tulabandhula", "Theja", "" ] ]
Graph theoretical analyses have become standard tools in modeling functional and anatomical connectivity in the brain. With the advent of connectomics, the primary graphs or networks of interest are structural connectome (derived from DTI tractography) and functional connectome (derived from resting-state fMRI). However, most published connectome studies have focused on either structural or functional connectome, yet complementary information between them, when available in the same dataset, can be jointly leveraged to improve our understanding of the brain. To this end, we propose a function-constrained structural graph variational autoencoder (FCS-GVAE) capable of incorporating information from both functional and structural connectome in an unsupervised fashion. This leads to a joint low-dimensional embedding that establishes a unified spatial coordinate system for comparing across different subjects. We evaluate our approach using the publicly available OASIS-3 Alzheimer's disease (AD) dataset and show that a variational formulation is necessary to optimally encode functional brain dynamics. Further, the proposed joint embedding approach can more accurately distinguish different patient sub-populations than approaches that do not use complementary connectome information.
1309.0408
Troy Hernandez PhD
Troy Hernandez and Jie Yang
Descriptive Statistics of the Genome: Phylogenetic Classification of Viruses
14 pages, 4 tables, 1 figure
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The typical process for classifying and submitting a newly sequenced virus to the NCBI database involves two steps. First, a BLAST search is performed to determine likely family candidates. That is followed by checking the candidate families with the Pairwise Sequence Alignment tool for similar species. The submitter's judgement is then used to determine the most likely species classification. The aim of this paper is to show that this process can be automated into a fast, accurate, one-step process using the proposed alignment-free method and properly implemented machine learning techniques. We present a new family of alignment-free vectorizations of the genome, the generalized vector, that maintains the speed of existing alignment-free methods while outperforming all available methods. This new alignment-free vectorization uses the frequency of genomic words (k-mers), as is done in the composition vector, and incorporates descriptive statistics of those k-mers' positional information, as inspired by the natural vector. We analyze 5 different characterizations of genome similarity using $k$-nearest neighbor classification, and evaluate these on two collections of viruses totaling over 10,000 viruses. We show that our proposed method performs better than, or as well as, other methods at every level of the phylogenetic hierarchy. The data and R code are available upon request.
[ { "created": "Mon, 2 Sep 2013 13:56:40 GMT", "version": "v1" }, { "created": "Sun, 20 Mar 2016 17:19:36 GMT", "version": "v2" } ]
2016-03-22
[ [ "Hernandez", "Troy", "" ], [ "Yang", "Jie", "" ] ]
The typical process for classifying and submitting a newly sequenced virus to the NCBI database involves two steps. First, a BLAST search is performed to determine likely family candidates. That is followed by checking the candidate families with the Pairwise Sequence Alignment tool for similar species. The submitter's judgement is then used to determine the most likely species classification. The aim of this paper is to show that this process can be automated into a fast, accurate, one-step process using the proposed alignment-free method and properly implemented machine learning techniques. We present a new family of alignment-free vectorizations of the genome, the generalized vector, that maintains the speed of existing alignment-free methods while outperforming all available methods. This new alignment-free vectorization uses the frequency of genomic words (k-mers), as is done in the composition vector, and incorporates descriptive statistics of those k-mers' positional information, as inspired by the natural vector. We analyze 5 different characterizations of genome similarity using $k$-nearest neighbor classification, and evaluate these on two collections of viruses totaling over 10,000 viruses. We show that our proposed method performs better than, or as well as, other methods at every level of the phylogenetic hierarchy. The data and R code are available upon request.
2103.16587
Valeriia Demareva
Valeriia Demareva, Elena Mukhina, Tatiana Bobro
Does Double Biofeedback Affect Functional Hemispheric Asymmetry and Activity? A Pilot Study
null
null
10.3390/sym13060937
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the current pilot study, we attempt to find out how double neurofeedback influences functional hemispheric asymmetry and activity. We examined 30 healthy participants (8 males; 22 females, mean age = 29; SD = 8). To measure functional hemispheric asymmetry and activity, we used computer laterometry in the "two-source" lead-lag dichotic paradigm. Double biofeedback included 8 minutes of EEG oscillation recording with five minutes of basic mode. During the basic mode, the current amplitude of the EEG oscillator gets transformed into feedback sounds while the current amplitude of the alpha EEG oscillator is used to modulate the intensity of light signals. Double neurofeedback did not directly influence the asymmetry itself but accelerated individual sound perception characteristics during dichotic listening in the preceding effect paradigm. Further research is needed to investigate the effect of double neurofeedback training on functional brain activity and asymmetry, taking into account participants' age, gender, and motivation.
[ { "created": "Tue, 30 Mar 2021 18:01:59 GMT", "version": "v1" }, { "created": "Fri, 17 Mar 2023 09:25:34 GMT", "version": "v2" } ]
2023-03-20
[ [ "Demareva", "Valeriia", "" ], [ "Mukhina", "Elena", "" ], [ "Bobro", "Tatiana", "" ] ]
In the current pilot study, we attempt to find out how double neurofeedback influences functional hemispheric asymmetry and activity. We examined 30 healthy participants (8 males; 22 females, mean age = 29; SD = 8). To measure functional hemispheric asymmetry and activity, we used computer laterometry in the "two-source" lead-lag dichotic paradigm. Double biofeedback included 8 minutes of EEG oscillation recording with five minutes of basic mode. During the basic mode, the current amplitude of the EEG oscillator gets transformed into feedback sounds while the current amplitude of the alpha EEG oscillator is used to modulate the intensity of light signals. Double neurofeedback did not directly influence the asymmetry itself but accelerated individual sound perception characteristics during dichotic listening in the preceding effect paradigm. Further research is needed to investigate the effect of double neurofeedback training on functional brain activity and asymmetry, taking into account participants' age, gender, and motivation.
q-bio/0404035
Jaewook Joo
Jaewook Joo, Joel L. Lebowitz
Pair approximation of the stochastic susceptible-infected-recovered-susceptible epidemic model on the hypercubic lattice
null
Phys. Rev. E, 70 (2004) 036114
10.1103/PhysRevE.70.036114
null
q-bio.PE cond-mat.stat-mech
null
We investigate the time-evolution and steady states of the stochastic susceptible-infected-recovered-susceptible (SIRS) epidemic model on one- and two-dimensional lattices. We compare the behavior of this system, obtained from computer simulations, with those obtained from the mean-field approximation (MFA) and pair approximation (PA). The former (latter) approximates higher-order moments in terms of first- (second-) order ones. We find that the PA gives consistently better results than the MFA. In one dimension the improvement is even qualitative.
[ { "created": "Mon, 26 Apr 2004 18:03:44 GMT", "version": "v1" } ]
2009-11-10
[ [ "Joo", "Jaewook", "" ], [ "Lebowitz", "Joel L.", "" ] ]
We investigate the time-evolution and steady states of the stochastic susceptible-infected-recovered-susceptible (SIRS) epidemic model on one- and two-dimensional lattices. We compare the behavior of this system, obtained from computer simulations, with those obtained from the mean-field approximation (MFA) and pair approximation (PA). The former (latter) approximates higher-order moments in terms of first- (second-) order ones. We find that the PA gives consistently better results than the MFA. In one dimension the improvement is even qualitative.
1112.2994
Alexander Peyser
Alexander Peyser and Wolfgang Nonner
Electrostatic determinants of voltage sensitivity in ion channels: Simulations of sliding-helix mechanisms
null
null
null
null
q-bio.BM physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electrical signaling via voltage-gated ion channels depends upon the function of the voltage sensor (VS), identified with the S1-S4 domain of voltage-gated K channels. Here we investigate some physical aspects of the sliding-helix model of the VS using simulations based on VS charges, linear dielectrics and whole-body motion. Model electrostatics in voltage-clamped boundary conditions are solved using a boundary element method. The statistical mechanical consequences of the electrostatic configurational energy are computed to gain insight into the sliding-helix mechanism and to predict experimentally measured ensemble properties such as gating charge displaced by an applied voltage. Those consequences and ensemble properties are investigated for variations of: S4 configuration ({\alpha}- and 3(10)-helical), intrinsic counter-charges, protein polarizability, geometry of the gating canal, screening of S4 charges by the baths, and protein charges located at the bath interfaces. We find that the sliding helix VS has an inherent electrostatic stability and its function as a VS is robust in the parameter space explored. Maximal charge displacement is limited by geometry, specifically the range of movement where S4 charges and counter-charges overlap in the region of weak dielectric. The steepness of charge rearrangement in the physiological voltage range is sensitive to the landscape of electrostatic energy: energy differences of <2 kT have substantial consequences. Such variations of energy landscape are produced by all variations of model features tested. The amount of free energy per unit voltage that a sliding-helix VS can deliver to other parts of the channel (conductance voltage sensitivity) is limited by both the maximal displaced charge and the steepness of charge redistribution by voltage (sensor voltage sensitivity).
[ { "created": "Tue, 13 Dec 2011 18:27:15 GMT", "version": "v1" } ]
2015-03-13
[ [ "Peyser", "Alexander", "" ], [ "Nonner", "Wolfgang", "" ] ]
Electrical signaling via voltage-gated ion channels depends upon the function of the voltage sensor (VS), identified with the S1-S4 domain of voltage-gated K channels. Here we investigate some physical aspects of the sliding-helix model of the VS using simulations based on VS charges, linear dielectrics and whole-body motion. Model electrostatics in voltage-clamped boundary conditions are solved using a boundary element method. The statistical mechanical consequences of the electrostatic configurational energy are computed to gain insight into the sliding-helix mechanism and to predict experimentally measured ensemble properties such as gating charge displaced by an applied voltage. Those consequences and ensemble properties are investigated for variations of: S4 configuration ({\alpha}- and 3(10)-helical), intrinsic counter-charges, protein polarizability, geometry of the gating canal, screening of S4 charges by the baths, and protein charges located at the bath interfaces. We find that the sliding helix VS has an inherent electrostatic stability and its function as a VS is robust in the parameter space explored. Maximal charge displacement is limited by geometry, specifically the range of movement where S4 charges and counter-charges overlap in the region of weak dielectric. The steepness of charge rearrangement in the physiological voltage range is sensitive to the landscape of electrostatic energy: energy differences of <2 kT have substantial consequences. Such variations of energy landscape are produced by all variations of model features tested. The amount of free energy per unit voltage that a sliding-helix VS can deliver to other parts of the channel (conductance voltage sensitivity) is limited by both the maximal displaced charge and the steepness of charge redistribution by voltage (sensor voltage sensitivity).
1406.2500
Istvan Kiss Z
P. Rattana, L. Berthouze, I.Z. Kiss
The impact of constrained rewiring on network structure and node dynamics
null
null
10.1103/PhysRevE.90.052806
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study an adaptive spatial network. We consider an SIS (susceptible-infected-susceptible) epidemic on the network, with a link/contact rewiring process constrained by spatial proximity. In particular, we assume that susceptible nodes break links with infected nodes independently of distance, and reconnect at random to susceptible nodes available within a given radius. By systematically manipulating this radius we investigate the impact of rewiring on the structure of the network and characteristics of the epidemic. We adopt a step-by-step approach whereby we first study the impact of rewiring on the network structure in the absence of an epidemic, then with nodes assigned a disease status but without disease dynamics, and finally running network and epidemic dynamics simultaneously. In the case of no labelling and no epidemic dynamics, we provide both analytic and semi-analytic formulas for the value of clustering achieved in the network. Our results also show that the rewiring radius and the network's initial structure have a pronounced effect on the endemic equilibrium, with increasingly large rewiring radii yielding smaller disease prevalence.
[ { "created": "Tue, 10 Jun 2014 10:40:04 GMT", "version": "v1" } ]
2016-11-25
[ [ "Rattana", "P.", "" ], [ "Berthouze", "L.", "" ], [ "Kiss", "I. Z.", "" ] ]
In this paper, we study an adaptive spatial network. We consider an SIS (susceptible-infected-susceptible) epidemic on the network, with a link/contact rewiring process constrained by spatial proximity. In particular, we assume that susceptible nodes break links with infected nodes independently of distance, and reconnect at random to susceptible nodes available within a given radius. By systematically manipulating this radius we investigate the impact of rewiring on the structure of the network and characteristics of the epidemic. We adopt a step-by-step approach whereby we first study the impact of rewiring on the network structure in the absence of an epidemic, then with nodes assigned a disease status but without disease dynamics, and finally running network and epidemic dynamics simultaneously. In the case of no labelling and no epidemic dynamics, we provide both analytic and semi-analytic formulas for the value of clustering achieved in the network. Our results also show that the rewiring radius and the network's initial structure have a pronounced effect on the endemic equilibrium, with increasingly large rewiring radii yielding smaller disease prevalence.
1203.2737
Jeremy Schofield
Jeremy Schofield, Paul Inder, and Raymond Kapral
Modeling of solvent flow effects in enzyme catalysis under physiological conditions
15 pages in double column format
null
10.1063/1.4719539
null
q-bio.BM cond-mat.soft physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A stochastic model for the dynamics of enzymatic catalysis in explicit, effective solvents under physiological conditions is presented. Analytically-computed first passage time densities of a diffusing particle in a spherical shell with absorbing boundaries are combined with densities obtained from explicit simulation to obtain the overall probability density for the total reaction cycle time of the enzymatic system. The method is used to investigate the catalytic transfer of a phosphoryl group in a phosphoglycerate kinase-ADP-bis phosphoglycerate system, one of the steps of glycolysis. The direct simulation of the enzyme-substrate binding and reaction is carried out using an elastic network model for the protein, and the solvent motions are described by multiparticle collision dynamics, which incorporates hydrodynamic flow effects. Systems where solvent-enzyme coupling occurs through explicit intermolecular interactions, as well as systems where this coupling is taken into account by including the protein and substrate in the multiparticle collision step, are investigated and compared with simulations where hydrodynamic coupling is absent. It is demonstrated that the flow of solvent particles around the enzyme facilitates the large-scale hinge motion of the enzyme with bound substrates, and has a significant impact on the shape of the probability densities and average time scales of substrate binding for substrates near the enzyme, the closure of the enzyme after binding, and the overall time of completion of the cycle.
[ { "created": "Tue, 13 Mar 2012 08:48:11 GMT", "version": "v1" } ]
2015-06-04
[ [ "Schofield", "Jeremy", "" ], [ "Inder", "Paul", "" ], [ "Kapral", "Raymond", "" ] ]
A stochastic model for the dynamics of enzymatic catalysis in explicit, effective solvents under physiological conditions is presented. Analytically-computed first passage time densities of a diffusing particle in a spherical shell with absorbing boundaries are combined with densities obtained from explicit simulation to obtain the overall probability density for the total reaction cycle time of the enzymatic system. The method is used to investigate the catalytic transfer of a phosphoryl group in a phosphoglycerate kinase-ADP-bis phosphoglycerate system, one of the steps of glycolysis. The direct simulation of the enzyme-substrate binding and reaction is carried out using an elastic network model for the protein, and the solvent motions are described by multiparticle collision dynamics, which incorporates hydrodynamic flow effects. Systems where solvent-enzyme coupling occurs through explicit intermolecular interactions, as well as systems where this coupling is taken into account by including the protein and substrate in the multiparticle collision step, are investigated and compared with simulations where hydrodynamic coupling is absent. It is demonstrated that the flow of solvent particles around the enzyme facilitates the large-scale hinge motion of the enzyme with bound substrates, and has a significant impact on the shape of the probability densities and average time scales of substrate binding for substrates near the enzyme, the closure of the enzyme after binding, and the overall time of completion of the cycle.
2110.10945
Rasmus Gr{\o}nfeldt Winther
Rasmus Gr{\o}nfeldt Winther
Lewontin (1972)
To appear in a Routledge book anthology: /Remapping Race in a Global Context/ edited by L. Lorusso and R.G. Winther (the author). 64 pages, 7 tables, 5 figures. For associated Dryad Data File see https://datadryad.org/stash/dataset/doi:10.7291/D1F68R?
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Richard C. Lewontin is arguably the most influential evolutionary biologist of the second half of the 20th century. In this chapter, I provide two windows on his influential 1972 article "The Apportionment of Human Diversity": First, I show how the fourteen publications that he cites influenced him and framed his exploration; second, I present close readings of the five sections of the article: "Introduction," "The Genes," "The Samples," "The Measure of Diversity," and "The Results." I hope to illuminate the article's basic anatomy and argumentative arc, and why it became such a historically important document. In particular, I make explicit all of the mathematics (e.g., six Shannon information measures) and the general population genetic theory underlying this mathematics (e.g., the Wahlund effect). Lewontin did not make this explicit in his article. Furthermore, in redoing all of his calculations, I find that Lewontin made calculation errors (including rounding errors or omitting diversity component values) for all the genes he analyzed except one (P), and understated the among races diversity component, according to even just his own calculations. In reproducing the original computation, I find that the values of, respectively, within populations, among populations but within races, and among races diversity apportionments shift slightly (86%, 7%, 7%); here, in this "field guide" to Lewontin (1972), as well as in Winther (2022), I discuss this change in light of the values produced in subsequent replications of Lewontin's calculation with other statistics and data sets.
[ { "created": "Thu, 21 Oct 2021 07:30:00 GMT", "version": "v1" }, { "created": "Sat, 6 Nov 2021 07:59:40 GMT", "version": "v2" } ]
2021-11-09
[ [ "Winther", "Rasmus Grønfeldt", "" ] ]
Richard C. Lewontin is arguably the most influential evolutionary biologist of the second half of the 20th century. In this chapter, I provide two windows on his influential 1972 article "The Apportionment of Human Diversity": First, I show how the fourteen publications that he cites influenced him and framed his exploration; second, I present close readings of the five sections of the article: "Introduction," "The Genes," "The Samples," "The Measure of Diversity," and "The Results." I hope to illuminate the article's basic anatomy and argumentative arc, and why it became such a historically important document. In particular, I make explicit all of the mathematics (e.g., six Shannon information measures) and the general population genetic theory underlying this mathematics (e.g., the Wahlund effect). Lewontin did not make this explicit in his article. Furthermore, in redoing all of his calculations, I find that Lewontin made calculation errors (including rounding errors or omitting diversity component values) for all the genes he analyzed except one (P), and understated the among races diversity component, according to even just his own calculations. In reproducing the original computation, I find that the values of, respectively, within populations, among populations but within races, and among races diversity apportionments shift slightly (86%, 7%, 7%); here, in this "field guide" to Lewontin (1972), as well as in Winther (2022), I discuss this change in light of the values produced in subsequent replications of Lewontin's calculation with other statistics and data sets.
1411.6330
Brandon Barker
Lin Xu, Brandon Barker, Zhenglong Gu
Dynamic epistasis for different alleles of the same gene
26 pages, 12 figures; Proc Natl Acad Sci U S A 109
null
10.1073/pnas.1121507109
null
q-bio.MN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epistasis refers to the phenomenon in which phenotypic consequences caused by mutation of one gene depend on one or more mutations at another gene. Epistasis is critical for understanding many genetic and evolutionary processes, including pathway organization, evolution of sexual reproduction, mutational load, ploidy, genomic complexity, speciation, and the origin of life. Nevertheless, the current understanding of the genome-wide distribution of epistasis is mostly inferred from interactions among one mutant type per gene, whereas how epistatic interaction partners change dynamically for different mutant alleles of the same gene is largely unknown. Here we address this issue by combining predictions from flux balance analysis and data from a recently published high-throughput experiment. Our results show that different alleles can epistatically interact with very different gene sets. Furthermore, between two random mutant alleles of the same gene, the chance for the allele with the more severe mutational consequence to develop a higher percentage of negative epistasis than the other allele is 50-70% in eukaryotic organisms, but only 20-30% in bacteria and archaea. We developed a population genetics model that predicts that the observed distribution for the sign of epistasis can speed up the process of purging deleterious mutations in eukaryotic organisms. Our results indicate that epistasis among genes can be dynamically rewired at the genome level, and call on future efforts to revisit theories that can integrate epistatic dynamics among genes in biological systems.
[ { "created": "Mon, 24 Nov 2014 02:34:24 GMT", "version": "v1" } ]
2014-11-25
[ [ "Xu", "Lin", "" ], [ "Barker", "Brandon", "" ], [ "Gu", "Zhenglong", "" ] ]
Epistasis refers to the phenomenon in which phenotypic consequences caused by mutation of one gene depend on one or more mutations at another gene. Epistasis is critical for understanding many genetic and evolutionary processes, including pathway organization, evolution of sexual reproduction, mutational load, ploidy, genomic complexity, speciation, and the origin of life. Nevertheless, the current understanding of the genome-wide distribution of epistasis is mostly inferred from interactions among one mutant type per gene, whereas how epistatic interaction partners change dynamically for different mutant alleles of the same gene is largely unknown. Here we address this issue by combining predictions from flux balance analysis and data from a recently published high-throughput experiment. Our results show that different alleles can epistatically interact with very different gene sets. Furthermore, between two random mutant alleles of the same gene, the chance for the allele with the more severe mutational consequence to develop a higher percentage of negative epistasis than the other allele is 50-70% in eukaryotic organisms, but only 20-30% in bacteria and archaea. We developed a population genetics model that predicts that the observed distribution of the sign of epistasis can speed up the process of purging deleterious mutations in eukaryotic organisms. Our results indicate that epistasis among genes can be dynamically rewired at the genome level, and call on future efforts to revisit theories that can integrate epistatic dynamics among genes in biological systems.
1507.01903
Christoph Adami
Thomas LaBar, Christoph Adami and Arend Hintze
Does self-replication imply evolvability?
8 pages, 5 figures. To appear in "Advances in Artificial Life": Proceedings of the 13th European Conference on Artificial Life (ECAL 2015)
Proc. of the European Conference on Artificial Life, (P. Andrews, L. Caves, R. Doursat, S. Hickinbotham, F. Polack, S. Stepney, T. Taylor & J. Timmis, eds.) MIT Press (Cambridge, MA, 2015) pp. 596-602
10.7551/978-0-262-33027-5-ch103
null
q-bio.PE nlin.AO q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The most prominent property of life on Earth is its ability to evolve. It is often taken for granted that self-replication--the characteristic that makes life possible--implies evolvability, but many examples such as the lack of evolvability in computer viruses seem to challenge this view. Is evolvability itself a property that needs to evolve, or is it automatically present within any chemistry that supports sequences that can evolve in principle? Here, we study evolvability in the digital life system Avida, where self-replicating sequences written by hand are used to seed evolutionary experiments. We use 170 self-replicators that we found in a search through 3 billion randomly generated sequences (at three different sequence lengths) to study the evolvability of generic rather than hand-designed self-replicators. We find that most can evolve but some are evolutionarily sterile. From this limited data set we are led to conclude that evolvability is a likely, but not guaranteed, property of random replicators in a digital chemistry.
[ { "created": "Tue, 7 Jul 2015 18:01:24 GMT", "version": "v1" } ]
2015-11-18
[ [ "LaBar", "Thomas", "" ], [ "Adami", "Christoph", "" ], [ "Hintze", "Arend", "" ] ]
The most prominent property of life on Earth is its ability to evolve. It is often taken for granted that self-replication--the characteristic that makes life possible--implies evolvability, but many examples such as the lack of evolvability in computer viruses seem to challenge this view. Is evolvability itself a property that needs to evolve, or is it automatically present within any chemistry that supports sequences that can evolve in principle? Here, we study evolvability in the digital life system Avida, where self-replicating sequences written by hand are used to seed evolutionary experiments. We use 170 self-replicators that we found in a search through 3 billion randomly generated sequences (at three different sequence lengths) to study the evolvability of generic rather than hand-designed self-replicators. We find that most can evolve but some are evolutionarily sterile. From this limited data set we are led to conclude that evolvability is a likely, but not guaranteed, property of random replicators in a digital chemistry.
1709.04588
Patricio Maturana
Patricio Maturana Russel
Bayesian support for Evolution: detecting phylogenetic signal in a subset of the primate family
null
null
10.1007/978-3-319-91143-4_20
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The theory of evolution states that the diversity of species can be explained by descent with modification. Therefore, all living beings are related through a common ancestor. This evolutionary process must have left traces in our molecular composition. In this work, we present a randomization procedure in order to determine whether a group of 5 species of the primate family, namely, macaque, guereza, orangutan, chimpanzee and human, has retained these traces in its molecules. Firstly, we present the randomization methodology through two toy examples, which allow one to understand its logic. We then carry out a DNA data analysis to assess whether the group of primates contains phylogenetic information which links them in a joint evolutionary history. This is carried out by monitoring a Bayesian measure, called the marginal likelihood, which we estimate by using nested sampling. We found that it would be unusual to get the relationship observed in the data among these primate species if they had not shared a common ancestor. The results are in total agreement with the theory of evolution.
[ { "created": "Thu, 14 Sep 2017 02:02:25 GMT", "version": "v1" } ]
2018-08-09
[ [ "Russel", "Patricio Maturana", "" ] ]
The theory of evolution states that the diversity of species can be explained by descent with modification. Therefore, all living beings are related through a common ancestor. This evolutionary process must have left traces in our molecular composition. In this work, we present a randomization procedure in order to determine whether a group of 5 species of the primate family, namely, macaque, guereza, orangutan, chimpanzee and human, has retained these traces in its molecules. Firstly, we present the randomization methodology through two toy examples, which allow one to understand its logic. We then carry out a DNA data analysis to assess whether the group of primates contains phylogenetic information which links them in a joint evolutionary history. This is carried out by monitoring a Bayesian measure, called the marginal likelihood, which we estimate by using nested sampling. We found that it would be unusual to get the relationship observed in the data among these primate species if they had not shared a common ancestor. The results are in total agreement with the theory of evolution.
1803.08575
Alejandro Saettone
Alejandro Saettone, Jyoti Garg, Jean-Philippe Lambert, Syed Nabeel-Shah, Marcelo Ponce, Alyson Burtch, Cristina Thuppu Mudalige, Anne-Claude Gingras, Ronald E. Pearlman, Jeffrey Fillingham
The bromodomain-containing protein Ibd1 links multiple chromatin related protein complexes to highly expressed genes in Tetrahymena thermophila
Published on BMC Epigenetics & Chromatin
Epigenetics & Chromatin (2018) 11:10
10.1186/s13072-018-0180-6
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The chromatin remodelers of the SWI/SNF family are critical transcriptional regulators. Recognition of lysine acetylation through a bromodomain (BRD) component is key to SWI/SNF function; in most eukaryotes, this function is attributed to SNF2/Brg1. Results: Using affinity purification coupled to mass spectrometry (AP-MS) we identified members of a SWI/SNF complex (SWI/SNFTt) in Tetrahymena thermophila. SWI/SNFTt is composed of 11 proteins: Snf5Tt, Swi1Tt, Swi3Tt, Snf12Tt, Brg1Tt, two proteins with potential chromatin-interacting domains, and four proteins without orthologs to SWI/SNF proteins in yeast or mammals. SWI/SNFTt subunits localize exclusively to the transcriptionally active macronucleus (MAC) during growth and development, consistent with a role in transcription. While Tetrahymena Brg1 does not contain a BRD, our AP-MS results identified a BRD-containing SWI/SNFTt component, Ibd1, which associates with SWI/SNFTt during growth but not development. AP-MS analysis of epitope-tagged Ibd1 revealed it to be a subunit of several additional protein complexes, including putative SWRTt and SAGATt complexes as well as a putative H3K4-specific histone methyltransferase complex. Recombinant Ibd1 recognizes acetyl-lysine marks on histones correlated with active transcription. Consistent with our AP-MS and histone array data suggesting a role in regulation of gene expression, ChIP-Seq analysis of Ibd1 indicated that it primarily binds near promoters and within gene bodies of highly expressed genes during growth. Conclusions: Our results suggest that through recognizing specific histone marks, Ibd1 targets active chromatin regions of highly expressed genes in Tetrahymena, where it subsequently might coordinate the recruitment of several chromatin remodeling complexes to regulate the transcriptional landscape of vegetatively growing Tetrahymena cells.
[ { "created": "Thu, 22 Mar 2018 20:22:04 GMT", "version": "v1" } ]
2018-03-26
[ [ "Saettone", "Alejandro", "" ], [ "Garg", "Jyoti", "" ], [ "Lambert", "Jean-Philippe", "" ], [ "Nabeel-Shah", "Syed", "" ], [ "Ponce", "Marcelo", "" ], [ "Burtch", "Alyson", "" ], [ "Mudalige", "Cristina Thuppu"...
Background: The chromatin remodelers of the SWI/SNF family are critical transcriptional regulators. Recognition of lysine acetylation through a bromodomain (BRD) component is key to SWI/SNF function; in most eukaryotes, this function is attributed to SNF2/Brg1. Results: Using affinity purification coupled to mass spectrometry (AP-MS) we identified members of a SWI/SNF complex (SWI/SNFTt) in Tetrahymena thermophila. SWI/SNFTt is composed of 11 proteins: Snf5Tt, Swi1Tt, Swi3Tt, Snf12Tt, Brg1Tt, two proteins with potential chromatin-interacting domains, and four proteins without orthologs to SWI/SNF proteins in yeast or mammals. SWI/SNFTt subunits localize exclusively to the transcriptionally active macronucleus (MAC) during growth and development, consistent with a role in transcription. While Tetrahymena Brg1 does not contain a BRD, our AP-MS results identified a BRD-containing SWI/SNFTt component, Ibd1, which associates with SWI/SNFTt during growth but not development. AP-MS analysis of epitope-tagged Ibd1 revealed it to be a subunit of several additional protein complexes, including putative SWRTt and SAGATt complexes as well as a putative H3K4-specific histone methyltransferase complex. Recombinant Ibd1 recognizes acetyl-lysine marks on histones correlated with active transcription. Consistent with our AP-MS and histone array data suggesting a role in regulation of gene expression, ChIP-Seq analysis of Ibd1 indicated that it primarily binds near promoters and within gene bodies of highly expressed genes during growth. Conclusions: Our results suggest that through recognizing specific histone marks, Ibd1 targets active chromatin regions of highly expressed genes in Tetrahymena, where it subsequently might coordinate the recruitment of several chromatin remodeling complexes to regulate the transcriptional landscape of vegetatively growing Tetrahymena cells.
1705.10170
Rainer Kujala
Rainer Kujala, Enrico Glerean, Raj Kumar Pan, Iiro P. J\"a\"askel\"ainen, Mikko Sams, Jari Saram\"aki
Graph coarse-graining reveals differences in the module-level structure of functional brain networks
Manuscript + Supplementary materials
European Journal of Neuroscience, 2016, 44, 2673
10.1111/ejn.13392
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network analysis is rapidly becoming a standard tool for studying functional magnetic resonance imaging (fMRI) data. In this framework, different brain areas are mapped to the nodes of a network, whose links depict functional dependencies between the areas. The sizes of the areas that the nodes portray vary between studies. Recently, it has been recommended that the original volume elements, voxels, of the imaging experiment should be used as the network nodes to avoid artefacts and biases. However, this results in large numbers of nodes and links, and the sheer amount of detail may obscure important network features that are manifested on larger scales. One fruitful approach to detecting such features is to partition networks into modules, i.e. groups of nodes that are densely connected internally but have few connections between them. However, attempting to understand how functional networks differ by simply comparing their individual modular structures can be a daunting task, and results may be hard to interpret. We show that instead of comparing different partitions, it is beneficial to analyze differences in the connectivity between and within the very same modules in networks obtained under different conditions. We develop a network coarse-graining methodology that provides easily interpretable results and allows assessing the statistical significance of observed differences. The feasibility of the method is demonstrated by analyzing fMRI data recorded from 13 healthy subjects during rest and movie viewing. While independent partitioning of the networks corresponding to the two conditions yields few insights on their differences, network coarse-graining allows us to pinpoint, e.g., the increased number of intra-module links within the visual cortex during movie viewing.
[ { "created": "Mon, 29 May 2017 13:18:37 GMT", "version": "v1" } ]
2017-05-30
[ [ "Kujala", "Rainer", "" ], [ "Glerean", "Enrico", "" ], [ "Pan", "Raj Kumar", "" ], [ "Jääskeläinen", "Iiro P.", "" ], [ "Sams", "Mikko", "" ], [ "Saramäki", "Jari", "" ] ]
Network analysis is rapidly becoming a standard tool for studying functional magnetic resonance imaging (fMRI) data. In this framework, different brain areas are mapped to the nodes of a network, whose links depict functional dependencies between the areas. The sizes of the areas that the nodes portray vary between studies. Recently, it has been recommended that the original volume elements, voxels, of the imaging experiment should be used as the network nodes to avoid artefacts and biases. However, this results in large numbers of nodes and links, and the sheer amount of detail may obscure important network features that are manifested on larger scales. One fruitful approach to detecting such features is to partition networks into modules, i.e. groups of nodes that are densely connected internally but have few connections between them. However, attempting to understand how functional networks differ by simply comparing their individual modular structures can be a daunting task, and results may be hard to interpret. We show that instead of comparing different partitions, it is beneficial to analyze differences in the connectivity between and within the very same modules in networks obtained under different conditions. We develop a network coarse-graining methodology that provides easily interpretable results and allows assessing the statistical significance of observed differences. The feasibility of the method is demonstrated by analyzing fMRI data recorded from 13 healthy subjects during rest and movie viewing. While independent partitioning of the networks corresponding to the two conditions yields few insights on their differences, network coarse-graining allows us to pinpoint, e.g., the increased number of intra-module links within the visual cortex during movie viewing.
1902.05828
Pavel Loskot
Pavel Loskot and Komlan Atitey and Lyudmila Mihaylova
Comprehensive review of models and methods for inferences in bio-chemical reaction networks
300 references, 10 tables, 3 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Key processes in biological and chemical systems are described by networks of chemical reactions. From molecular biology to biotechnology applications, computational models of reaction networks are used extensively to elucidate their non-linear dynamics. Model dynamics are crucially dependent on parameter values, which are often estimated from observations. Over the past decade, interest in parameter and state estimation in models of (bio-)chemical reaction networks (BRNs) grew considerably. Statistical inference problems are also encountered in many other tasks, including model calibration, discrimination, identifiability and checking, as well as optimum experiment design, sensitivity analysis, bifurcation analysis and others. The aim of this review paper is to explore developments of the past decade to understand which BRN models are commonly used in the literature, and for what inference tasks and inference methods. An initial collection of about 700 publications (excluding books) in computational biology and chemistry was screened to select over 260 research papers and 20 graduate theses concerning estimation problems in BRNs. The paper selection was performed as text mining, using scripts to automate the search for relevant keywords and terms. The outcome is a set of tables revealing the level of interest in different inference tasks and methods for given models in the literature, as well as recent trends. In addition, a brief survey of general estimation strategies is provided to facilitate understanding of the estimation methods used for BRNs. Our findings indicate that many combinations of models, tasks and methods are still relatively sparse, representing new research opportunities to explore those that have not been considered - perhaps for a good reason. The paper concludes by discussing future research directions, including research problems which cannot be directly deduced from the presented tables.
[ { "created": "Fri, 15 Feb 2019 14:52:35 GMT", "version": "v1" } ]
2019-02-18
[ [ "Loskot", "Pavel", "" ], [ "Atitey", "Komlan", "" ], [ "Mihaylova", "Lyudmila", "" ] ]
Key processes in biological and chemical systems are described by networks of chemical reactions. From molecular biology to biotechnology applications, computational models of reaction networks are used extensively to elucidate their non-linear dynamics. Model dynamics are crucially dependent on parameter values, which are often estimated from observations. Over the past decade, interest in parameter and state estimation in models of (bio-)chemical reaction networks (BRNs) grew considerably. Statistical inference problems are also encountered in many other tasks, including model calibration, discrimination, identifiability and checking, as well as optimum experiment design, sensitivity analysis, bifurcation analysis and others. The aim of this review paper is to explore developments of the past decade to understand which BRN models are commonly used in the literature, and for what inference tasks and inference methods. An initial collection of about 700 publications (excluding books) in computational biology and chemistry was screened to select over 260 research papers and 20 graduate theses concerning estimation problems in BRNs. The paper selection was performed as text mining, using scripts to automate the search for relevant keywords and terms. The outcome is a set of tables revealing the level of interest in different inference tasks and methods for given models in the literature, as well as recent trends. In addition, a brief survey of general estimation strategies is provided to facilitate understanding of the estimation methods used for BRNs. Our findings indicate that many combinations of models, tasks and methods are still relatively sparse, representing new research opportunities to explore those that have not been considered - perhaps for a good reason. The paper concludes by discussing future research directions, including research problems which cannot be directly deduced from the presented tables.
q-bio/0406038
Adriano Sousa A. O. Sousa
A.O. Sousa
Sympatric speciation in an age-structured population living on a lattice
5 pages including 3 encapsulated postscript (*.eps) figures; To appear in European Physical Journal B
null
10.1140/epjb/e2004-00225-7
null
q-bio.PE cond-mat.stat-mech
null
A square lattice is introduced into the Penna model for biological aging in order to study the evolution of diploid sexual populations under certain conditions when a single locus in the individual's genome is considered as the identifier of species. The simulation results show, after several generations, the flourishing and coexistence of two separate species in the same environment, i.e., one original species splits up into two on the same territory (sympatric speciation). The mortalities obtained are also in good agreement with the Gompertz law of exponential increase of mortality with age.
[ { "created": "Wed, 16 Jun 2004 20:14:26 GMT", "version": "v1" } ]
2009-11-10
[ [ "Sousa", "A. O.", "" ] ]
A square lattice is introduced into the Penna model for biological aging in order to study the evolution of diploid sexual populations under certain conditions when a single locus in the individual's genome is considered as the identifier of species. The simulation results show, after several generations, the flourishing and coexistence of two separate species in the same environment, i.e., one original species splits up into two on the same territory (sympatric speciation). The mortalities obtained are also in good agreement with the Gompertz law of exponential increase of mortality with age.
1308.4122
Lianchun Yu
Lianchun Yu and Liwei Liu
The Optimal Size of Stochastic Hodgkin-Huxley Neuronal Systems for Maximal Energy Efficiency in Coding of Pulse Signals
22 pages, 10 figures
Phys. Rev. E 89, 032725 (2014)
10.1103/PhysRevE.89.032725
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generation and conduction of action potentials represents a fundamental means of communication in the nervous system, and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in the process of transferring pulse signals with action potentials. By computer simulation of a stochastic version of the Hodgkin-Huxley model with a detailed description of ion channel random gating, and by analytically solving a bistable neuron model that mimics action potential generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes energy efficiency for a neuron. We also investigate the energy efficiency of a neuron population in which input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal combination of the number of neurons in the population and the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency depends on the characteristics of the input signals, e.g., the pulse strength and the inter-pulse intervals. We argue that a trade-off between reliability of signal transmission and energy cost may influence the size of neural systems if energy use is constrained.
[ { "created": "Sat, 17 Aug 2013 05:13:14 GMT", "version": "v1" } ]
2014-04-23
[ [ "Yu", "Lianchun", "" ], [ "Liu", "Liwei", "" ] ]
The generation and conduction of action potentials represents a fundamental means of communication in the nervous system, and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in the process of transferring pulse signals with action potentials. By computer simulation of a stochastic version of the Hodgkin-Huxley model with a detailed description of ion channel random gating, and by analytically solving a bistable neuron model that mimics action potential generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes energy efficiency for a neuron. We also investigate the energy efficiency of a neuron population in which input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal combination of the number of neurons in the population and the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency depends on the characteristics of the input signals, e.g., the pulse strength and the inter-pulse intervals. We argue that a trade-off between reliability of signal transmission and energy cost may influence the size of neural systems if energy use is constrained.
0912.2171
Ulrich S. Schwarz
Christian B. Korn, Stefan Klumpp, Reinhard Lipowsky, Ulrich S. Schwarz
Stochastic simulations of cargo transport by processive molecular motors
40 pages, Revtex with 13 figures, to appear in Journal of Chemical Physics
J. Chem. Phys. 131:245107, 2009
10.1063/1.3279305
null
q-bio.SC cond-mat.soft q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use stochastic computer simulations to study the transport of a spherical cargo particle along a microtubule-like track on a planar substrate by several kinesin-like processive motors. Our newly developed adhesive motor dynamics algorithm combines the numerical integration of a Langevin equation for the motion of a sphere with kinetic rules for the molecular motors. The Langevin part includes diffusive motion, the action of the pulling motors, and hydrodynamic interactions between sphere and wall. The kinetic rules for the motors include binding to and unbinding from the filament as well as active motor steps. We find that the simulated mean transport length increases exponentially with the number of bound motors, in good agreement with earlier results. The number of motors in binding range to the motor track fluctuates in time with a Poissonian distribution, both for springs and cables being used as models for the linker mechanics. Cooperativity in the sense of equal load sharing only occurs for high values of viscosity and attachment time.
[ { "created": "Fri, 11 Dec 2009 08:04:01 GMT", "version": "v1" } ]
2010-02-24
[ [ "Korn", "Christian B.", "" ], [ "Klumpp", "Stefan", "" ], [ "Lipowsky", "Reinhard", "" ], [ "Schwarz", "Ulrich S.", "" ] ]
We use stochastic computer simulations to study the transport of a spherical cargo particle along a microtubule-like track on a planar substrate by several kinesin-like processive motors. Our newly developed adhesive motor dynamics algorithm combines the numerical integration of a Langevin equation for the motion of a sphere with kinetic rules for the molecular motors. The Langevin part includes diffusive motion, the action of the pulling motors, and hydrodynamic interactions between sphere and wall. The kinetic rules for the motors include binding to and unbinding from the filament as well as active motor steps. We find that the simulated mean transport length increases exponentially with the number of bound motors, in good agreement with earlier results. The number of motors in binding range to the motor track fluctuates in time with a Poissonian distribution, both for springs and cables being used as models for the linker mechanics. Cooperativity in the sense of equal load sharing only occurs for high values of viscosity and attachment time.
2210.12064
Shengjie Zheng
Shengjie Zheng, Ling Liu, Junjie Yang, Jianwei Zhang, Tao Su, Bin Yue, Xiaojian Li
Embedded Silicon-Organic Integrated Neuromorphic System
This article needs to be updated with the corrected figure and data
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by/4.0/
The development of artificial intelligence (AI) and robotics is based on the tenet that "science and technology are people-oriented", and both fields need to achieve efficient communication with the human brain. Based on multi-disciplinary research in systems neuroscience, computer architecture, and functional organic materials, we proposed the concept of using AI to simulate the operating principles and materials of the brain in hardware to develop brain-inspired intelligence technology, and realized the preparation of neuromorphic computing devices and basic materials. We simulated neurons and neural networks in terms of material and morphology, using a variety of organic polymers as the base materials for neuroelectronic devices, for building neural interfaces as well as organic neural devices and silicon neural computational modules. We assemble organic artificial synapses with simulated neurons from a silicon-based Field-Programmable Gate Array (FPGA) into organic artificial neurons, the basic components of neural networks, and later construct biological neural network models based on the interpreted neural circuits. Finally, we also discuss how to further build neuromorphic devices based on these organic artificial neurons, which offer both a neural interface friendly to nervous tissue and the ability to interact with information from real biological neural networks.
[ { "created": "Tue, 18 Oct 2022 01:56:48 GMT", "version": "v1" }, { "created": "Tue, 25 Jun 2024 19:35:21 GMT", "version": "v2" } ]
2024-06-27
[ [ "Zheng", "Shengjie", "" ], [ "Liu", "Ling", "" ], [ "Yang", "Junjie", "" ], [ "Zhang", "Jianwei", "" ], [ "Su", "Tao", "" ], [ "Yue", "Bin", "" ], [ "Li", "Xiaojian", "" ] ]
The development of artificial intelligence (AI) and robotics is based on the tenet that "science and technology are people-oriented", and both fields need to achieve efficient communication with the human brain. Based on multi-disciplinary research in systems neuroscience, computer architecture, and functional organic materials, we proposed the concept of using AI to simulate the operating principles and materials of the brain in hardware to develop brain-inspired intelligence technology, and realized the preparation of neuromorphic computing devices and basic materials. We simulated neurons and neural networks in terms of material and morphology, using a variety of organic polymers as the base materials for neuroelectronic devices, for building neural interfaces as well as organic neural devices and silicon neural computational modules. We assemble organic artificial synapses with simulated neurons from a silicon-based Field-Programmable Gate Array (FPGA) into organic artificial neurons, the basic components of neural networks, and later construct biological neural network models based on the interpreted neural circuits. Finally, we also discuss how to further build neuromorphic devices based on these organic artificial neurons, which offer both a neural interface friendly to nervous tissue and the ability to interact with information from real biological neural networks.
1303.5044
Alan Bergland
Alan O. Bergland, Emily L. Behrman, Katherine R. O'Brien, Paul S. Schmidt and Dmitri A. Petrov
Genomic evidence of rapid and stable adaptive oscillations over seasonal time scales in Drosophila
44 pages, 7 main figures, 7 supplemental figures, 3 tables
null
10.1371/journal.pgen.1004775
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many species, genomic data have revealed pervasive adaptive evolution indicated by the fixation of beneficial alleles. However, when selection pressures are highly variable along a species' range or through time, adaptive alleles may persist at intermediate frequencies for long periods. So-called balanced polymorphisms have long been understood to be an important component of standing genetic variation, yet direct evidence of the strength of balancing selection and the stability and prevalence of balanced polymorphisms has remained elusive. We hypothesized that environmental fluctuations between seasons in a North American orchard would impose temporally variable selection on Drosophila melanogaster and consequently maintain allelic variation at polymorphisms adaptively evolving in response to climatic variation. We identified hundreds of polymorphisms whose frequency oscillates among seasons and argue that these loci are subject to strong, temporally variable selection. We show that these polymorphisms respond to acute and persistent changes in climate and are associated in predictable ways with seasonally variable phenotypes. In addition, we show that adaptively oscillating polymorphisms are likely millions of years old, with some likely predating the divergence between D. melanogaster and D. simulans. Taken together, our results demonstrate that rapid temporal fluctuations in climate over generational time promote adaptive genetic diversity at loci affecting polygenic phenotypes.
[ { "created": "Wed, 20 Mar 2013 19:42:07 GMT", "version": "v1" }, { "created": "Mon, 6 Jan 2014 18:02:02 GMT", "version": "v2" }, { "created": "Mon, 24 Feb 2014 17:53:16 GMT", "version": "v3" } ]
2014-11-10
[ [ "Bergland", "Alan O.", "" ], [ "Behrman", "Emily L.", "" ], [ "O'Brien", "Katherine R.", "" ], [ "Schmidt", "Paul S.", "" ], [ "Petrov", "Dmitri A.", "" ] ]
In many species, genomic data have revealed pervasive adaptive evolution indicated by the fixation of beneficial alleles. However, when selection pressures are highly variable along a species' range or through time, adaptive alleles may persist at intermediate frequencies for long periods. So-called balanced polymorphisms have long been understood to be an important component of standing genetic variation, yet direct evidence of the strength of balancing selection and the stability and prevalence of balanced polymorphisms has remained elusive. We hypothesized that environmental fluctuations between seasons in a North American orchard would impose temporally variable selection on Drosophila melanogaster and consequently maintain allelic variation at polymorphisms adaptively evolving in response to climatic variation. We identified hundreds of polymorphisms whose frequency oscillates among seasons and argue that these loci are subject to strong, temporally variable selection. We show that these polymorphisms respond to acute and persistent changes in climate and are associated in predictable ways with seasonally variable phenotypes. In addition, we show that adaptively oscillating polymorphisms are likely millions of years old, with some likely predating the divergence between D. melanogaster and D. simulans. Taken together, our results demonstrate that rapid temporal fluctuations in climate over generational time promote adaptive genetic diversity at loci affecting polygenic phenotypes.
q-bio/0505021
Hugues Berry
Hugues Berry (INRIA Futurs), Olivier Temam (INRIA Futurs)
Characterizing Self-Developing Biological Neural Networks: A First Step Towards their Application To Computing Systems
null
null
null
null
q-bio.NC cs.AR cs.NE nlin.AO
null
Carbon nanotubes are often seen as the only alternative technology to silicon transistors. While they are the most likely short-term one, other longer-term alternatives should be studied as well. While contemplating biological neurons as an alternative component may seem preposterous at first sight, significant recent progress in CMOS-neuron interfaces suggests this direction may not be unrealistic; moreover, biological neurons are known to self-assemble into very large networks capable of complex information processing tasks, something that has yet to be achieved with other emerging technologies. The first step to designing computing systems on top of biological neurons is to build an abstract model of self-assembled biological neural networks, much like computer architects manipulate abstract models of transistors and circuits. In this article, we propose a first model of the structure of biological neural networks. We provide empirical evidence that this model matches the biological neural networks found in living organisms, and exhibits the small-world graph structure properties commonly found in many large and self-organized systems, including biological neural networks. More importantly, we extract the simple local rules and characteristics governing the growth of such networks, enabling the development of potentially large but realistic biological neural networks, as would be needed for complex information processing/computing tasks. Based on this model, future work will be targeted at understanding the evolution and learning properties of such networks, and how they can be used to build computing systems.
[ { "created": "Tue, 10 May 2005 19:51:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Berry", "Hugues", "", "INRIA Futurs" ], [ "Temam", "Olivier", "", "INRIA Futurs" ] ]
Carbon nanotubes are often seen as the only alternative technology to silicon transistors. While they are the most likely short-term one, other longer-term alternatives should be studied as well. While contemplating biological neurons as an alternative component may seem preposterous at first sight, significant recent progress in CMOS-neuron interfaces suggests this direction may not be unrealistic; moreover, biological neurons are known to self-assemble into very large networks capable of complex information processing tasks, something that has yet to be achieved with other emerging technologies. The first step to designing computing systems on top of biological neurons is to build an abstract model of self-assembled biological neural networks, much like computer architects manipulate abstract models of transistors and circuits. In this article, we propose a first model of the structure of biological neural networks. We provide empirical evidence that this model matches the biological neural networks found in living organisms, and exhibits the small-world graph structure properties commonly found in many large and self-organized systems, including biological neural networks. More importantly, we extract the simple local rules and characteristics governing the growth of such networks, enabling the development of potentially large but realistic biological neural networks, as would be needed for complex information processing/computing tasks. Based on this model, future work will be targeted at understanding the evolution and learning properties of such networks, and how they can be used to build computing systems.