NeuroD, also called Beta2, is a basic helix-loop-helix transcription factor expressed in certain parts of the brain, in pancreatic beta cells, and in enteroendocrine cells. It is involved in the differentiation of the nervous system and the development of the pancreas. [ 1 ] It heterodimerizes with the products of the E2A gene and controls the transcription of a variety of genes by identifying and binding E boxes in their promoter regions. In rodents NeuroD is involved in the development of the retina. [ 2 ] In mammals there are two types of this factor.
https://en.wikipedia.org/wiki/NeuroD
Neuro biomechanics is a field dedicated to the general study of human movement from various basic perspectives: musculoskeletal functional anatomy, CNS and neuromuscular physiology, physics, control theory with cybernetics, and computer science. [ 1 ] It is based upon the research of bioengineering researchers, neurosurgeons, orthopedic surgeons, and biomechanists. Neuro biomechanics is utilized by neurosurgeons, orthopedic surgeons, and primarily by integrated physical medicine practitioners. Practitioners are focused on aiding people in the restoration of the biomechanics of the skeletal system in order to measurably improve nervous system function, health, and quality of life, reduce pain, and slow the progression of degenerative joint and disc disease. Neuro: of or having to do with the nervous system. Nervous system: an organ system that coordinates the activities of muscles, monitors organs, constructs and processes data received from the senses, and initiates actions. The human nervous system coordinates the functions of itself and all organ systems, including but not limited to the cardiovascular system, respiratory system, skin, digestive system, immune system, hormonal, metabolic, and musculoskeletal systems, endocrine system, blood, and reproductive system. Optimal function of the organism as a whole depends upon the proper function of the nervous system. Biomechanics: (biology, physics) the branch of biophysics that deals with the mechanics of the human or animal body, especially concerned with muscles and the skeleton; the study of biomechanical influences upon nervous system function and load-bearing joints.
Research:
Panjabi MM. Journal of Biomechanics 1974: A note on defining body parts configurations.
Gracovetsky S. Spine 1986: The Optimum Spine.
Yoganandan. Spine 1996.
Harrison. Spine 2004: Modeling of the Sagittal Cervical Spine as a Method to Discriminate Hypolordosis: Results of Elliptical and Circular Modeling in 72 Asymptomatic Subjects, 52 Acute Neck Pain Subjects, and 70 Chronic Neck Pain Subjects.
Panjabi et al. Spine 1997: Whiplash produces an S-shaped curve.
Harrison DE. JMPT 2003: Increasing the Cervical Lordosis with CBP Seated Combined Extension-Compression and Transverse Load Cervical Traction with Cervical Manipulation: Non-randomized Clinical Control Trial.
Harrison. Journal of Spinal Disorders 1998: Elliptical Modeling of the Sagittal Lumbar Lordosis and Segmental Rotation Angles as a Method to Discriminate Between Normal and Low Back Pain Subjects.
Gelb DE. Spine 1995: An Analysis of Sagittal Spinal Alignment in 100 Asymptomatic Middle and Older Aged Volunteers.
Janik TJ. Journal Orthop Res 1998: Can the sagittal lumbar curvature be closely approximated by an ellipse?
Harrison. Spine 2001: Methods for Cervical Mensuration analyzed.
Voutsinas SA. Clinical Orthopaedics 1986.
Bernhardt M. Spine 1989: Segmental Analysis of the Sagittal Plane Alignment of the Normal Thoracic and Lumbar Spines and Thoracolumbar Junction.
Harrison DE. Archives of Physical Medicine and Rehabilitation 2002: A new 3-point bending traction method for restoring cervical lordosis and cervical manipulation: a nonrandomized clinical controlled trial.
Helliwell PS. Journal of Bone and Joint Surgery 1994: The straight cervical spine: does it indicate muscle spasm?
Banks R. Journal of Crash Prevention and Injury Control 2000: Alignment of the lumbar vertebrae in a driving posture.
Troyanovich SJ. JMPT 1998: Structural rehabilitation of the spine and posture.
Sheng-Yun L. PLoS One 2014: Comparison of Modic Changes in the Lumbar and Cervical Spine, in 3167 Patients with and without Spinal Pain.
Harrison DE. Journal of Spinal Disorders & Techniques 2002: Can the Thoracic Kyphosis Be Modeled With a Simple Geometric Shape? The Results of Circular and Elliptical Modeling in 80 Asymptomatic Patients.
Treatment: Non-Surgical; Surgical.
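Several of the citations above (e.g. the Harrison and Janik papers) model the sagittal spinal curve as a circular or elliptical arc fitted to landmarks digitized from lateral radiographs. The snippet below is only a rough sketch of the circular-modeling idea, not the published CBP or Harrison procedures: the landmark coordinates are hypothetical, and a simple algebraic (Kasa-style) least-squares circle fit stands in for whatever fitting method those papers actually use.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method).

    points: (N, 2) array of digitized landmark coordinates, e.g. points
    traced along the posterior vertebral bodies on a lateral radiograph.
    Returns (center_x, center_y, radius).
    """
    x, y = points[:, 0], points[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

# Hypothetical landmark coordinates (mm) along a lordotic cervical curve.
landmarks = np.array([
    [0.0, 0.0], [4.0, 14.0], [10.0, 27.0], [19.0, 38.0], [30.0, 46.0],
])
cx, cy, r = fit_circle(landmarks)
print(f"fitted centre = ({cx:.1f}, {cy:.1f}) mm, radius = {r:.1f} mm")
```

In the cited papers, parameters of the fitted arc (radius, or minor-to-major axis ratio in the elliptical variants) are, per their titles, compared between asymptomatic and pain populations.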
https://en.wikipedia.org/wiki/Neuro_biomechanics
Neuroangiogenesis is the coordinated growth of nerves and blood vessels. [ 1 ] The nervous and blood vessel systems share guidance cues and cell-surface receptors, allowing for this synchronised growth. The term neuroangiogenesis only came into use in 2002 [ 2 ] and the process was previously known as neurovascular patterning. The combination of neurogenesis and angiogenesis is an essential part of embryonic development and early life. [ 3 ] It is thought to have a role in pathologies such as endometriosis, [ 4 ] brain tumors, [ 5 ] and Alzheimer's disease. [ 6 ] Neurovascular development is the parallel emergence and patterning of the nervous system and the vascular system during embryogenesis and early life. [ 3 ] [ 5 ] Neurovascular congruency appears to be determined by a shared molecular patterning mechanism involving axonal guidance molecules such as sema3A (semaphorin 3A) and neuropilin. [ 7 ] Neuroangiogenic and axonal guidance molecules act on both neuronal growth cones and endothelial tip cells in order to guide growth. [ 5 ] Neuronal growth cones are situated on the tips of nerve cells and are responsive to different factors, both positive and negative. Growth of the neuron occurs by extension of the actin and microtubule cytoskeleton. [ 8 ] Tip cells, found at the extremity of the developing blood vessel, control adjacent endothelial cells to direct growth. Tip cells have receptors and ligands via which they respond to local neuroangiogenic factors. [ 8 ] There are many neuroangiogenic factors, some of which act to promote neuronal growth and vice versa. [ 5 ] Neuroangiogenesis is implicated in a number of pathologies, including endometriosis, [ 4 ] brain tumors, [ 5 ] and senile dementias such as Alzheimer's disease. [ 6 ] Each of these incurs a significant cost for the healthcare industry, meaning that a complete understanding of the processes involved – including neuroangiogenesis – is necessary to enable the development of effective treatments. [ 5 ] [ 9 ] Endometriosis is a common gynaecological disease caused by endometrial tissue implanting outside the uterus, a symptom of which is chronic pelvic pain. The formation, growth and persistence of these implants are dependent upon angiogenesis to increase the supply of blood vessels. The resulting increase in blood flow may correlate directly with pain symptoms. [ citation needed ] One possible explanation for this is the simultaneous growth of neurons into these areas alongside blood vessels through neuroangiogenesis. [ 4 ] Brain tumors, such as glioblastoma multiforme, are characterized by dense vascularity associated with high expression of the proangiogenic factors VEGF and interleukin 8. [ 5 ] Following ischemic stroke or traumatic brain injury, angiogenesis supports oxygen and nutrient re-supply to injured tissue, and stimulates neurogenesis and synaptogenesis, particularly in the ischemic penumbra. [ 5 ] Neuroangiogenesis is finely regulated and sequential, involving proliferation and migration of endothelial cells to restore blood–brain barrier function, recruitment of pericytes, and stabilization of new blood vessels, a process dependent on upregulation of proangiogenic factors such as VEGF and angiopoietin-1. [ 5 ] A condition possibly resulting from a reduction in neuroangiogenic factors is Alzheimer's disease.
Without continued neuroangiogenesis during aging, areas of the brain may no longer have the full complement of functional capillaries and hence, by inference, cerebral blood flow and cognitive ability decline. [ 5 ] [ 6 ] This condition of reduced neuroangiogenesis and lower capillary density during senescence , possibly involving impaired regulation of angiogenic factors by hypoxia , could be a vascular basis for Alzheimer's disease. [ 5 ] [ 6 ] [ 10 ]
https://en.wikipedia.org/wiki/Neuroangiogenesis
In vertebrates, a neuroblast or primitive nerve cell [ 1 ] is a postmitotic cell that does not divide further, [ 2 ] and which will develop into a neuron after a migration phase. [ 3 ] In invertebrates such as Drosophila, neuroblasts are neural progenitor cells which divide asymmetrically to produce a neuroblast and a daughter cell of varying potency, depending on the type of neuroblast. Vertebrate neuroblasts differentiate from radial glial cells and are committed to becoming neurons. [ 4 ] Neural stem cells, which only divide symmetrically to produce more neural stem cells, transition gradually into radial glial cells. [ 5 ] Radial glial cells, also called radial glial progenitor cells, divide asymmetrically to produce a neuroblast and another radial glial cell that will re-enter the cell cycle. [ 5 ] [ 3 ] This mitosis occurs in the germinal neuroepithelium (or germinal zone), when a radial glial cell divides to produce the neuroblast. The neuroblast detaches from the epithelium and migrates, while the radial glial progenitor cell produced stays in the luminal epithelium. The migrating cell will not divide further, and this point is called the neuron's birthday. Cells with the earliest birthdays will only migrate a short distance. Cells with later birthdays will migrate further, to the outer regions of the cerebral cortex. The positions that the migrated cells occupy will determine their neuronal differentiation. [ 6 ] Neuroblasts are formed by the asymmetric division of radial glial cells. They start to migrate as soon as they are born. Neurogenesis can only take place when neural stem cells have transitioned into radial glial cells. [ 5 ] Neuroblasts are mainly present as precursors of neurons during embryonic development; however, they also constitute one of the cell types involved in adult neurogenesis. Adult neurogenesis is characterized by neural stem cell differentiation and integration in the mature adult mammalian brain. This process occurs in the dentate gyrus of the hippocampus and in the subventricular zones of the adult mammalian brain. Neuroblasts are formed when a neural stem cell, which can differentiate into any type of mature neural cell (i.e. neurons, oligodendrocytes, astrocytes, etc.), divides and becomes a transit amplifying cell. Transit amplifying cells are slightly more differentiated than neural stem cells and can divide asymmetrically to produce postmitotic neuroblasts and glioblasts, as well as other transit amplifying cells. A neuroblast, a daughter cell of a transit amplifying cell, is initially a neural stem cell that has reached the "point of no return." A neuroblast has differentiated such that it will mature into a neuron and not any other neural cell type. [ 7 ] Neuroblasts are being studied extensively as they have the potential to be used therapeutically to combat cell loss due to injury or disease in the brain, although their potential effectiveness is debated. In the embryo, neuroblasts form the middle, mantle layer of the neural tube wall, which goes on to form the grey matter of the spinal cord. The layer outside the mantle layer is the marginal layer, which contains the myelinated axons from the neuroblasts, forming the white matter of the spinal cord. [ 1 ] The inner layer is the ependymal layer, which will form the lining of the ventricles and central canal of the spinal cord. [ 8 ] In humans, neuroblasts produced by stem cells in the adult subventricular zone migrate into damaged areas after brain injuries.
However, they are restricted to the subtype of small interneuron-like cells, and it is unlikely that they contribute to functional recovery of striatal circuits. [ 9 ] There are several disorders known as neuronal migration disorders that can cause serious problems. These arise from a disruption in the pattern of migration of the neuroblasts on their way to their target destinations. The disorders include lissencephaly, microlissencephaly, pachygyria, and several types of gray matter heterotopia. In the fruit fly model organism Drosophila melanogaster, a neuroblast is a neural progenitor cell which divides asymmetrically to produce a neuroblast and either a neuron, a ganglion mother cell (GMC), or an intermediate neural progenitor, depending on the type of neuroblast. [ 10 ] [ 11 ] During embryogenesis, embryonic neuroblasts delaminate from either the procephalic neuroectoderm (for brain neuroblasts) or the ventral nerve cord neuroectoderm (for abdominal neuroblasts). During larval development, optic lobe neuroblasts are generated from a neuroectoderm called the Outer Proliferation Center. [ 12 ] There are more than 800 optic lobe neuroblasts, 105 central brain neuroblasts, and 30 abdominal neuroblasts per hemisegment (a bilateral half of a segment). [ 11 ] Neuroblasts undergo three known division types. Type 0 neuroblasts divide to give rise to a neuroblast and a daughter cell which directly differentiates into a single neuron or glial cell. Type I neuroblasts give rise to a neuroblast and a ganglion mother cell (GMC), which undergoes a terminal division to generate a pair of sibling neurons. This is the most common form of cell division, and is observed in abdominal, optic lobe, and central brain neuroblasts. Type II neuroblasts give rise to a neuroblast and a transit-amplifying intermediate neural progenitor (INP). INPs divide in a manner similar to type I neuroblasts, producing an INP and a ganglion mother cell. While only 8 type II neuroblasts exist in the central brain, their lineages are both much larger and more complex than those of type I neuroblasts. [ 11 ] The switch from pluripotent neuroblast to differentiated cell fate is facilitated by the proteins Prospero, Numb, and Miranda. Prospero is a transcription factor that triggers differentiation. It is expressed in neuroblasts, but is kept out of the nucleus by Miranda, which tethers it to the basal cell cortex. This also results in asymmetric division, where Prospero localizes in only one of the two daughter cells. After division, Prospero enters the nucleus, and the cell it is present in becomes the GMC. Neuroblasts are capable of giving rise to the vast neural diversity present in the fly brain using a combination of spatial and temporal restriction of gene expression that gives progeny born from each neuroblast a unique identity depending on both their parent neuroblast and their birth date. [ 13 ] This is partly based on the position of the neuroblast along the anterior/posterior and dorsal/ventral axes, and partly on a temporal sequence of transcription factors that are expressed in a specific order as neuroblasts undergo sequential divisions. [ 14 ]
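As a rough back-of-the-envelope illustration of why type II lineages are so much larger, the three division modes described above can be sketched in a few lines of code. The parameters nb_divisions and inp_divisions are hypothetical values chosen only for illustration; the article does not state how many rounds a given neuroblast or INP actually completes.

```python
def neurons_per_lineage(div_type, nb_divisions, inp_divisions=5):
    """Rough neuron count for the three Drosophila neuroblast division modes."""
    if div_type == 0:
        # Each division: neuroblast + one directly differentiating neuron.
        return nb_divisions
    if div_type == 1:
        # Each division: neuroblast + one GMC; each GMC yields two sibling neurons.
        return nb_divisions * 2
    if div_type == 2:
        # Each division: neuroblast + one INP; each INP then divides like a
        # type I neuroblast for a limited (assumed) number of rounds.
        return nb_divisions * inp_divisions * 2
    raise ValueError("div_type must be 0, 1 or 2")

for t in (0, 1, 2):
    print(f"type {t}: {neurons_per_lineage(t, nb_divisions=20)} neurons")
```

With 20 neuroblast divisions and 5 divisions per INP, the sketch yields 20, 40, and 200 neurons for types 0, I, and II respectively, consistent with the qualitative claim above that type II lineages are much larger.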
https://en.wikipedia.org/wiki/Neuroblast
A neurochemical is a small organic molecule or peptide that participates in neural activity. The science of neurochemistry studies the functions of neurochemicals.
https://en.wikipedia.org/wiki/Neurochemical
Neurochemical Research is a monthly peer-reviewed scientific journal covering neurochemistry. It was established in 1976 and is published by Springer Science+Business Media. As of 2024, the editor-in-chief is Henry Sershen (Nathan S. Kline Institute for Psychiatric Research). [ 1 ] The past editors-in-chief are Abel Lajtha (1976–2011) and Arne Schousboe of the University of Copenhagen (2011–24). [ 2 ] [ 3 ] The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2012 impact factor of 2.125. [ 4 ]
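For readers unfamiliar with the figure quoted above, the two-year impact factor reported in the Journal Citation Reports is a simple ratio; the year windows shown here are the standard JCR definition, not something specific to this journal.

```latex
\mathrm{IF}_{2012}
  = \frac{\text{citations received in 2012 to items the journal published in 2010--2011}}
         {\text{number of citable items the journal published in 2010--2011}}
  = 2.125
```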
https://en.wikipedia.org/wiki/Neurochemical_Research
Neurochemistry is the study of chemicals, including neurotransmitters and other molecules such as psychopharmaceuticals and neuropeptides, that control and influence the physiology of the nervous system. This particular field within neuroscience examines how neurochemicals influence the operation of neurons, synapses, and neural networks. Neurochemists analyze the biochemistry and molecular biology of organic compounds in the nervous system, and their roles in neural processes including cortical plasticity, neurogenesis, and neural differentiation. While neurochemistry as a recognized science is relatively new, the idea behind neurochemistry has been around since the 18th century. Originally, the brain had been thought to be a separate entity apart from the peripheral nervous system. Beginning in 1856, a string of research refuted that idea: the chemical makeup of the brain was shown to be nearly identical to that of the peripheral nervous system. [ 1 ] The first large leap forward in the study of neurochemistry came from Johann Ludwig Wilhelm Thudichum, one of the pioneers in the field of "brain chemistry." He was one of the first to hypothesize that many neurological illnesses could be attributed to an imbalance of chemicals in the brain. He was also one of the first scientists to believe that, through chemical means, the vast majority of neurological diseases could be treated, if not cured. [ 2 ] Irvine Page (1901–1991) was an American physiologist who published the first major textbook focusing on neurochemistry in 1937. He had also established, in 1928, the first department solely devoted to the study of neurochemistry, at the Kaiser Wilhelm Institute for Psychiatry in Munich. [ 3 ] In the 1930s, neurochemistry was mostly referred to as "brain chemistry" and was mostly devoted to finding different chemical species without directly proposing their specific roles and functions in the nervous system. The first biochemical pathology test for any brain disease can be attributed to Vito Maria Buscaino (1887–1978), a neuropsychiatrist who studied schizophrenia. He found that treating the urine of patients who had schizophrenia, extrapyramidal disorders, or amentia with 5% silver nitrate produced a black precipitate linked with an abnormal level of amines. This became known as the "Buscaino Reaction." [ 3 ] In the 1950s, neurochemistry became a recognized scientific research discipline. [ 4 ] The founding of neurochemistry as a discipline traces its origins to a series of "International Neurochemical Symposia", of which the first symposium volume, published in 1954, was titled Biochemistry of the Developing Nervous System. [ 5 ] These meetings led to the formation of the International Society for Neurochemistry and the American Society for Neurochemistry. These early gatherings discussed the tentative nature of possible neurotransmitter substances such as acetylcholine, histamine, substance P, and serotonin. By 1972, ideas were more concrete. One of the first major successes in using chemicals to alter brain function was the L-DOPA experiment. In 1961, Walther Birkmayer injected L-DOPA into a patient with Parkinson's disease. Shortly after injection, the patient showed a drastic reduction in tremor and was able to control their muscles in ways they hadn't been able to in a long time. The effect peaked within 2.5 hours and lasted approximately 24 hours. [ 1 ]
The most important aspect of neurochemistry is the set of neurotransmitters and neuropeptides that comprise the chemical activity in the nervous system. There are many neurochemicals that are integral for proper neural functioning. The neuropeptide oxytocin, synthesized in magnocellular neurosecretory cells, plays an important role in maternal behavior and sexual reproduction, particularly before and after birth. It is synthesized as a precursor protein that is processed proteolytically to the shorter, active neuropeptide. It is involved in the letdown reflex when mothers breastfeed, in uterine contractions, and in the hypothalamic-pituitary-adrenal axis, where oxytocin inhibits the release of cortisol and adrenocorticotropic hormone. [ 6 ] [ 7 ] [ 8 ] [ 9 ] Glutamate, which is the most abundant neurotransmitter, is an excitatory neurochemical, meaning that its release into the synaptic cleft increases the likelihood that the postsynaptic neuron will fire an action potential. GABA, or gamma-aminobutyric acid, is an inhibitory neurotransmitter. It binds to receptors in the postsynaptic membranes of neurons, triggering an influx of negatively charged chloride ions and an efflux of positively charged potassium ions. This movement of ions hyperpolarizes the transmembrane potential of the neuron. [ 10 ] [ 11 ] Dopamine is a neurotransmitter of particular importance in the limbic system, which regulates emotional function. Dopamine has many roles in the brain, including in cognition, sleep, mood, milk production, movement, motivation, and reward. [ 12 ] Serotonin is a neurotransmitter that regulates mood, sleep, and other functions of the brain. It is a peripheral signal mediator and is found in the gastrointestinal tract as well as in blood. Research also suggests that serotonin may play an important role in liver regeneration. [ 13 ] Neurochemistry is the study of the different types, structures, and functions of neurons and their chemical components. Chemical signaling between neurons is mediated by neurotransmitters, neuropeptides, hormones, neuromodulators, and many other types of signaling molecules. Many neurological diseases arise due to an imbalance in the brain's neurochemistry. For example, in Parkinson's disease, there is an imbalance in the brain's level of dopamine. Medications include neurochemicals that are used to alter brain function and treat disorders of the brain. A typical neurochemist might study how the chemical components of the brain interact, neural plasticity, neural development, physical changes in the brain during disease, and changes in the brain during aging. [ 14 ] [ 15 ] One of the major areas of research within neurochemistry is how post-traumatic stress disorder alters the brain. Neurotransmitter level fluctuations can dictate whether a PTSD episode occurs and how long the episode lasts. Dopamine has less of an effect than norepinephrine. Different neurochemicals affect different parts of the brain, which allows drugs used for PTSD to avoid undesired effects on other brain processes. An effective medication to help alleviate nightmares associated with PTSD is prazosin. [ 16 ]
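The GABA-mediated hyperpolarization described above can be made quantitative with the Nernst equation, a standard electrophysiology relation that is not drawn from this article's sources; the chloride concentrations used here are typical textbook values assumed for illustration.

```latex
E_{\mathrm{Cl}}
  = \frac{RT}{zF}\,\ln\frac{[\mathrm{Cl}^-]_{\mathrm{out}}}{[\mathrm{Cl}^-]_{\mathrm{in}}}
  \approx -61.5\ \mathrm{mV} \times \log_{10}\frac{120\ \mathrm{mM}}{7\ \mathrm{mM}}
  \approx -76\ \mathrm{mV}
  \qquad (z = -1,\ T = 310\ \mathrm{K})
```

Because this chloride equilibrium potential is more negative than a typical resting potential of roughly −65 mV, opening GABA-gated chloride channels pulls the membrane toward about −76 mV, i.e. it hyperpolarizes (or at least shunts) the neuron.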
https://en.wikipedia.org/wiki/Neurochemistry
Neurochemistry International is a peer-reviewed scientific journal covering research in neurochemistry, including molecular and cellular neurochemistry, neuropharmacology and genetic aspects of central nervous system function, neuroimmunology, metabolism, as well as the neurochemistry of neurological and psychiatric disorders of the CNS. It is published by Elsevier and the editor-in-chief is Michael Robinson (Children's Hospital of Philadelphia). According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.297. [ 1 ]
https://en.wikipedia.org/wiki/Neurochemistry_International
Neurocriminology is an emerging sub-discipline of biocriminology and criminology that applies brain imaging techniques and principles from neuroscience to understand, predict, and prevent crime. While crime is partially a social and environmental problem, the main idea behind neurocriminology (also known as neurolaw) is that the condition of an individual's brain often needs to be included in the analysis for a complete understanding. [ 1 ] [ 2 ] This can include conditions such as brain tumors, psychoses, sociopathy, sleepwalking, and many more. Deviant brain theories have always been part of biocriminology, which explains crime with biological reasons. [ 3 ] Neurocriminology has become mainstream during the past two decades, [ timeframe? ] [ 4 ] since contemporary biocriminologists focus almost exclusively on the brain [ 5 ] due to significant advances in neuroscience. Even though neurocriminology is still at odds with traditional sociological theories of crime, [ 4 ] it is becoming more popular in the scientific community. [ 6 ] The origins of neurocriminology go back to one of the founders of modern criminology, the 19th-century Italian psychiatrist and prison doctor Cesare Lombroso, whose belief that crime originated from brain abnormalities was partly based on phrenological theories about the shape and size of the human head. Lombroso conducted a postmortem on a serial killer and rapist who had an unusual indentation at the base of the skull. Lombroso discovered a hollow part in the killer's brain where the cerebellum would be. Lombroso's theory was that crime originated in part from abnormal brain physiology and that violent criminals were throwbacks to less evolved human types identifiable by ape-like physical characteristics. Criminals, he believed, could be identified by physical traits, such as a large jaw and sloping forehead. [ 6 ] Contemporary neuroscientists further developed his idea that the physiology and traits of the brain underlie all crime. [ 7 ] The term "neurocriminology" was first introduced [ when? ] by James Hilborn (Cognitive Centre of Canada) and adopted [ when? ] by the leading researcher in the field, Dr. Adrian Raine, the chair of the Criminology Department at the University of Pennsylvania. [ 8 ] He was the first to conduct a brain imaging study on violent criminals. [ when? ] [ 9 ] Many recent studies [ which? ] have revealed that sometimes structural and functional abnormalities [ which? ] are so striking that anyone can see them. [ citation needed ] Some violent offenders, [ which? ] however, have subtle structural or functional abnormalities [ which? ] and even highly experienced neuroradiologists cannot detect these irregularities right away. [ citation needed ] Yet, the abnormalities can be detected using brain imaging and state-of-the-art analytic tools. [ 10 ] Studies on structural deficiencies suggest that people consistently behaving antisocially have structurally impaired brains. [ citation needed ] The abnormalities can be either of general character or affect specific regions of the brain that control emotions or aggression or are responsible for ethical decisions: Low number of neurons in the prefrontal cortex. A study in 2000 determined that people with a history of persistent antisocial behavior had an 11 percent reduction in the volume of gray matter in the prefrontal cortex, while white matter volume was normal. [ 11 ]
Similarly, a 2009 meta-analysis, which pooled together the findings of 12 anatomical brain-imaging studies conducted on offender populations, found that the prefrontal cortex of the brain is indeed structurally impaired in offenders. [ 12 ] Underdeveloped amygdalae. Two studies found that both the left and especially the right amygdalae are impaired in psychopaths. The psychopaths had on average an 18 percent reduction in the volume of the right amygdala. [ 13 ] [ 14 ] Cavum septi pellucidi maldevelopment. A study in 2010 suggested that people with cavum septi pellucidi were prone to psychopathy and antisocial personality disorder, and had more charges and convictions for criminal offenses. This brain maldevelopment was especially linked to lifelong antisocial behavior, i.e. a reckless disregard for self and others, lack of remorse, and aggression. [ 15 ] Bigger right hippocampus. A 2004 study suggested that the psychopaths' right hippocampus, which partially controls emotions and regulates aggression, was significantly bigger than the left. This asymmetry was also present in normal people, but it was much more noticeable in psychopaths. [ 16 ] Increase in the volume of the striatum. A study in 2010 found that psychopathic individuals showed a 10 percent increase in the volume of the striatum. [ 17 ] Damage by foreign objects. A large number of studies on structural damage by foreign objects convincingly show that adults suffering head injuries damaging the prefrontal cortex show impulsive and antisocial behavior that does not conform to the norms of society. [ 18 ] There are a number of famous life stories showing the same causal connection. For example, P. Gage was a well-respected, well-liked, and responsible gentleman. In 1848, because of a construction accident, he suffered serious damage to his brain when a metal rod propelled by an explosive entered his lower left cheek and exited from the top-middle part of his head. Gage healed quickly. After that accident, however, he became erratic, disrespectful, and vulgar. Gage had been transformed from a well-controlled, well-respected person into an individual with psychopathic traits. [ 19 ] Damage by tumors. There are also a number of famous U.S. criminal cases showing that damage to the brain by tumors can result in the same transformation as damage by foreign objects. Charles Whitman, for instance, was a young man who studied architectural engineering at the University of Texas. Whitman had no history of violence or crime. As a child, he scored 138 on the Stanford-Binet IQ test, placing in the 99th percentile. He was an Eagle Scout, volunteered as a scoutmaster, and served in the Marine Corps. In 1966 Whitman unexpectedly killed his mother as well as his wife, ascended the belltower of the University of Texas at Austin, and fired a rifle at students below. He killed 15 people and wounded 31 more before police officers shot him. Whitman in his final note complained of an inability to control his thoughts and requested an autopsy, which revealed a brain tumor in the hypothalamus region of his brain, a growth that, some hypothesized, put pressure on his amygdala. [ 20 ] [ 21 ] Another example would be Michael Oft. Oft was a teacher in Virginia who had no prior psychiatric or deviant behavioral history. At the age of forty, his behavior suddenly changed. He began to frequent massage parlors, collect child pornography, and abuse his stepdaughter, and was soon found guilty of child molestation.
Mr. Oft opted for a treatment program for pedophiles, but still couldn't resist soliciting sexual favors from staff and other clients at the rehabilitation center. A neurologist advised a brain scan, which showed a tumor growing at the base of his orbitofrontal cortex, compressing the right prefrontal region of his brain. After the tumor was removed, Mr. Oft's emotions, behavior and sexual activity returned to normal. But after several months of normal behavior, Mr. Oft again began to collect child pornography. Neurologists rescanned his brain and found that the tumor had grown back. After the second surgery removing the tumor, his behavior has remained entirely appropriate. [ 22 ] Similarly to the structural studies, neurofunctional studies have shown that the brains of criminals and psychopaths are not only structured differently but also operate in a different way. As can be seen below, both structural and functional abnormalities tend to affect the same areas of the brain. These are the major abnormalities found: Lack of Activation in the Prefrontal Cortex. A number of studies replicated the observation that violent criminals' brains showed a significant reduction in prefrontal glucose metabolism. [ 23 ] [ 12 ] Reduced Activity in the Amygdala. A study found that individuals with high psychopathy scores showed reduced activity in the amygdala during emotional, personal moral decision-making. [ 24 ] Dysfunctional Posterior Cingulate. Two studies found that the posterior cingulate functions poorly in adult criminal psychopaths and aggressive patients. [ 25 ] [ 26 ] Reduced Cerebral Blood Flow in the Angular Gyrus. A couple of studies found reduced cerebral blood flow in the angular gyrus of murderers and impulsive, violent criminals. [ 27 ] [ 28 ] [ 29 ] Higher Activation of Subcortical Limbic Regions. A 1998 study showed higher activation of subcortical limbic regions in two groups of reactive and proactive murderers, especially in the more "emotional" right hemisphere of the brain. [ 30 ] Functional Disturbances of the Hippocampus and Its Parahippocampal Gyrus. A number of studies suggest that this region of the brain is not working properly in murderers and violent offenders in general. [ 31 ] [ 32 ] Differences in hormone levels: A 2022 study observed reported crime and hormone levels (mainly testosterone and cortisol) within a population of university students as consenting participants. The research found a positive direct correlation between testosterone levels and criminal behavior, particularly in terms of impulsive behavior and aggression. However, further tests may need to be conducted to determine whether those behaviors are connected to criminal behavior through the dominance-seeking behavior associated with testosterone, or more through testosterone's influence on the mesolimbic reward system via the instant gratification related to committing crime. It was also found that cortisol, a major hormone in the stress response system, is correlated with an increase in criminal behavior at both high and low levels. In response to stress or threats, high levels of cortisol boost energy, suppress the immune system and increase cardiovascular activity, while low levels are associated with signs of psychopathy, including a lack of empathy. The two hormones were found to interact to influence criminal behavior, with low levels of cortisol and baseline levels of testosterone correlating with income-generating crime. [ 33 ]
Effects of drugs: Illegal drug use and drug abuse are found to be highly correlated with antisocial behaviors leading to crime. Drugs function to mimic and take the place of naturally occurring neurotransmitters (chemical brain signals) that activate brain chemical receptors and affect arousal, mood, and physiological and cognitive function, among other neurophysiological effects. In cases of addiction, particular drugs may affect the brain's reward system, making it overly sensitive to the drug, thus making naturally occurring, healthy behaviors less rewarding and increasing deviant behaviors like attention-seeking, impulsivity, and aggression (often related to withdrawal), all of which can promote criminal behavior. In particular, inhibition caused by drug use can impair regular brain functioning, especially that of the prefrontal cortex, impairing the ability to make decisions and perform the higher-level thinking and reasoning that is otherwise critical in preventing criminal and deviant behaviors. [ 34 ] Unlike the founding father of criminology, Cesare Lombroso, who thought that crime was fundamentally biological in its origin and that criminals lacked free will altogether, contemporary neurocriminologists seem to take a middle-ground approach. [ citation needed ] They do not argue that biological factors alone cause behavioral problems, but recognize that behavior results from interaction between biology and environment. [ 35 ] [ 36 ] Some authors, [ who? ] however, are more determinist in their views. As Stanford neuroscientist David Eagleman writes: "Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment. In fact, free will may end up being so small that we eventually think about bad decision-making in the same way we think about any physical process, such as diabetes or lung disease." [ 37 ] US legal defense teams increasingly use brain scans as mitigating evidence in trials of violent criminals and sex offenders. See Neurolaw for more. Here are some of the most famous cases: In 1991, a sixty-five-year-old advertising executive with no prior history of crime or violence strangled his wife after an argument, opened the window, and threw her out of their 12th-floor apartment. His defense team had structural brain scans done using MRI and PET. The images showed a large piece missing from the prefrontal cortex of the brain: a subarachnoid cyst was growing in his left frontal lobe. The defense team used these images to argue that the defendant, Weinstein, had an impaired ability to regulate his emotions and make rational decisions. The team went with an insanity defense, and the prosecution and defense agreed to a plea of manslaughter. As a result, Weinstein was given a seven-year sentence in contrast to the twenty-five-year sentence he would have served if he had been convicted of second-degree murder. He ended up serving until 2006. [ 38 ] Bustamante was a well-behaved teenager who suddenly, at the age of 22, became a career criminal. His crimes included theft, breaking and entering, drug offenses, and robbery. In 1990 Bustamante was charged with a homicide. The defense team discovered that their client had suffered a head injury from a crowbar at the age of twenty.
Bustamante's behavior changed fundamentally after that, transforming him from a normal individual into an impulsive and emotionally labile criminal. The defense team had their client's brain scanned, which revealed malfunctioning of the prefrontal cortex. In the end, the jury believed that Bustamante's brain was not normal and spared him from the death penalty. [ 39 ] In 1999, Page robbed, raped and killed a female student in Denver. He was later found guilty of first-degree murder and was a candidate for the death penalty. Professor A. Raine from the University of Pennsylvania was an expert witness for the defense and brought Page into a laboratory to assess his brain function. Brain imaging scans revealed a distinct lack of activation in the ventral prefrontal cortex. Professor Raine argued for a deep-rooted biological explanation for Mr. Page's violence, and Page escaped the death penalty partly on the basis of his brain pathology. [ 6 ] Even though currently there are no preventive programs in place utilizing the recent discoveries in neurocriminology, there are a number of offender rehabilitation programs (Cognitive Centre of Canada). Some scientists [ who? ] propose using brain imaging to help decide which soon-to-be-released offenders are at greater risk of reoffending. The brain imaging data would be used along with common factors like age, prior arrests, and marital status. [ 6 ] To support this idea, in a 2013 study of 96 male offenders in New Mexico's prisons, Professor Kent Kiehl from the University of New Mexico found that offenders with low activity in the anterior cingulate cortex were twice as likely to commit an offense in the four years after their release as those who had high activity in this region. [ 6 ] Similarly, Dustin Pardini conducted a study showing that men with a smaller amygdala are three times more likely to commit violence three years after their release. [ 40 ] Trials have demonstrated the efficacy of a number of medications, i.e. stimulants, antipsychotics, antidepressants and mood stabilizers, in diminishing aggression in adolescents and children. [ 6 ] Even simple omega-3 supplementation in the diets of young offenders reduces offending and aggression. [ 41 ] [ 42 ] However, drug treatment is subject to variation based on biological and environmental influences. Variation in genes predisposes differences in biological systems and in brain structure and function within individuals, influencing outcomes. [ 34 ] Meditation can also affect brains, and even change them permanently. In 2003 Professor Richie Davidson from the University of Wisconsin performed a revolutionary study. People were randomly selected into either a mindfulness training group or a control group that was put on a waiting list for training. Davidson showed that even eight weekly sessions of meditation enhanced left frontal EEG functioning. [ 43 ] A similar study was later replicated by Professor Hölzel. [ 44 ] In preventing crime on the basis of associations with neurobiological function, there could also be adverse effects in the form of increased stigma around those with atypical brain functioning and mental disorders. Although much research has accumulated in neurocriminology, not all atypical brain functions objectively result in deviant, criminal, or problematic behaviors. This can potentially bias the categorization of those with divergent mental functioning as people who are unable to make morally correct decisions.
It is also worth considering that, while focusing on neurobiological aspects, the social-environmental facets and causes of criminal behaviors cannot be ignored. Research has found that although early intervention may benefit those who are at risk of violent, antisocial behaviors, especially children and adolescents, it can also have adverse effects. When stigma is associated with the labeling of mental functions, it can increase anxiety and potentially trigger the development of maladaptive cognitions and narratives. Although neurological research is important in crime prevention, intervention within the criminal justice system is ideally carried out while respecting the rights of people and obtaining consent for prevention strategies. [ 45 ]
https://en.wikipedia.org/wiki/Neurocriminology
A neurodegenerative disease is caused by the progressive loss of neurons, in the process known as neurodegeneration. [ 2 ] [ 3 ] Neuronal damage may also ultimately result in cell death. Neurodegenerative diseases include amyotrophic lateral sclerosis, multiple sclerosis, Parkinson's disease, Alzheimer's disease, Huntington's disease, multiple system atrophy, tauopathies, and prion diseases. Neurodegeneration can be found in the brain at many different levels of neuronal circuitry, ranging from molecular to systemic. [ 4 ] Because there is no known way to reverse the progressive degeneration of neurons, these diseases are considered to be incurable; however, research has shown that the two major contributing factors to neurodegeneration are oxidative stress and inflammation. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Biomedical research has revealed many similarities between these diseases at the subcellular level, including atypical protein assemblies (as in proteinopathy) and induced cell death. [ 9 ] [ 10 ] These similarities suggest that therapeutic advances against one neurodegenerative disease might ameliorate other diseases as well. Among the neurodegenerative diseases, it is estimated that 55 million people worldwide had dementia in 2019, and that by 2050 this figure will increase to 139 million people. [ 11 ] The consequences of neurodegeneration can vary widely depending on the specific region affected, ranging from issues related to movement to the development of dementia. [ 12 ] [ 13 ] Alzheimer's disease (AD) is a chronic neurodegenerative disease that results in the loss of neurons and synapses in the cerebral cortex and certain subcortical structures, leading to gross atrophy of the temporal lobe, parietal lobe, and parts of the frontal cortex and cingulate gyrus. [ 14 ] It is the most common neurodegenerative disease. [ 1 ] Even with billions of dollars spent on finding a treatment for Alzheimer's disease, no effective treatments have been found. [ 15 ] In clinical trials, AD therapeutic strategies have a failure rate of 99.5%. [ 16 ] Reasons for this failure rate include inappropriate drug doses, invalid target and participant selection, and inadequate knowledge of the pathophysiology of AD. Currently, diagnosis of Alzheimer's is subpar, and better methods need to be utilized for various aspects of clinical diagnosis. [ 17 ] Alzheimer's has a 20% misdiagnosis rate. [ 17 ] AD pathology is primarily characterized by the presence of amyloid plaques and neurofibrillary tangles. Plaques are made up of small peptides, typically 39–43 amino acids in length, called amyloid beta (also written as A-beta or Aβ). Amyloid beta is a fragment of a larger protein called amyloid precursor protein (APP), a transmembrane protein that penetrates the neuron's membrane. APP appears to play roles in normal neuron growth, survival and post-injury repair. [ 18 ] [ 19 ] APP is cleaved into smaller fragments by enzymes such as gamma secretase and beta secretase. [ 20 ] One of these fragments gives rise to fibrils of amyloid beta, which can self-assemble into dense extracellular amyloid plaques. [ 21 ] [ 22 ] Parkinson's disease (PD) is the second most common neurodegenerative disorder. [ 23 ] It typically manifests as bradykinesia, rigidity, resting tremor and postural instability.
The crude prevalence rate of PD has been reported to range from 15 per 100,000 to 12,500 per 100,000, and the incidence of PD from 15 per 100,000 to 328 per 100,000, with the disease being less common in Asian countries. PD is primarily characterized by the death of dopaminergic neurons in the substantia nigra, a region of the midbrain. The cause of this selective cell death is unknown. Notably, alpha-synuclein–ubiquitin complexes and aggregates are observed to accumulate in Lewy bodies within affected neurons. It is thought that defects in protein transport machinery and regulation, such as RAB1, may play a role in this disease mechanism. [ 24 ] Impaired axonal transport of alpha-synuclein may also lead to its accumulation in Lewy bodies. Experiments have revealed reduced transport rates of both wild-type and two familial Parkinson's disease-associated mutant alpha-synucleins through axons of cultured neurons. [ 25 ] Membrane damage by alpha-synuclein could be another Parkinson's disease mechanism. [ 26 ] The main known risk factor is age. Mutations in genes such as α-synuclein (SNCA), leucine-rich repeat kinase 2 (LRRK2), glucocerebrosidase (GBA), and tau protein (MAPT) can also cause hereditary PD or increase PD risk. [ 27 ] While PD is the second most common neurodegenerative disorder, problems with diagnosis still persist. [ 28 ] An impaired sense of smell is a widespread symptom of Parkinson's disease (PD); however, some neurologists question its usefulness for diagnosis. [ 28 ] This assessment method is a source of controversy among medical professionals. [ 28 ] The gut microbiome might play a role in the diagnosis of PD, and research suggests various ways this could revolutionize the future of PD treatment. [ 29 ] Huntington's disease (HD) is a rare autosomal dominant neurodegenerative disorder caused by mutations in the huntingtin gene (HTT). HD is characterized by loss of medium spiny neurons and astrogliosis. [ 30 ] [ 31 ] [ 32 ] The first brain region to be substantially affected is the striatum, followed by degeneration of the frontal and temporal cortices. [ 33 ] The striatum's subthalamic nuclei send control signals to the globus pallidus, which initiates and modulates motion. The weaker signals from the subthalamic nuclei thus cause reduced initiation and modulation of movement, resulting in the characteristic movements of the disorder, notably chorea. [ 34 ] Huntington's disease presents itself later in life even though the proteins that cause the disease act toward manifestation from early stages in the humans affected. [ 35 ] Along with being a neurodegenerative disorder, HD has links to problems with neurodevelopment. [ 35 ] HD is caused by polyglutamine tract expansion in the huntingtin gene, resulting in mutant huntingtin. Aggregates of mutant huntingtin form as inclusion bodies in neurons, and may be directly toxic. Additionally, they may damage molecular motors and microtubules, interfering with normal axonal transport and leading to impaired transport of important cargoes such as BDNF. [ 25 ] Huntington's disease currently has no effective treatments that would modify the disease. [ 36 ] Multiple sclerosis (MS) is a chronic debilitating demyelinating disease of the central nervous system, caused by an autoimmune attack resulting in the progressive loss of the myelin sheath on neuronal axons. [ 37 ]
The resultant decrease in the speed of signal transduction leads to a loss of functionality that includes both cognitive and motor impairment, depending on the location of the lesion. [ 37 ] The progression of MS occurs due to episodes of increasing inflammation, which is proposed to be due to the release of antigens such as myelin oligodendrocyte glycoprotein, myelin basic protein, and proteolipid protein, causing an autoimmune response. [ 38 ] This sets off a cascade of signaling molecules that result in T cells, B cells, and macrophages crossing the blood-brain barrier and attacking myelin on neuronal axons, leading to inflammation. [ 39 ] Further release of antigens drives subsequent degeneration, causing increased inflammation. [ 40 ] Multiple sclerosis presents itself as a spectrum based on the degree of inflammation; a majority of patients experience early relapsing and remitting episodes of neuronal deterioration following a period of recovery. Some of these individuals may transition to a more linear progression of the disease, while about 15% of others begin with a progressive course at the onset of multiple sclerosis. The inflammatory response contributes to the loss of grey matter, and as a result current literature devotes itself to combatting the auto-inflammatory aspect of the disease. [ 39 ] While there are several proposed causal links between EBV and the HLA-DRB1*15:01 allele and the onset of MS – they may contribute to the degree of autoimmune attack and the resultant inflammation – they do not determine the onset of MS. [ 39 ] Amyotrophic lateral sclerosis (ALS), commonly referred to as Lou Gehrig's disease, is a rare neurodegenerative disorder characterized by the gradual loss of both upper motor neurons (UMNs) and lower motor neurons (LMNs). [ 41 ] Although initial symptoms may vary, most patients develop skeletal muscle weakness that progresses to involve the entire body. [ 41 ] The precise etiology of ALS remains unknown. In 1993, missense mutations in the gene encoding the antioxidant enzyme superoxide dismutase 1 (SOD1) were discovered in a subset of patients with familial ALS. More recently, TAR DNA-binding protein 43 (TDP-43) and fused in sarcoma (FUS) protein aggregates have been implicated in some cases of the disease, and a mutation in chromosome 9 (C9orf72) is thought to be the most common known cause of sporadic ALS. Early diagnosis of ALS is harder than with other neurodegenerative diseases as there are no highly effective means of determining its early onset. [ 41 ] Currently, research is being done on the diagnosis of ALS through upper motor neuron tests. [ 42 ] The Penn Upper Motor Neuron Score (PUMNS) consists of 28 criteria with a score range of 0–32. [ 42 ] A higher score indicates a higher level of burden present on the upper motor neurons. [ 42 ] The PUMNS has proven quite effective in determining the burden that exists on upper motor neurons in affected patients. [ 42 ] Independent research provided in vitro evidence that the primary cellular sites where SOD1 mutations act are located on astrocytes. [ 43 ] [ 44 ] Astrocytes then cause the toxic effects on the motor neurons. [ 45 ] The specific mechanism of toxicity still needs to be investigated, but the findings are significant because they implicate cells other than neurons in neurodegeneration. [ 46 ] Batten disease is a rare and fatal recessive neurodegenerative disorder that begins in childhood. [ 47 ]
Batten disease is the common name for a group of lysosomal storage disorders known as the neuronal ceroid lipofuscinoses (NCLs), each caused by a specific gene mutation, [ 47 ] of which there are thirteen. [ 48 ] Batten disease is quite rare, with a worldwide prevalence of about 1 in every 100,000 live births. [ 48 ] In North America, CLN3 disease (juvenile NCL) typically manifests between the ages of 4 and 7. [ 49 ] Batten disease is characterized by motor impairment, epilepsy, dementia, vision loss, and shortened lifespan. [ 50 ] Loss of vision is a common first sign of Batten disease. [ 49 ] Vision loss is typically followed by cognitive and behavioral changes, seizures, and loss of the ability to walk. [ 49 ] It is common for people to develop cardiac arrhythmias and difficulties eating food as the disease progresses. [ 49 ] Batten disease diagnosis depends on a combination of many criteria: clinical signs and symptoms, evaluations of the eye, electroencephalograms (EEG), and brain magnetic resonance imaging (MRI) results. [ 48 ] The diagnosis provided by these results is corroborated by genetic and biochemical testing. [ 48 ] It is only in recent years that more models have been created to expedite the research process for methods to treat Batten disease. [ 48 ] Creutzfeldt–Jakob disease (CJD) is a prion disease that is characterized by rapidly progressive dementia. [ 51 ] Misfolded proteins called prions aggregate in brain tissue, leading to nerve cell death. [ 52 ] Variant Creutzfeldt–Jakob disease (vCJD) is the infectious form that comes from the meat of a cow that was infected with bovine spongiform encephalopathy, also called mad cow disease. [ 53 ] The greatest risk factor for neurodegenerative diseases is aging. Mitochondrial DNA mutations as well as oxidative stress both contribute to aging. [ 54 ] Many of these diseases are late-onset, meaning there is some factor that changes as a person ages for each disease. [ 9 ] One constant factor is that in each disease, neurons gradually lose function as the disease progresses with age. It has been proposed that DNA damage accumulation provides the underlying causative link between aging and neurodegenerative disease. [ 55 ] [ 56 ] About 20–40% of healthy people between 60 and 78 years old experience discernible decrements in cognitive performance in several domains including working, spatial, and episodic memory, and processing speed. [ 57 ] A study using electronic health records indicates that 45 viral exposures (22 of which were replicated in the UK Biobank) can significantly elevate the risk of neurodegenerative disease, including up to 15 years after infection. [ 58 ] [ 59 ] Many neurodegenerative diseases are caused by genetic mutations, most of which are located in completely unrelated genes. In many of the different diseases, the mutated gene has a common feature: a repeat of the CAG nucleotide triplet. CAG codes for the amino acid glutamine, so a run of CAG repeats results in a polyglutamine (polyQ) tract. Diseases associated with such mutations are known as trinucleotide repeat disorders. [ 60 ] [ 61 ] Polyglutamine repeats typically cause dominant pathogenesis. Extra glutamine residues can acquire toxic properties through a variety of mechanisms, including irregular protein folding and degradation pathways, altered subcellular localization, and abnormal interactions with other cellular proteins. [ 60 ]
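To make the repeat-expansion idea concrete, the sketch below counts the longest uninterrupted CAG run in a coding sequence, which is how the length of the encoded polyglutamine tract is usually expressed. The sequence and helper function are hypothetical illustrations, not taken from this article's sources; in Huntington's disease a commonly cited pathogenic threshold is roughly 36 or more repeats.

```python
import re

def longest_cag_run(dna):
    """Return the length, in repeats, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

# Hypothetical coding-sequence fragment carrying an expanded repeat.
seq = "ATG" + "CAG" * 42 + "CAACAGCCGCCA"
repeats = longest_cag_run(seq)
print(f"{repeats} CAG repeats -> polyglutamine tract of {repeats} glutamines")
```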
PolyQ studies often use a variety of animal models because there is such a clearly defined trigger – repeat expansion. Extensive research has been done using models in the nematode (C. elegans), the fruit fly (Drosophila), mice, and non-human primates. [ 61 ] [ 62 ] Nine inherited neurodegenerative diseases are caused by the expansion of the CAG trinucleotide and its polyQ tract, including Huntington's disease and the spinocerebellar ataxias. [ 63 ] The presence of epigenetic modifications of certain genes has been demonstrated in this type of pathology. An example is the FKBP5 gene, which progressively increases its expression with age and has been related to Braak staging and increased tau pathology both in vitro and in mouse models of AD. [ 64 ] Several neurodegenerative diseases are classified as proteopathies as they are associated with the aggregation of misfolded proteins. Protein toxicity is one of the key mechanisms of many neurodegenerative diseases. [ 65 ] Parkinson's disease and Huntington's disease are both late-onset and associated with the accumulation of intracellular toxic proteins. Diseases caused by the aggregation of proteins are known as proteopathies, and they are primarily caused by protein aggregates. [ 9 ] There are two main avenues eukaryotic cells use to remove troublesome proteins or organelles: the ubiquitin–proteasome system and autophagy. Damage to the membranes of organelles by monomeric or oligomeric proteins could also contribute to these diseases. Alpha-synuclein can damage membranes by inducing membrane curvature, [ 26 ] and can cause extensive tubulation and vesiculation when incubated with artificial phospholipid vesicles. [ 26 ] The tubes formed from these lipid vesicles consist of both micellar and bilayer tubes. Extensive induction of membrane curvature is deleterious to the cell and would eventually lead to cell death. Apart from tubular structures, alpha-synuclein can also form lipoprotein nanoparticles similar to apolipoproteins. The most common form of cell death in neurodegeneration is through the intrinsic mitochondrial apoptotic pathway. This pathway controls the activation of caspase-9 by regulating the release of cytochrome c from the mitochondrial intermembrane space. Reactive oxygen species (ROS) are normal byproducts of mitochondrial respiratory chain activity. ROS concentration is mediated by mitochondrial antioxidants such as manganese superoxide dismutase (SOD2) and glutathione peroxidase. Overproduction of ROS (oxidative stress) is a central feature of all neurodegenerative disorders. In addition to the generation of ROS, mitochondria are also involved in life-sustaining functions including calcium homeostasis, PCD, mitochondrial fission and fusion, lipid concentration of the mitochondrial membranes, and the mitochondrial permeability transition. Mitochondrial disease leading to neurodegeneration is likely, at least on some level, to involve all of these functions. [ 66 ] There is strong evidence that mitochondrial dysfunction and oxidative stress play a causal role in neurodegenerative disease pathogenesis, including in four of the better known diseases: Alzheimer's, Parkinson's, Huntington's, and amyotrophic lateral sclerosis. [ 54 ] Neurons are particularly vulnerable to oxidative damage due to their strong metabolic activity associated with high transcription levels, high oxygen consumption, and weak antioxidant defense. [ 67 ] [ 68 ]
[ 67 ] [ 68 ] The brain metabolizes as much as a fifth of consumed oxygen, and reactive oxygen species produced by oxidative metabolism are a major source of DNA damage in the brain . Damage to a cell's DNA is particularly harmful because DNA is the blueprint for protein production and, unlike other molecules, it cannot simply be replaced by re-synthesis. The vulnerability of post-mitotic neurons to DNA damage (such as oxidative lesions or certain types of DNA strand breaks), coupled with a gradual decline in the activities of repair mechanisms , could lead to accumulation of DNA damage with age and contribute to brain aging and neurodegeneration. [ 69 ] DNA single-strand breaks are common and are associated with the neurodegenerative disease ataxia–oculomotor apraxia . [ 70 ] [ 68 ] Increased oxidative DNA damage in the brain is associated with Alzheimer's disease and Parkinson's disease . [ 70 ] Defective DNA repair has been linked to neurodegenerative disorders such as Alzheimer's disease, amyotrophic lateral sclerosis , ataxia telangiectasia , Cockayne syndrome , Parkinson's disease and xeroderma pigmentosum . [ 70 ] [ 69 ] Axonal swelling and axonal spheroids have been observed in many different neurodegenerative diseases. This suggests that defective axons are not only present in diseased neurons, but may also cause pathological insult due to the accumulation of organelles. Axonal transport can be disrupted by a variety of mechanisms including damage to kinesin and cytoplasmic dynein , microtubules , cargoes, and mitochondria . [ 25 ] When axonal transport is severely disrupted, a degenerative pathway known as Wallerian-like degeneration is often triggered. [ 71 ] Programmed cell death (PCD) is death of a cell in any form, mediated by an intracellular program. [ 72 ] This process can be activated in neurodegenerative diseases including Parkinson's disease, amyotrophic lateral sclerosis, Alzheimer's disease and Huntington's disease. [ 73 ] PCD observed in neurodegenerative diseases may be directly pathogenic; alternatively, PCD may occur in response to other injury or disease processes. [ 10 ] Apoptosis is a form of programmed cell death in multicellular organisms. It is one of the main types of programmed cell death (PCD) and involves a series of biochemical events leading to a characteristic cell morphology and death. Caspases (cysteine-aspartic acid proteases) cleave at very specific amino acid residues. There are two types of caspases: initiators and effectors . Initiator caspases cleave inactive forms of effector caspases. This activates the effectors, which in turn cleave other proteins, resulting in the initiation of apoptosis. [ 10 ] Autophagy is a form of intracellular phagocytosis in which a cell actively consumes damaged organelles or misfolded proteins by encapsulating them into an autophagosome , which fuses with a lysosome to destroy its contents. Because many neurodegenerative diseases show unusual protein aggregates, it is hypothesized that defects in autophagy could be a common mechanism of neurodegeneration. [ 10 ] PCD can also occur via non-apoptotic processes, also known as Type III or cytoplasmic cell death. For example, type III PCD might be caused by trophotoxicity, or hyperactivation of trophic factor receptors. Cytotoxins that induce PCD can cause necrosis at low concentrations, or aponecrosis (a combination of apoptosis and necrosis) at higher concentrations. 
It is still unclear exactly what combination of apoptosis, non-apoptosis, and necrosis causes different kinds of aponecrosis. [ 10 ] Transglutaminases are enzymes ubiquitously present in the human body, including the brain. [ 75 ] The main function of transglutaminases is to bind proteins and peptides intra- and intermolecularly by a type of covalent bond termed an isopeptide bond , in a reaction termed transamidation or crosslinking . [ 75 ] Transglutaminase binding of these proteins and peptides makes them clump together. The resulting structures are extremely resistant to chemical and mechanical disruption. [ 75 ] Most relevant human neurodegenerative diseases share the property of having abnormal structures made up of proteins and peptides . [ 75 ] Each of these neurodegenerative diseases has one (or several) specific main protein or peptide. In Alzheimer's disease , these are amyloid-beta and tau . In Parkinson's disease, it is alpha-synuclein . In Huntington's disease, it is huntingtin . [ 75 ] Transglutaminase substrates : Amyloid-beta , tau , alpha-synuclein and huntingtin have been shown to be substrates of transglutaminases in vitro or in vivo, that is, they can be bonded by transglutaminases via covalent bonds to each other and potentially to any other transglutaminase substrate in the brain. [ 75 ] Increased transglutaminase expression: It has been shown that in these neurodegenerative diseases (Alzheimer's disease, Parkinson's disease, and Huntington's disease) the expression of the transglutaminase enzyme is increased. [ 75 ] Presence of isopeptide bonds in these structures: The presence of isopeptide bonds (the result of the transglutaminase reaction) has been detected in the abnormal structures that are characteristic of these neurodegenerative diseases . [ 75 ] Co-localization: Co-localization of transglutaminase-mediated isopeptide bonds with these abnormal structures has been detected in autopsied brains of patients with these diseases. [ 75 ] The process of neurodegeneration is not well understood, so the diseases that stem from it have, as yet, no cures. In the search for effective treatments (as opposed to palliative care ), investigators employ animal models of disease to test potential therapeutic agents. Model organisms provide an inexpensive and relatively quick means to perform two main functions: target identification and target validation. Together, these help assess the value of specific therapeutic strategies and drugs in ameliorating disease severity. An example is the drug Dimebon by Medivation, Inc. In 2009 this drug was in phase III clinical trials for use in Alzheimer's disease, and also phase II clinical trials for use in Huntington's disease. [ 61 ] In March 2010, the results of a phase III clinical trial were released; the investigational Alzheimer's disease drug Dimebon failed in the pivotal CONNECTION trial of patients with mild-to-moderate disease. [ 76 ] With CONCERT, the remaining Pfizer and Medivation phase III trial for Dimebon (latrepirdine) in Alzheimer's disease failed in 2012, effectively ending development for this indication. [ 77 ] In another experiment using a rat model of Alzheimer's disease, it was demonstrated that systemic administration of hypothalamic proline-rich peptide (PRP)-1 offers neuroprotective effects and can prevent neurodegeneration in the hippocampus induced by amyloid-beta 25–35. This suggests that there could be therapeutic value to PRP-1. 
[ 78 ] Protein degradation offers therapeutic options, both in preventing the synthesis of irregular proteins and in promoting their degradation. There is also interest in upregulating autophagy to help clear protein aggregates implicated in neurodegeneration. Both of these options involve very complex pathways that we are only beginning to understand. [ 9 ] The goal of immunotherapy is to enhance aspects of the immune system. Both active and passive vaccinations have been proposed for Alzheimer's disease and other conditions; however, more research must be done to prove safety and efficacy in humans. [ 79 ] A current therapeutic target for the treatment of Alzheimer's disease is the protease β-secretase, [ 80 ] which is involved in the amyloidogenic processing pathway that leads to the pathological accumulation of proteins in the brain. When amyloid precursor protein (APP) is cleaved by α-secretase [ 81 ] rather than β-secretase, the toxic β-amyloid fragment is not produced. Targeted inhibition [ 82 ] of β-secretase can potentially prevent the neuronal death that is responsible for the symptoms of Alzheimer's disease.
https://en.wikipedia.org/wiki/Neurodegenerative_disease
Neuroecology studies ways in which the structure and function of the brain result from adaptations to a specific habitat and niche . [ 1 ] It integrates the multiple disciplines of neuroscience , which examines the biological basis of cognitive and emotional processes, such as perception , memory , and decision-making , [ 2 ] with the field of ecology , which studies the relationship between living organisms and their physical environment. [ 3 ] In biology , the term 'adaptation' signifies the way evolutionary processes enhance an organism 's fitness to survive within a specific ecological context. This fitness includes the development of physical, cognitive, and emotional adaptations specifically suited to the environmental conditions in which the organism or phenotype lives, and in which its species or genotype evolves. [ 4 ] Neuroecology concentrates specifically on neurological adaptations, particularly those of the brain. The purview of this study encompasses two areas. Firstly, neuroecology studies how the physical structure and functional activity of neural networks in a phenotype are influenced by characteristics of the environmental context. This includes the way social stressors , interpersonal relationships, and physical conditions precipitate persistent alterations in the individual brain, providing the neural correlates of cognitive and emotional responses. Secondly, neuroecology studies how neural structure and activity common to a genotype is determined by natural selection of traits that benefit survival and reproduction in a specific environment. [ 5 ] [ 6 ] One research group in the field is the Cognitive Neuroecology Lab at the FMRIB Centre of the University of Oxford (UK) and the Donders Institute in Nijmegen (Netherlands).
https://en.wikipedia.org/wiki/Neuroecology
In philosophy and neuroscience , neuroethics is the study of both the ethics of neuroscience and the neuroscience of ethics. [ 1 ] [ 2 ] The ethics of neuroscience concerns the ethical, legal, and social impact of neuroscience, including the ways in which neurotechnology can be used to predict or alter human behavior and "the implications of our mechanistic understanding of brain function for society... integrating neuroscientific knowledge with ethical and social thought". [ 3 ] Some neuroethics problems are not fundamentally different from those encountered in bioethics . Others are unique to neuroethics because the brain, as the organ of the mind, has implications for broader philosophical problems, such as the nature of free will , moral responsibility , self-deception , and personal identity . [ 4 ] Examples of neuroethics topics are given later in this article ( see "Key issues in neuroethics" below ). The origin of the term "neuroethics" has occupied some writers. Rees and Rose claim neuroethics is a neologism that emerged only at the beginning of the 21st century, largely through the oral and written communications of ethicists and philosophers . According to Racine (2010), the term was coined by the Harvard physician Anneliese A. Pontius in 1973 in a paper entitled "Neuro-ethics of 'walking' in the newborn" for the journal Perceptual and Motor Skills . The author reproposed the term in 1993 in her paper for Psychological Reports , often wrongly cited as the first title containing the word "neuroethics". Before 1993, the American neurologist Ronald Cranford had used the term (see Cranford 1989). Illes (2003) records uses in the scientific literature from 1989 and 1991. Writer William Safire is widely credited with giving the word its current meaning in 2002, defining it as "the examination of what is right and wrong, good and bad about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain". [ 5 ] Neuroethics encompasses the myriad ways in which developments in basic and clinical neuroscience intersect with social and ethical issues. The field is so young that any attempt to define its scope and limits now will undoubtedly be proved wrong in the future, as neuroscience develops and its implications continue to be revealed. At present, however, we can discern two general categories of neuroethical issue: those emerging from what we can do and those emerging from what we know. In the first category are the ethical problems raised by advances in functional neuroimaging , psychopharmacology , brain implants and brain-machine interfaces . In the second category are the ethical problems raised by our growing understanding of the neural bases of behavior, personality, consciousness, and states of spiritual transcendence. Primitive societies for the most part lacked a system of neuroethics to guide them in facing the problems of mental illness and violence as civilization advanced. Trepanation led through a tortuous course to " psychosurgery ". [ 6 ] [ 7 ] Basic neuroscience research and psychosurgery advanced in tandem in the first half of the 20th century, but neuroscience ethics lagged behind science and technology. 
[ 8 ] Medical ethics in modern societies, even under democratic governments, not to mention authoritarian ones, has not kept pace with advances in technology despite announced social "progress", and ethics continues to lag behind science in dealing with the problem of mental illness in association with human violence . [ 9 ] [ 10 ] Unprovoked "pathological" aggression persists, reminding us daily that civilization is a step away from relapsing into barbarism. Neuroscience ethics (neuroethics) must keep up with advances in neuroscience research and remain separate from state-imposed mandates to face this challenge. [ 11 ] A recent writer on the history of psychosurgery as it relates to neuroethics concludes: "The lessons of history sagaciously reveal wherever the government has sought to alter medical ethics and enforce bureaucratic bioethics, the results have frequently vilified medical care and research. In the 20th century in both the communist USSR and Nazi Germany , medicine regressed after these authoritarian systems corrupted the ethics of the medical profession and forced it to descend to unprecedented barbarism. The Soviet psychiatrists ' and Nazi doctors' dark descent into barbarism was a product of physicians willingly cooperating with the totalitarian state , purportedly in the name of the " collective good ", at the expense of their individual patients." This must be kept in mind when establishing new guidelines in neuroscience research and bioethics. [ 11 ] There is no doubt that people were thinking and writing about the ethical implications of neuroscience for many years before the field adopted the label "neuroethics", and some of this work remains of great relevance and value. However, the early 21st century saw a tremendous surge in interest concerning the ethics of neuroscience, as evidenced by numerous meetings, publications, and organizations dedicated to this topic. In 2002, there were several meetings that drew together neuroscientists and ethicists to discuss neuroethics: the American Association for the Advancement of Science with the journal Neuron , the University of Pennsylvania , the Royal Society , Stanford University , and the Dana Foundation . This last meeting was the largest, and resulted in a book, Neuroethics: Mapping the Field , edited by Steven J. Marcus and published by Dana Press. That same year, the Economist ran a cover story entitled "Open Your Mind: The Ethics of Brain Science", and Nature published the article "Emerging ethical issues in neuroscience". [ 12 ] Further articles appeared on neuroethics in Nature Neuroscience , Neuron , and Brain and Cognition . Thereafter, the number of neuroethics meetings, symposia, and publications continued to grow. The over 38,000 members of the Society for Neuroscience recognized the importance of neuroethics by inaugurating an annual "special lecture" on the topic, first given by Donald Kennedy , editor-in-chief of Science magazine. Several overlapping networks of scientists and scholars began to coalesce around neuroethics-related projects and themes. For example, the American Society for Bioethics and Humanities established a Neuroethics Affinity Group, students at the London School of Economics established the Neuroscience and Society Network linking scholars from several different institutions, and a group of scientists and funders from around the world began discussing ways to support international collaboration in neuroethics through what came to be called the International Neuroethics Network. 
Stanford began publishing the monthly Stanford Neuroethics Newsletter, Penn developed the informational website neuroethics.upenn.edu, and the Neuroethics and Law Blog was launched. Several relevant books were published during this time as well: Sandra Ackerman's Hard Science, Hard Choices: Facts, Ethics and Policies Guiding Brain Science Today (Dana Press), Michael Gazzaniga 's The Ethical Brain (Dana Press), Judy Illes' edited volume, Neuroethics: Defining the Issues in Theory, Practice and Policy (Oxford University Press), Dai Rees and Steven Rose's edited volume The New Brain Sciences: Perils and Prospects (Cambridge University Press) and Steven Rose's The Future of the Brain (Oxford University Press). 2006 marked the founding of the International Neuroethics Society (INS) (originally the Neuroethics Society), an international group of scholars, scientists, clinicians, and other professionals who share an interest in the social, legal, ethical and policy implications of advances in neuroscience. The mission of the International Neuroethics Society "is to promote the development and responsible application of neuroscience through interdisciplinary and international research, education, outreach and public engagement for the benefit of people of all nations, ethnicities, and cultures". [ 13 ] The first President of the INS was Steven Hyman (2006–2014), succeeded by Barbara Sahakian (2014–2016). Judy Illes is the current President; like Hyman and Sahakian, she was also a pioneer in the field of neuroethics and a founding member of the INS. Over the next several years many centers for neuroethics were established. A 2014 review of the field lists 31 centers and programs around the world; [ 14 ] some of the longest-running include the Neuroethics Research Unit at the Institut de recherches cliniques de Montreal (IRCM), the National Core for Neuroethics, established at the University of British Columbia in 2007, the Center for Neurotechnology Studies of the Potomac Institute for Policy Studies , the Wellcome Centre for Neuroethics at the University of Oxford , and the Center for Neuroscience & Society at the University of Pennsylvania . Since 2017, neuroethics working groups across multiple organizations have published a spate of reports and guiding principles. In 2017, the Global Neuroethics Summit Delegates prepared a set of ethical questions to guide research in brain science, published in Neuron . [ 15 ] In December 2018, the Neuroethics Working Group of the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative proposed incorporating Neuroethics Guiding Principles into the research advanced by the Initiative. [ 16 ] In December 2019, the Organisation for Economic Co-operation and Development (OECD) confirmed a set of neuroethics principles and recommendations; this interdisciplinary group is now developing a toolkit for implementation, moving from the theoretical to the practical. [ 17 ] In early 2020, the Institute of Electrical and Electronics Engineers (IEEE) developed a neuroethical framework to facilitate the development of guidelines for engineers working on new neurotechnologies. [ 18 ] The books, articles and websites mentioned above are by no means a complete list of good neuroethics information sources. For example, readings and websites that focus on specific aspects of neuroethics, such as brain imaging or enhancement, are not included. 
Nor are more recent sources, such as Walter Glannon's book Bioethics and the Brain (Oxford University Press) and his reader, entitled Defining Right and Wrong in Brain Science (Dana Press). We should also mention here a book that was in many ways ahead of its time, Robert Blank's Brain Policy (published in 1999 by Georgetown University Press). The scholarly literature on neuroethics has grown so quickly that one cannot easily list all of the worthwhile articles, and several journals are now soliciting neuroethics submissions for publication, including the American Journal of Bioethics – Neuroscience , BioSocieties , the Journal of Cognitive Neuroscience , and Neuroethics . The web now has many sites, blogs, and portals offering information about neuroethics. A list can be found at the end of this entry. Neuroethics encompasses a wide range of issues, which can only be sampled here. Some have close ties to traditional biomedical ethics, in that different versions of these issues can arise in connection with organ systems other than the brain. The ethics of neurocognitive enhancement, that is, the use of drugs and other brain interventions to make normal people "better than well", is an example of a neuroethical issue with both familiar and novel aspects. On the one hand, we can be informed by previous bioethical work on physical enhancements such as doping for strength in sports and the use of human growth hormone for normal boys of short stature . On the other hand, there are also some arguably novel ethical issues that arise in connection with brain enhancement, because these enhancements affect how people think and feel, thus raising the relatively new issue of " cognitive liberty ". The growing role of psychopharmacology in everyday life raises a number of ethical issues, for example the influence of drug marketing on our conceptions of mental health and normalcy , and the increasingly malleable sense of personal identity that results from what Peter D. Kramer called "cosmetic psychopharmacology". Transhumanist philosophers such as David Pearce and Mark Alan Walker have argued that advancements in neuroscience could eventually make it feasible to artificially eliminate all physical and psychological suffering and artificially induce states of perpetual bliss. Pearce has stated: "It is predicted that the world's last unpleasant experience will be a precisely dateable event." [ 19 ] Pearce argues that physical pain could be replaced with "gradients of bliss" that provide the same functionality as pain, e.g. avoiding injury, but without the suffering. [ 20 ] Walker coined the term " biohappiness " to describe the idea of directly manipulating the biological roots of happiness in order to increase it. [ 21 ] However, brain intervention technologies could also extend the possible hedonic range in the opposite direction, making it feasible to create "hyperpain" or "dolorium" involving levels of suffering beyond the normal human range . [ 22 ] Nonpharmacologic methods of altering brain function are currently enjoying a period of rapid development, with a resurgence of psychosurgery for the treatment of medication-refractory mental illnesses and promising new therapies for neurological and psychiatric illnesses based on deep brain stimulation as well as relatively noninvasive transcranial stimulation methods. 
Research on brain-machine interfaces is primarily in a preclinical phase but promises to enable thought-based control of computers and robots by paralyzed patients. As the tragic history of frontal lobotomy reminds us, permanent alteration of the brain cannot be undertaken lightly. Although nonpharmacologic brain interventions are almost exclusively aimed at therapeutic goals, the US military sponsors research in this general area (and more specifically in the use of transcranial direct current stimulation ) that is presumably aimed at enhancing the capabilities of soldiers. [ 23 ] In addition to the important issues of safety and incidental findings mentioned above, some ethical issues arise from the unprecedented and rapidly developing ability to correlate brain activation with psychological states and traits. One of the most widely discussed new applications of imaging is based on correlations between brain activity and intentional deception . Intentional deception can be thought of in the context of a lie detector : scientists use brain imaging to look at certain parts of the brain during moments when a person is being deceptive. A number of different research groups have identified fMRI correlates of intentional deception in laboratory tasks, and despite the skepticism of many experts, the technique has already been commercialized. A more feasible application of brain imaging is " neuromarketing ", whereby people's conscious or unconscious reaction to certain products can purportedly be measured. Researchers are also finding brain imaging correlates of myriad psychological traits, including personality, intelligence, mental health vulnerabilities, attitudes toward particular ethnic groups, and predilection for violent crime. Unconscious racial attitudes may be manifest in brain activation. These capabilities of brain imaging, actual and potential, raise a number of ethical issues. The most obvious concern involves privacy . For example, employers, marketers, and the government all have a strong interest in knowing the abilities, personality, truthfulness and other mental contents of certain people. This raises the question of whether, when, and how to ensure the privacy of our own minds . Another ethical problem is that brain scans are often viewed as more accurate and objective than they in fact are. Many layers of signal processing, statistical analysis and interpretation separate imaged brain activity from the psychological traits and states inferred from it. There is a danger that the public (including judges and juries, employers, insurers, etc.) will ignore these complexities and treat brain images as a kind of indisputable truth. A related misconception is called neuro-realism: in its simplest form, this line of thought says that something is real because it can be measured with electronic equipment. A person who claims to have pain, or low libido, or unpleasant emotions is "really" sick if these symptoms are supported by a brain scan, and healthy or normal if correlates cannot be found in a brain scan. [ 24 ] [ 25 ] The case of phantom limbs demonstrates the inadequacy of this approach. While complete memory erasure is still an element of science fiction, certain neurological drugs have been proven to dampen the strength and emotional association of a memory. Propranolol, an FDA-approved drug, has been suggested to effectively dull the painful effects of traumatic memories if taken within 6 hours after the event occurs. 
[ 26 ] This has opened a discussion of the ethical implications, on the assumption that memory-erasure technology will only improve. Originally, propranolol was reserved for hypertension patients. However, doctors are permitted to use the drug for off-label purposes, leading to the question of whether they actually should . There are numerous reasons for skepticism; for one, it may prevent us from coming to terms with traumatic experiences; it may also tamper with our identities, lead us to an artificial sense of happiness, demean the genuineness of human life, or encourage some to forget memories they are morally obligated to keep. Whether or not it is ethical to fully or partially erase a patient's memory, the question is certainly becoming more relevant as this technology improves. [ 27 ] Additionally, the "humanization" of animal models has been raised as a topic of concern in the transplantation of human stem cell-derived organoids into other animal models. [ 32 ] Wetware computers may have substantial ethical implications, [ 36 ] for instance related to their possible potential for sentience and suffering, and to dual-use technology. Moreover, in some cases the human brain itself may be connected as a kind of "wetware" to other information technology systems, which may also have large social and ethical implications, [ 37 ] including issues related to intimate access to people's brains. [ 38 ] For example, in 2021 Chile became the first country to approve neurolaw that establishes rights to personal identity, free will and mental privacy. [ 39 ] The concept of artificial insects [ 40 ] may raise substantial ethical questions, including questions related to the decline in insect populations . In general, cognitive diversity, or some "optimum range of diversity", has been found to be highly valuable. Science and technology, such as gene editing technology, may raise related ethical issues. [ 43 ] There have also been speculations that cognitive enhancement technologies (CETs) may increase population-level cognitive diversity, e.g. as different people will choose to enhance different aspects of their cognition . Moral enhancement is also a topic in neuroethics. [ 44 ] Most of the issues concerning the use of stem cells in the brain are the same as the bioethical and ethical questions raised by stem cell use and research in general. Stem cell research is a very new field that poses many ethical questions concerning the allocation of stem cells as well as their possible uses. Since most stem cell research is still in its preliminary phase, most of the neuroethical issues surrounding stem cells are the same as those of stem cell ethics in general. More specifically, stem cell research has entered neuroscience through the treatment of neurodegenerative diseases and brain tumors. In these cases scientists are using neural stem cells to regenerate tissue and as carriers for gene therapy . In general, neuroethics revolves around a cost–benefit approach to find techniques and technologies that are most beneficial to patients. Progress has been made in using stem cells to treat certain neurodegenerative diseases such as Parkinson's disease . [ 45 ] A study done in 2011 showed that induced pluripotent stem cells (iPSCs) can be used to aid in Parkinson's research and treatment. 
The cells can be used to study the progression of Parkinson's as well as in regenerative treatment. Animal studies have shown that the use of iPSCs can improve motor skills and dopamine release of test subjects with Parkinson's. This study shows a positive outcome in the use of stem cells for neurological purposes. [ 46 ] Another study done in 2011 used stem cells to treat cerebral palsy . This study, however, was not as successful as the Parkinson's treatment. In this case stem cells were used to treat animal models that had been injured in a way that mimicked CP. This brings up a neuroethical issue of animal models used in science. Since most of their "diseases" are inflicted and do not occur naturally, they cannot always be reliable examples of how a person with the actual disease would respond to treatment. The stem cells used did survive implantation, but did not show significant nerve regeneration. However, studies are ongoing in this area. [ 47 ] As discussed, stem cells are used to treat degenerative diseases. One form of degenerative disease that can occur in the brain as well as throughout the body is an autoimmune disease . Autoimmune diseases cause the body to "attack" its own cells and therefore destroy those cells as well as whatever functional purpose those cells have or contribute to. One autoimmune disease that affects the central nervous system is multiple sclerosis . In this disease the body attacks the glial cells that form myelin coats around the axons of neurons. This causes the nervous system to essentially "short circuit" and pass information very slowly. Stem cell therapy has been used to try to cure some of the damage caused by the body in MS. Hematopoietic stem cell transplantation has been used to try to cure MS patients by essentially "reprogramming" their immune system. The main risk encountered with this form of treatment is the possibility of rejection of the stem cells. If the hematopoietic stem cells can be harvested from the individual, the risk of rejection is much lower, but there can be a risk that those cells are programmed to induce MS. If the tissue is donated from another individual, there is a high risk of rejection leading to possibly fatal toxicity in the recipient's body. Considering that there are fairly good treatments for MS, the use of stem cells in this case may carry a higher cost than the benefits it produces. However, as research continues, perhaps stem cells will truly become a viable treatment for MS as well as other autoimmune diseases. [ 48 ] These are just some examples of neurological diseases in which stem cell treatment has been researched. In general, the future looks promising for stem cell application in the field of neurology. However, possible complications lie in the overall ethics of stem cell use, possible recipient rejection, as well as over-proliferation of the cells causing possible brain tumors. Ongoing research will further contribute to the decision of whether stem cells should be used in the brain and whether their benefits truly outweigh their costs. The primary ethical dilemma brought up in stem cell research concerns the source of human embryonic stem cells (hESCs). As the name states, hESCs come from embryos. To be more specific, they come from the inner cell mass of a blastocyst , which is an early stage of the embryo. However, that mass of cells could have the potential to give rise to human life, and therein lies the problem. 
Often, this argument leads back to a similar moral debate held around abortion. The question is: when does a mass of cells gain personhood and autonomy? [ 49 ] Some individuals believe that an embryo is in fact a person at the moment of conception and that using an embryo for anything other than creating a baby would essentially be killing a baby. On the other end of the spectrum, people argue that the small ball of cells at that point only has the potential to become a fetus, and that potentiality, even in natural conception, is far from guaranteed. According to a study done by developmental biologists, 75–80% of embryos created through intercourse are naturally lost before they can become fetuses. [ 50 ] This debate is not one that has a right or wrong answer, nor can it be clearly settled. Much of the ethical dilemma surrounding hESCs relies on individual beliefs about life and the potential for scientific advancement versus creating new human life. Patients in a coma, vegetative state, or minimally conscious state pose ethical challenges. The patients are unable to respond; therefore, the assessment of their needs can only be approached by adopting a third-person perspective. They are unable to communicate their pain levels, quality of life, or end-of-life preferences. Neuroscience and brain imaging have allowed us to explore the brain activity of these patients more thoroughly. Recent findings from studies using functional magnetic resonance imaging have changed the way we view vegetative patients. The images have shown that aspects of emotional processing, language comprehension, and even conscious awareness might be retained in patients whose behavior suggests a vegetative state. If this is the case, it is unethical to allow a third party to dictate the life and future of the patient. [ 51 ] For example, defining death is an issue that arises with patients with severe traumatic brain injuries. The decision to withdraw life-sustaining care from these patients can be based on uncertain assessments about the individual's conscious awareness. Case reports have shown that patients in a persistent vegetative state can recover unexpectedly. This raises the ethical question of the premature termination of care by physicians. The hope is that one day, neuroimaging technologies can help us to define these different states of consciousness and enable us to communicate with patients in vegetative states in a way that was never before possible. [ 52 ] [ 53 ] The clinical translation of these advanced technologies is of vital importance for the medical management of these challenging patients. In this situation, neuroscience has both revealed ethical issues and offered possible solutions. [ 54 ] Cosmetic neuro-pharmacology, the use of drugs to improve cognition in normal healthy individuals, is highly controversial. Some case reports with the antidepressant Prozac indicated that patients seemed "better than well", and authors hypothesized that this effect might be observed in individuals not afflicted with psychiatric disorders. [ 55 ] Following these case reports, much controversy arose over the veracity and ethics of the cosmetic use of these antidepressants. Opponents of cosmetic pharmacology believe that such drug usage is unethical and that the concept of cosmetic pharmacology is a manifestation of naive consumerism. 
Proponents, such as philosopher Arthur Caplan , state that it is an individual's (rather than the government's or a physician's) right to determine whether to use a drug for cosmetic purposes. [ 56 ] Anjan Chatterjee , a neurologist at the University of Pennsylvania , has argued that western medicine stands on the brink of a neuro-enhancement revolution in which people will be able to improve their memory and attention through pharmacological means. Jacob Appel, a Brown University bioethicist, has raised concerns about the possibility of employers mandating such enhancement for their workers. [ 57 ] [ 58 ] The ethical concerns regarding pharmacological enhancement are not limited to Europe and North America; indeed, there is increasing attention given to cultural and regulatory contexts for this phenomenon around the globe. [ 59 ] Political neuromarketing is the idea of using advertisements to convince the mind of a voter to vote for a certain party. This has already been happening in elections over the years. In the 2006 reelection of Governor Arnold Schwarzenegger , he was double digits off in the voting compared to his Democratic opponent. However, Schwarzenegger's theme in this campaign was whether the voters would want to continue his reforms or go back to the days of the recalled governor, Gray Davis . In normal marketing, voters would use "detail, numbers, facts and figures to prove we were better off under the new governor". [ 60 ] However, with neuromarketing, voters followed powerful advertisement visuals and used these visuals to convince themselves that Schwarzenegger was the better candidate. Political neuromarketing remains controversial, and the ethics behind it are debatable. Some argue that political neuromarketing will cause voters to make rash decisions, while others argue that these messages are beneficial because they depict what the politicians can do. However, such influence over political decisions could keep voters from seeing the reality of things. Voters may not look into the details of the reforms, personality, and morality each person brings to their political campaign, and may instead be swayed by how powerful the advertisements seem to be. However, there are also people who disagree with this idea. Darryl Howard, "a consultant to two Republican winners on November 2, says he crafted neuromarketing-based messages for TV, direct mail and speeches for Senate, Congressional and Gubernatorial clients in 2010". He says that the advertisements presented show honesty, and goes on to describe how he and other politicians decide which advertisements are the most effective. [ 61 ] Neuroscience has led to a deeper understanding of the chemical imbalances present in a disordered brain. In turn, this has resulted in the creation of new treatments and medications to treat these disorders. When these new treatments are first being tested , the experiments prompt ethical questions. First, because the treatment affects the brain, the side effects can be unique and sometimes severe. A particular kind of side effect that many subjects have claimed to experience in neurological treatment tests is a change in " personal identity ". Although this is a difficult ethical dilemma because there are no clear and undisputed definitions of personality, self, and identity, neurological treatments can result in patients losing parts of "themselves", such as memories or moods. 
Yet another ethical dispute in neurological treatment research is the choice of patients . From a perspective of justice, priority should be given to those who are most seriously impaired and who will benefit most from the intervention. However, in a test group, scientists must select patients to secure a favorable risk–benefit ratio. Setting priorities becomes more difficult when a patient's chance of benefiting and the seriousness of their impairment do not align. For example, many times an older patient will be excluded despite the seriousness of their disorder simply because they are not as strong or as likely to benefit from the treatment. [ 62 ] The main ethical issue at the heart of neurological treatment research on human subjects is promoting high-quality scientific research in the interest of future patients, while at the same time respecting and guarding the rights and interests of the research subjects. This is particularly difficult in the field of neurology because damage to the brain is often permanent and will change a patient's way of life forever. Neuroethics also encompasses the ethical issues raised by neuroscience as it affects our understanding of the world and of ourselves in the world. For example, if everything we do is physically caused by our brains, which are in turn a product of our genes and our life experiences, how can we be held responsible for our actions? A crime in the United States requires a " guilty act " and a " guilty mind ". As neuropsychiatric evaluations have become more commonly used in the criminal justice system and neuroimaging technologies have given us a more direct way of viewing brain injuries, scholars have cautioned that this could lead to the inability to hold anyone criminally responsible for their actions. In this way, neuroimaging evidence could suggest that there is no free will and that each action a person makes is simply the product of past actions and biological impulses that are out of our control. [ 63 ] The question of whether and how personal autonomy is compatible with neuroscience ethics and the responsibility of neuroscientists to society and the state is a central one for neuroethics. [ 54 ] However, there is some controversy over whether autonomy entails the concept of 'free will' or is a 'moral-political' principle separate from metaphysical quandaries. [ 64 ] In late 2013 U.S. President Barack Obama made recommendations to the Presidential Commission for the Study of Bioethical Issues as part of his $100 million Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The following spring, discussion resumed in an interview and article sponsored by Agence France-Presse (AFP): "It is absolutely critical... to integrate ethics from the get-go into neuroscience research," and not "for the first time after something has gone wrong", said Amy Gutmann , Bioethics Commission Chair. [ 65 ] But no consensus has been reached. Miguel Faria , a Professor of Neurosurgery and an Associate Editor in Chief of Surgical Neurology International , who was not involved in the Commission's work, said, "any ethics approach must be based upon respect for the individual, as doctors pledge according to the Hippocratic Oath which includes vows to be humble, respect privacy and doing no harm; and pursuing a path based on population-based ethics is just as dangerous as having no medical ethics at all". [ 66 ] Why the danger of population-based bioethics? 
[ 65 ] Faria asserts, "it is centered on utilitarianism , monetary considerations, and the fiscal and political interests of the state, rather than committed to placing the interest of the individual patient or experimental subject above all other considerations". [ 67 ] For her part, Gutmann believes the next step is "to examine more deeply the ethical implications of neuroscience research and its effects on society". [ 65 ] Neuroethics (main editors: Adrian Carter , Monash University, and Katrina Sifferd , Elmhurst University) is an international peer-reviewed journal dedicated to academic articles on the ethical, legal, political, social and philosophical issues provoked by research in the contemporary sciences of the mind, especially, but not only, neuroscience, psychiatry and psychology. The journal publishes high-quality reflections on questions raised by the sciences of the mind, and on the ways in which the sciences of the mind illuminate longstanding debates in ethics. AJOB Neuroscience (main editor: Veljko Dubljevic , North Carolina State University), the official journal of the International Neuroethics Society , is devoted to covering critical topics in the emerging field of neuroethics. [ 68 ] The journal is a new avenue in bioethics and strives to present a forum in which to foster international discourse on topics in neuroethics, provide a platform for debating current issues in neuroethics, and enable the incubation of new emerging priorities in neuroethics. AJOB Neuroscience launched in 2007 as a section of the American Journal of Bioethics and became an independent journal in 2010, publishing four issues a year. [ 69 ] Issues in Neuroscience
https://en.wikipedia.org/wiki/Neuroethics
Gene and protein identifiers (human, mouse): Entrez 4747, 18039; Ensembl ENSG00000277586, ENSMUSG00000022055; UniProt P07196, P08551; RefSeq mRNA NM_006158, NM_010910; RefSeq protein NP_006149, NP_035040. Neurofilament light polypeptide is a protein that in humans is encoded by the NEFL gene . [ 5 ] Neurofilament light polypeptide is a member of the intermediate filament protein family. This protein family consists of over 50 human proteins divided into five major classes: the Class I and II keratins , Class III vimentin , GFAP , desmin and others, the Class IV neurofilaments, and the Class V nuclear lamins . There are four major neurofilament subunits, NF-L, NF-M, NF-H and α-internexin. These form heteropolymers that assemble to produce 10 nm neurofilaments, which are expressed only in neurons, where they are major structural proteins, particularly concentrated in large projection axons. The NF-L protein is encoded by the NEFL gene . [ 6 ] [ 5 ] These neurofilament heteropolymers assemble into the cytoskeleton of axons, where they provide structural support and help regulate axonal diameter and conduction velocity. Axons are particularly sensitive to mechanical and metabolic compromise, and as a result axonal degeneration is a significant problem in many neurological disorders. Neurofilament light chain is a biomarker that can be measured with immunoassays in cerebrospinal fluid and plasma and reflects axonal damage in a wide variety of neurological disorders. [ 7 ] NF-L antibodies employed in the most widely used NF-L assays are specific for cleaved forms of NF-L generated by proteolysis induced by cell death. [ 8 ] Methods used in different studies for NfL measurement are sandwich enzyme-linked immunosorbent assay (ELISA), electrochemiluminescence, and high-sensitivity single molecule array (SIMOA). [ 9 ] The detection of neurofilament subunits in CSF and blood has become widely used as a biomarker of ongoing axonal compromise. It is a useful marker for disease monitoring in amyotrophic lateral sclerosis , [ 10 ] multiple sclerosis , [ 11 ] Alzheimer's disease , [ 12 ] [ 13 ] and more recently Huntington's disease . [ 14 ] It is also a promising marker for follow-up of patients with brain tumors. [ 15 ] Higher levels of blood or CSF NF-L have been associated with increased mortality, as would be expected, since release of this protein reflects ongoing axonal loss. [ 16 ] Mutations in NEFL are associated with Charcot–Marie–Tooth disease types 1F and 2E. [ 6 ] Neurofilament light polypeptide (NF-L) is a key structural component of the neuronal cytoskeleton, assembling into neurofilaments along with other intermediate filament proteins such as NF-M, NF-H, and α-internexin. [ 17 ] [ 18 ] These proteins form obligate heteropolymers that organize into 10 nm diameter filaments, [ 17 ] [ 18 ] which are selectively expressed in neurons and are particularly concentrated in axons. [ 9 ] Neurofilaments provide essential structural support, help maintain axonal diameter, [ 19 ] [ 18 ] and contribute to the efficient conduction of nerve impulses. [ 19 ] The localization and organization of NF-L in neurons can be visualized using immunohistochemical techniques. In tissue culture preparations of rat brain cells, antibodies specific to NF-L label large neurons prominently in green, revealing their extensive cytoskeletal architecture. [ 20 ] In the same cultures, staining for α-internexin in red highlights surrounding neuronal stem cells, indicating the differential expression of these intermediate filament proteins during neural development and differentiation. 
[ 18 ] In histological sections of human brain tissue, NF-L can also be visualized using immunostaining. For example, in formalin-fixed and paraffin-embedded sections of the human cerebellum, an antibody specific to NF-L reveals its presence throughout various neuronal compartments. [ 7 ] The brown-stained antibody binding highlights the axonal processes of basket cells, the parallel fibers of granule cells, [ 21 ] [ 18 ] the perikarya of Purkinje cells, [ 21 ] and other axonal elements. Counterstaining with a blue dye allows for the visualization of cell nuclei, delineating the granular layer on the left side of the section and the molecular layer on the right. [ 21 ] These staining patterns underscore the widespread and structurally critical role of NF-L in both developing and mature neurons. [ 19 ] [ 18 ] Neurofilament light polypeptide has been shown to interact with a number of other proteins.
https://en.wikipedia.org/wiki/Neurofilament_light_polypeptide
Neurofurans are 22-carbon compounds formed nonenzymatically by free radical-mediated peroxidation of docosahexaenoic acid (DHA), an ω-3 essential fatty acid . The neurofurans are similar to the isofurans, containing a substituted tetrahydrofuran ring, and are formed under similar conditions of oxidative stress . Measurement of the neurofurans may ultimately prove useful in diagnosis, timing, and selection of dosages in the treatment and chemoprevention of neurodegenerative disease . [ 1 ]
https://en.wikipedia.org/wiki/Neurofuran
Neurogenesis is the process by which nervous system cells, the neurons , are produced by neural stem cells (NSCs). [ 1 ] This occurs in all species of animals except the porifera (sponges) and placozoans . [ 2 ] Types of NSCs include neuroepithelial cells (NECs), radial glial cells (RGCs), basal progenitors (BPs), intermediate neuronal precursors (INPs), subventricular zone astrocytes , and subgranular zone radial astrocytes , among others. [ 2 ] Neurogenesis is most active during embryonic development and is responsible for producing all the various types of neurons of the organism, but it continues throughout adult life in a variety of organisms. [ 2 ] Once born, neurons do not divide (see mitosis ), and many will live the lifespan of the animal, except under extraordinary and usually pathogenic circumstances. [ 3 ] During embryonic development, the mammalian central nervous system (CNS; brain and spinal cord ) is derived from the neural tube , which contains NSCs that will later generate neurons . [ 3 ] However, neurogenesis does not begin until a sufficient population of NSCs has been achieved. These early stem cells are called neuroepithelial cells (NECs), but soon take on a highly elongated radial morphology and are then known as radial glial cells (RGCs). [ 3 ] RGCs are the primary stem cells of the mammalian CNS, and reside in the embryonic ventricular zone , which lies adjacent to the central fluid-filled cavity ( ventricular system ) of the neural tube . [ 5 ] [ 6 ] Following RGC proliferation, neurogenesis involves a final cell division of the parent RGC, which produces one of two possible outcomes. First, this may generate a subclass of neuronal progenitors called intermediate neuronal precursors (INPs), which will divide one or more times to produce neurons. Alternatively, daughter neurons may be produced directly. Neurons do not immediately form neural circuits through the growth of axons and dendrites. Instead, newborn neurons must first migrate long distances to their final destinations, maturing and finally generating neural circuitry. For example, neurons born in the ventricular zone migrate radially to the cortical plate , which is where neurons accumulate to form the cerebral cortex . [ 5 ] [ 6 ] Thus, the generation of neurons occurs in a specific tissue compartment or 'neurogenic niche' occupied by their parent stem cells. The rate of neurogenesis and the type of neuron generated (broadly, excitatory or inhibitory) are principally determined by molecular and genetic factors. These factors notably include the Notch signaling pathway , and many genes have been linked to Notch pathway regulation . [ 7 ] [ 8 ] The genes and mechanisms involved in regulating neurogenesis are the subject of intensive research in academic, pharmaceutical , and government settings worldwide. The amount of time required to generate all the neurons of the CNS varies widely across mammals, and brain neurogenesis is not always complete by the time of birth. [ 3 ] For example, mice undergo cortical neurogenesis from about embryonic (post-conceptional) day (E) 11 to E17, and are born at about E19.5. [ 9 ] Ferrets are born at E42, although their period of cortical neurogenesis does not end until a few days after birth. [ 10 ] In contrast, neurogenesis in humans generally begins around gestational week (GW) 10 and ends around GW 25, with birth at about GW 38–40. 
[ 11 ] As embryonic development of the mammalian brain unfolds, neural progenitor and stem cells switch from proliferative divisions to differentiative divisions . This progression leads to the generation of neurons and glia that populate cortical layers . Epigenetic modifications play a key role in regulating gene expression in the cellular differentiation of neural stem cells . Epigenetic modifications include DNA cytosine methylation to form 5-methylcytosine and 5-methylcytosine demethylation . [ 12 ] [ 13 ] These modifications are critical for cell fate determination in the developing and adult mammalian brain. DNA cytosine methylation is catalyzed by DNA methyltransferases (DNMTs) . Methylcytosine demethylation is catalyzed in several stages by TET enzymes that carry out oxidative reactions (e.g. 5-methylcytosine to 5-hydroxymethylcytosine ) and enzymes of the DNA base excision repair (BER) pathway. [ 12 ] Neurogenesis can be a complex process in some mammals. In rodents, for example, neurons in the central nervous system arise from three types of neural stem and progenitor cells: neuroepithelial cells, radial glial cells and basal progenitors, which go through three main types of division: symmetric proliferative division, asymmetric neurogenic division, and symmetric neurogenic division. Of the three cell types, neuroepithelial cells that pass through neurogenic divisions have a much more extended cell cycle than those that go through proliferative divisions, such as the radial glial cells and basal progenitors. [ 14 ] In humans, adult neurogenesis has been shown to occur at low levels compared with development, and in only three regions of the brain: the adult subventricular zone (SVZ) of the lateral ventricles , the amygdala and the dentate gyrus of the hippocampus . [ 15 ] [ 16 ] [ 17 ] In many mammals, including rodents, the olfactory bulb, a brain region containing cells that detect smell , features the integration of adult-born neurons, which migrate from the SVZ of the striatum to the olfactory bulb through the rostral migratory stream (RMS). [ 15 ] [ 18 ] The migrating neuroblasts in the olfactory bulb become interneurons that help the brain communicate with these sensory cells. The majority of those interneurons are inhibitory granule cells , but a small number are periglomerular cells . In the adult SVZ, the primary neural stem cells are SVZ astrocytes rather than RGCs. Most of these adult neural stem cells lie dormant in the adult, but in response to certain signals, these dormant cells, or B cells, go through a series of stages, first producing proliferating cells, or C cells. The C cells then produce neuroblasts , or A cells, that will become neurons. [ 16 ] Significant neurogenesis also occurs during adulthood in the hippocampus of many mammals, from rodents to some primates , although its existence in adult humans is debated. [ 19 ] [ 20 ] [ 21 ] The hippocampus plays a crucial role in the formation of new declarative memories, and it has been theorized that the reason human infants cannot form declarative memories is that they are still undergoing extensive neurogenesis in the hippocampus and their memory-generating circuits are immature. [ 22 ] Many environmental factors, such as exercise, stress, and antidepressants, have been reported to change the rate of neurogenesis within the hippocampus of rodents. 
[ 23 ] [ 24 ] Some evidence indicates that postnatal neurogenesis in the human hippocampus decreases sharply during the first year or two after birth, dropping to "undetectable levels in adults." [ 19 ] Neurogenesis has been best characterized in model organisms such as the fruit fly Drosophila melanogaster. Neurogenesis in these organisms occurs in the medulla cortex region of their optic lobes. These organisms can represent a model for the genetic analysis of adult neurogenesis and brain regeneration. Research has discussed how the study of "damage-responsive progenitor cells" in Drosophila can help to identify regenerative neurogenesis and suggest new ways to promote brain repair. A recent study showed that "low-level adult neurogenesis" occurs in Drosophila, specifically in the medulla cortex region, where neural precursors can increase the production of new neurons. [ 25 ] [ 26 ] [ 27 ] Notch signaling was first described in Drosophila, where it controls a cell-to-cell signaling process called lateral inhibition, in which neurons are selectively generated from epithelial cells. [ 28 ] [ 29 ] In some vertebrates, regenerative neurogenesis has also been shown to occur. [ 30 ] An in vitro and in vivo study found that DMT present in the ayahuasca infusion promotes neurogenesis in the subgranular zone of the dentate gyrus in the hippocampus. [ 31 ] A study showed that a low dose (0.1 mg/kg) of psilocybin given to mice increased neurogenesis in the hippocampus 2 weeks after administration, while a high dose (1 mg/kg) significantly decreased neurogenesis. [ 32 ] No orally available drugs are known to elicit neurogenesis outside of the already neurogenic niches. There is evidence that new neurons are produced in the dentate gyrus of the adult mammalian hippocampus, the brain region important for learning, motivation, memory, and emotion. A study reported that newly made cells in the adult mouse hippocampus can display passive membrane properties, action potentials and synaptic inputs similar to those found in mature dentate granule cells. These findings suggest that these newly made cells can mature into functional neurons in the adult mammalian brain. [ 33 ] Recent studies confirm that microglia, the resident immune cells of the brain, establish direct contacts with the cell bodies of developing neurons, and through these connections regulate neurogenesis, migration, integration and the formation of neuronal networks. [ 34 ]
https://en.wikipedia.org/wiki/Neurogenesis
Neurogenetics studies the role of genetics in the development and function of the nervous system. It considers neural characteristics as phenotypes (i.e. manifestations, measurable or not, of the genetic make-up of an individual), and is mainly based on the observation that the nervous systems of individuals, even of those belonging to the same species, may not be identical. As the name implies, it draws aspects from both the studies of neuroscience and genetics, focusing in particular on how the genetic code an organism carries affects its expressed traits. Mutations in this genetic sequence can have a wide range of effects on the quality of life of the individual. Neurological diseases, behavior and personality are all studied in the context of neurogenetics. The field of neurogenetics emerged in the mid to late 20th century, with advances closely following those in available technology. Currently, neurogenetics is the center of much research utilizing cutting-edge techniques. The field of neurogenetics emerged from advances made in molecular biology and genetics, and from a desire to understand the link between genes, behavior, the brain, and neurological disorders and diseases. The field started to expand in the 1960s through the research of Seymour Benzer, considered by some to be the father of neurogenetics. [ 1 ] His pioneering work with Drosophila helped to elucidate the link between circadian rhythms and genes, which led to further investigations into other behavior traits. He also started conducting research in neurodegeneration in fruit flies in an attempt to discover ways to suppress neurological diseases in humans. Many of the techniques he used and conclusions he drew would drive the field forward. [ 2 ] Early analysis relied on statistical interpretation through processes such as LOD (logarithm of odds) scores of pedigrees and other observational methods such as affected sib-pairs, which looks at phenotype and IBD (identity by descent) configuration. Many of the disorders studied early on, including Alzheimer's, Huntington's and amyotrophic lateral sclerosis (ALS), are still at the center of much research to this day. [ 3 ] By the late 1980s, new advances in genetics such as recombinant DNA technology and reverse genetics allowed for the broader use of DNA polymorphisms to test for linkage between DNA and gene defects. This process is sometimes referred to as linkage analysis. [ 4 ] [ 5 ] By the 1990s, ever-advancing technology had made genetic analysis more feasible and available. This decade saw a marked increase in identifying the specific role genes played in relation to neurological disorders. Advances were made in areas including, but not limited to, Fragile X syndrome, Alzheimer's, Parkinson's, epilepsy and ALS. [ 6 ] While the genetic basis of simple diseases and disorders has been accurately pinpointed, the genetics behind more complex neurological disorders is still a source of ongoing research. New developments such as genome-wide association studies (GWAS) have brought vast new resources within grasp. With this new information, genetic variability within the human population and possibly linked diseases can be more readily discerned. [ 7 ] Neurodegenerative diseases are a more common subset of neurological disorders, with examples being Alzheimer's disease and Parkinson's disease.
Currently no viable treatments exist that actually reverse the progression of neurodegenerative diseases; however, neurogenetics is emerging as one field that might yield a causative connection. The discovery of linkages could then lead to therapeutic drugs, which could reverse brain degeneration. [ 8 ] One of the most noticeable results of further research into neurogenetics is a greater knowledge of gene loci that show linkage to neurological diseases. The table below represents a sampling of specific gene locations identified to play a role in selected neurological diseases based on prevalence in the United States. [ 9 ] [ 10 ] [ 11 ] [ 12 ] Logarithm of odds (LOD) is a statistical technique used to estimate the probability of gene linkage between traits. LOD is often used in conjunction with pedigrees, maps of a family's genetic make-up, in order to yield more accurate estimations. A key benefit of this technique is its ability to give reliable results in both large and small sample sizes, which is a marked advantage in laboratory research (a brief illustrative formulation of the LOD score is sketched below, after this passage). [ 14 ] [ 15 ] Quantitative trait loci (QTL) mapping is another statistical method used to determine the chromosomal positions of a set of genes responsible for a given trait. By identifying specific genetic markers for the genes of interest in a recombinant inbred strain, the amount of interaction between these genes and their relation to the observed phenotype can be determined through complex statistical analysis. In a neurogenetics laboratory, the phenotype of a model organism is observed by assessing the morphology of its brain through thin slices. [ 16 ] QTL mapping can also be carried out in humans, though brain morphologies are examined using nuclear magnetic resonance imaging (MRI) rather than brain slices. Human beings pose a greater challenge for QTL analysis because the genetic population cannot be as carefully controlled as that of an inbred recombinant population, which can result in sources of statistical error. [ 17 ] Recombinant DNA is an important method of research in many fields, including neurogenetics. It is used to make alterations to an organism's genome, usually causing it to over- or under-express a certain gene of interest, or to express a mutated form of it. The results of these experiments can provide information on that gene's role in the organism's body, and its importance in survival and fitness. The host organisms are then screened with the aid of a toxic drug to which the selectable marker, introduced along with the gene of interest, confers resistance. The use of recombinant DNA is an example of reverse genetics, where researchers create a mutant genotype and analyze the resulting phenotype. In forward genetics, an organism with a particular phenotype is identified first, and its genotype is then analyzed. [ 18 ] [ 19 ] Model organisms are an important tool in many areas of research, including the field of neurogenetics. By studying creatures with simpler nervous systems and smaller genomes, scientists can better understand their biological processes and apply them to more complex organisms, such as humans. Because they are low-maintenance and have highly mapped genomes, mice, Drosophila, [ 20 ] and C. elegans [ 21 ] are very common. Zebrafish [ 22 ] and prairie voles [ 23 ] have also become more common, especially in the social and behavioral scopes of neurogenetics. In addition to examining how genetic mutations affect the actual structure of the brain, researchers in neurogenetics also examine how these mutations affect cognition and behavior.
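To make the LOD statistic described above concrete, here is its standard formulation for two-point linkage analysis (a textbook expression, not a result from the cited studies; θ denotes the recombination fraction between the marker and the trait locus):

```latex
\mathrm{LOD}(\theta) \;=\; \log_{10}
\frac{L(\text{pedigree data} \mid \theta)}
     {L(\text{pedigree data} \mid \theta = 0.5)}
```

By convention, a maximum LOD score of 3 or more (odds of at least 1000:1 in favor of linkage) is taken as significant evidence of linkage, while a score of −2 or less is taken as evidence against linkage at that value of θ.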
One method of examining these effects on cognition and behavior involves purposely engineering model organisms with mutations of certain genes of interest. These animals are then trained to perform certain types of tasks, such as pulling a lever in order to gain a reward. The speed of their learning, the retention of the learned behavior, and other factors are then compared to the results of healthy organisms to determine what kind of an effect – if any – the mutation has had on these higher processes. The results of this research can help identify genes that may be associated with conditions involving cognitive and learning deficiencies. [ 24 ] Many research facilities seek out volunteers with certain conditions or illnesses to participate in studies. Model organisms, while important, cannot completely model the complexity of the human body, making volunteers a key part of the progression of research. Along with gathering some basic information about medical history and the extent of their symptoms, samples are taken from the participants, including blood, cerebrospinal fluid, and/or muscle tissue. These tissue samples are then genetically sequenced, and the genomes are added to current database collections. The growth of these databases will eventually allow researchers to better understand the genetic nuances of these conditions and bring therapy treatments closer to reality. Current areas of interest in this field are wide-ranging, spanning the maintenance of circadian rhythms, the progression of neurodegenerative disorders, the persistence of periodic disorders, and the effects of mitochondrial decay on metabolism. [ 25 ] Such databases are used in genome-wide association studies (GWAS). Examples of phenotypes investigated by notable neurogenetics GWAS include: Advances in molecular biology techniques and species-wide genome projects have made it possible to map out an individual's entire genome. Whether genetic or environmental factors are primarily responsible for an individual's personality has long been a topic of debate. [ 28 ] [ 29 ] Thanks to the advances being made in the field of neurogenetics, researchers have begun to tackle this question by beginning to map out genes and correlate them to different personality traits. [ 28 ] There is little to no evidence to suggest that the presence of a single gene indicates that an individual will express one style of behavior over another; rather, having a specific gene could make one more predisposed to displaying this type of behavior. It is starting to become clear that most genetically influenced behaviors are due to the effects of many variants within many genes, in addition to other neurological regulating factors like neurotransmitter levels. Because many behavioral characteristics have been conserved across species for generations, researchers are able to use animal subjects such as mice and rats, but also fruit flies, worms, and zebrafish, [ 20 ] [ 21 ] to try to determine specific genes that correlate to behavior and attempt to match these with human genes. [ 30 ] While it is true that variation between species can appear to be pronounced, at their most basic level species share many similar behavior traits which are necessary for survival. Such traits include mating, aggression, foraging, social behavior and sleep patterns. This conservation of behavior across species has led biologists to hypothesize that these traits could possibly have similar, if not the same, genetic causes and pathways.
Studies conducted on the genomes of a wide range of organisms have revealed that many organisms have homologous genes, meaning that some genetic material has been conserved between species. If these organisms shared a common evolutionary ancestor, then this might imply that aspects of behavior can be inherited from previous generations, lending support to the genetic causes – as opposed to the environmental causes – of behavior. [ 29 ] Variations in personalities and behavioral traits seen amongst individuals of the same species could be explained by differing levels of expression of these genes and their corresponding proteins. [ 30 ] There is also research being conducted on how an individual's genes can cause varying levels of aggression and aggression control [ citation needed ]. Throughout the animal kingdom, varying styles, types and levels of aggression can be observed, leading scientists to believe that there might be a genetic contribution that has conserved this particular behavioral trait. [ 31 ] For some species, varying levels of aggression have indeed exhibited a direct correlation with a higher level of Darwinian fitness. [ 32 ] A great deal of research has been done on the effects of genes on the formation of the brain and the central nervous system. Many genes and proteins contribute to the formation and development of the central nervous system. Of particular importance are those that code for BMPs, BMP inhibitors and SHH. When expressed during early development, BMPs are responsible for the differentiation of epidermal cells from the ventral ectoderm. Inhibitors of BMPs, such as NOG and CHRD, promote differentiation of ectoderm cells into prospective neural tissue on the dorsal side. If any of these genes are improperly regulated, then proper formation and differentiation will not occur. BMP also plays a very important role in the patterning that occurs after the formation of the neural tube. Due to the graded response the cells of the neural tube have to BMP and Shh signaling, these pathways are in competition to determine the fate of preneural cells. BMP promotes dorsal differentiation of pre-neural cells into sensory neurons and Shh promotes ventral differentiation into motor neurons. Many other genes help to determine neural fate and proper development, including RELN, SOX9, WNT, Notch and Delta coding genes, HOX genes, and various cadherin coding genes such as CDH1 and CDH2. [ 33 ] Some recent research has shown that the level of gene expression changes drastically in the brain at different periods throughout the life cycle. For example, during prenatal development the amount of mRNA in the brain (an indicator of gene expression) is exceptionally high, and drops to a significantly lower level not long after birth. The only other point of the life cycle during which expression is this high is during the mid- to late-life period, between 50 and 70 years of age. While the increased expression during the prenatal period can be explained by the rapid growth and formation of the brain tissue, the reason behind the surge of late-life expression remains a topic of ongoing research. [ 34 ] Neurogenetics is a field that is rapidly expanding and growing. The current areas of research are very diverse in their focuses.
One area deals with molecular processes and the function of certain proteins, often in conjunction with cell signaling and neurotransmitter release, cell development and repair, or neuronal plasticity. Behavioral and cognitive areas of research continue to expand in an effort to pinpoint contributing genetic factors. As a result of the expanding neurogenetics field, a better understanding of specific neurological disorders and phenotypes has arisen with direct correlation to genetic mutations. With severe disorders such as epilepsy, brain malformations, or intellectual disability, a single gene or causative condition has been identified 60% of the time; however, the milder the intellectual handicap, the lower the chance that a specific genetic cause has been pinpointed. Autism, for example, is only linked to a specific, mutated gene about 15–20% of the time, while the mildest forms of intellectual disability are accounted for genetically less than 5% of the time. Research in neurogenetics has yielded some promising results, though, in that mutations at specific gene loci have been linked to harmful phenotypes and their resulting disorders. For instance, a frameshift mutation or a missense mutation at the DCX gene location causes a neuronal migration defect, also known as lissencephaly. Another example is the ROBO3 gene, where a mutation alters axon length, negatively impacting neuronal connections; horizontal gaze palsy with progressive scoliosis (HGPPS) accompanies a mutation here. [ 35 ] These are just a few examples of what current research in the field of neurogenetics has achieved. [ 36 ]
https://en.wikipedia.org/wiki/Neurogenetics
Neurogenomics is the study of how the genome of an organism influences the development and function of its nervous system. [ 1 ] This field intends to unite functional genomics and neurobiology in order to understand the nervous system as a whole from a genomic perspective. The nervous system in vertebrates is made up of two major types of cells – neuroglial cells and neurons. Hundreds of different types of neurons exist in humans, with varying functions – some of them process external stimuli; others generate a response to stimuli; others organize in centralized structures (brain, spinal ganglia) that are responsible for cognition, perception, and regulation of motor functions. Neurons in these centralized locations tend to organize in giant networks and communicate extensively with each other. Prior to the availability of expression arrays and DNA sequencing methodologies, researchers sought to understand the cellular behaviour of neurons (including synapse formation and neuronal development and regionalization in the human nervous system) in terms of the underlying molecular biology and biochemistry, without any understanding of the influence of a neuron's genome on its development and behaviour. As our understanding of the genome has expanded, the role of networks of gene interactions in the maintenance of neuronal function and behaviour has garnered interest in the neuroscience research community. Neurogenomics allows scientists to study the nervous system of organisms in the context of these underlying regulatory and transcriptional networks. This approach is distinct from neurogenetics, which emphasizes the role of single genes without a network-interaction context when studying the nervous system. [ 2 ] In 1999, Cirelli & Tononi [ 3 ] first reported the association of genome-wide brain gene expression profiling (using microarrays) with a behavioural phenotype in mice. Since then, global brain gene expression data, derived from microarrays, has been aligned to various behavioural quantitative trait loci (QTLs) and reported in several publications. [ 4 ] [ 5 ] [ 6 ] However, microarray-based approaches have their own problems that confound analysis – probe saturation can result in very small measurable variance of gene expression between genetically unique individuals, [ 7 ] and the presence of single nucleotide polymorphisms (SNPs) can result in hybridization artifacts. [ 8 ] [ 9 ] Furthermore, due to their probe-based nature, microarrays can miss many types of transcripts (ncRNAs, miRNAs, and mRNA isoforms). Probes can also have species-specific binding affinities that can confound comparative analysis. Notably, the association between behavioural patterns and high-penetrance single gene loci falls under the purview of neurogenetics research, wherein the focus is to identify a simple causative relationship between a single, high-penetrance gene and an observed function or behaviour. However, it has been shown that several neurological diseases tend to be polygenic, being influenced by multiple different genes and regulatory regions instead of one gene alone. There has hence been a shift from single gene approaches to network approaches for studying neurological development and diseases, a shift that has been greatly propelled by the advent of next generation sequencing methodologies.
Twin studies have revealed that schizophrenia, [ 10 ] bipolar disorder, [ 11 ] autism spectrum disorder (ASD), [ 12 ] [ 13 ] and attention deficit hyperactivity disorder [ 14 ] (ADHD) are highly heritable, genetically complex psychiatric disorders. However, linkage studies have largely failed at identifying causative variants for psychiatric disorders such as these, primarily because of their complex genetic architecture. Multiple low-penetrance risk variants can be aggregated in affected individuals and families, and sets of causative variants could vary across families. Studies along these lines have determined a polygenic basis for several psychiatric disorders. [ 15 ] Several independently occurring de novo mutations in patients with Alzheimer's disease have been found to disrupt a shared set of functional pathways involved with neuronal signalling, for example. [ 16 ] The quest to understand the causative biology of psychiatric disorders is hence greatly assisted by the ability to analyse entire genomes of affected and unaffected individuals in an unbiased manner. [ 17 ] With the availability of massively parallel next generation sequencing methodologies, scientists have been able to look beyond the probe-based captures of expressed genes. RNA-seq, for example, identifies 25–60% more expressed genes than microarrays do. In the upcoming field of neurogenomics, it is hoped that by understanding the genomic profiles of different parts of the brain, we might be able to improve our understanding of how the interactions between genes and pathways influence cellular function and development. This approach is expected to be able to identify the secondary gene networks that are disrupted in neurological disorders, subsequently assisting drug development stratagems for brain diseases. [ 18 ] The BRAIN Initiative, launched in 2013, for example, seeks to "inform the development of future treatments for brain disorders, including Alzheimer's disease, epilepsy, and traumatic brain injury". Rare variant association studies (RVAS) have highlighted the role of de novo mutations in several congenital and early-childhood-onset disorders like autism. [ 19 ] [ 20 ] Several of these protein-disrupting mutations could be identified only with the aid of whole genome sequencing efforts, and validated with RNA-Seq. Additionally, these mutations are not statistically enriched in individual genes, but rather exhibit patterns of statistical enrichment in groups of genes associated with networks regulating neurological development and maintenance. Such a discovery would have been impossible with prior gene-centric approaches (neurogenetics, behavioural neuroscience). Neurogenomics allows for a high-throughput, system-based approach for understanding the polygenic basis of neuropsychiatric disorders. [ 17 ] When autism was identified as a distinct biological disorder in the 1980s, researchers found that autistic individuals showed a brain growth abnormality in the cerebellum in their early developmental years. [ 21 ] Subsequent research has indicated that 90% of autistic children have a larger brain volume than their peers by 2 to 4 years of age, and show an expansion in the white and gray matter content in the cerebrum. [ 22 ] The white and gray matter in the cerebrum are associated with learning and cognition, respectively, and the formation of amyloid plaques in the white matter has been associated with Alzheimer's disease.
These findings highlighted the influence of structural variance in the brain on psychiatric disorders, and have motivated the use of imaging technologies to map regions of divergence between healthy and diseased brains. Furthermore, while it may not always be possible to retrieve biological specimens from different areas of live human brains, neuroimaging techniques offer a noninvasive means of understanding the biological basis of neurological disorders. It is hoped that an understanding of the localization patterns of different psychiatric diseases could in turn inform network analysis studies in neurogenomics. Structural magnetic resonance imaging (MRI) can be used to identify the structural composition of the brain. Particularly in the context of neurogenomics, MRI has played an extensive role in the study of Alzheimer's disease over the past four decades. It was initially used to rule out other causes of dementia, [ 16 ] but recent studies have indicated the presence of characteristic changes in patients with Alzheimer's disease. As a result, MRI scans are currently being used as a neuroimaging tool to help identify the temporal and spatial pathophysiology of Alzheimer's disease, such as specific cerebral alterations and amyloid imaging. [ 16 ] The ease and non-invasive nature of MRI scans have motivated research projects that trace the development and onset of psychiatric diseases in the brain. Alzheimer's disease has become a key candidate in this topographical approach to psychiatric diseases. For example, MRI scans are currently being used to track the resting and task-dependent functional profiles of brains in children with autosomal dominant Alzheimer's disease. [ 23 ] These studies have found indications of early-onset brain alterations in individuals at risk for Alzheimer's disease. [ 16 ] The Autism Center of Excellence at the University of California, San Diego, is also conducting MRI studies with children between 12 and 42 months, in the hopes of characterizing brain development abnormalities in children who present behavioural symptoms of autism. [ 24 ] Additional research has indicated that there are specific patterns of atrophy in the cerebrum (as a repercussion of neurodegeneration) in different neurological disorders and diseases. These disease-specific patterns of progression of atrophy can be identified with MRI scans, and provide a clinical phenotype context to neurogenomic research. The temporal information about disease progression provided by this approach can also potentially inform the interpretation of gene network-level perturbations in psychiatric diseases. [ 16 ] One prohibitive feature of second-generation sequencing methodologies is the upper limit on the genomic range accessible by mate-pairing. Optical mapping is an emerging methodology used to span large-scale variants that cannot usually be detected using paired-end reads. This approach has been successfully applied to detect structural variants in oligodendroglioma, a type of brain cancer. [ 25 ] Recent work has also highlighted the versatility of optical maps in improving existing genome assemblies. Chromosomal rearrangements, microdeletions, and large-scale translocations have been associated with impaired neurological and cognitive function, for example in hereditary neuropathy and neurofibromatosis. Optical mapping can significantly improve variant detection and inform gene interaction network models for the diseased state in neurological disorders.
Apart from neurological disorders, there are additional diseases that manifest in the brain and have formed exemplar use-case scenarios for the application of brain imaging in network analysis. In a classic example of imaging-genomic analyses, a research study in 2012 compared MRI scans and gene expression profiles of 104 glioma patients in order to distinguish treatment outcomes and identify novel targetable genomic pathways in glioblastoma multiforme (GBM). The researchers found two distinct groups of patients with significantly different organization of white matter (invasive vs non-invasive). Subsequent pathway analysis of the gene expression data indicated mitochondrial dysfunction as the top canonical pathway in an aggressive, low-mortality GBM phenotype. [ 26 ] Expansion of brain imaging approaches to other diseases can be used to rule out other medical illnesses while diagnosing psychiatric disorders, but cannot be used to inform the presence or absence of a psychiatric disorder. The current approaches to collecting gene expression data in human brains are to use either microarrays or RNA-seq. Currently, it is rare to gather "live" brain tissue – only when treatments involve brain surgery is there a chance that brain tissue is collected during the procedure, as is the case with epilepsy. Currently, gene expression data is usually collected on post mortem brains, and this is often a barrier to neurogenomics research in humans. [ 27 ] [ 28 ] The amount of time between death and the point at which data from the post mortem brain is collected is known as the post mortem interval (PMI). Since RNA degrades after death, a fresh brain is optimal – but not always available. This in turn can influence a variety of downstream analyses. The following factors should be considered when working with 'omics data collected from post-mortem brains: Differential diagnosis also remains a critical pre-analytical confounder of cohort-wide studies of spectrum neurological disorders. Specifically, this has been noted to be a problem for Alzheimer's disease and autism spectrum disorder studies. Furthermore, as our understanding of the diverse symptoms and genomic underpinnings of various neurogenomic disorders improves, the diagnostic criteria themselves undergo rearrangement and review. [ 32 ] Ongoing genomics research in neurological disorders tends to use animal models (and corresponding gene homologs) to understand the network interactions underlying a particular disorder, due to ethical issues surrounding the retrieval of biological specimens from live human brains. This, too, is not without its roadblocks. Neurogenomic research with a model organism is contingent on the availability of a fully sequenced and annotated reference genome. Additionally, the RNA profiles (miRNA, ncRNA, mRNA) of the model organism need to be well catalogued, and any inferences applied from them to humans must have a basis in functional or sequence homology. [ 33 ] Zebrafish development relies on gene networks that are highly conserved among all vertebrates. [ 34 ] Additionally, with an extremely well annotated set of 12,000 genes and 1,000 early development mutants that are visible in the optically clear zebrafish embryos and larvae, zebrafish offer a sophisticated system for mutagenesis and real-time imaging of developing pathologies. This early development model has been employed to study the nervous system at cellular resolution.
[ 35 ] [ 36 ] The zebrafish model system has already been used to study neuroregeneration [ 37 ] and severe polygenic human diseases like cancer and heart disease. [ 38 ] Several zebrafish mutants with behavioural variations in response to cocaine and alcohol dosage have been isolated and can also form a basis for studying the pathogenesis of behavioural disorders. [ 39 ] [ 40 ] Rodent models have been preeminent in studying human disorders. These models have been extensively annotated with gene homologs of several monogenic disorders in humans. Knockout studies of these homologs have led to expansion of our understanding of network interactions of genes in human tissues. For example, the FMR1 gene has been implicated in autism by a number of network studies. [ 41 ] [ 42 ] Using a knockout of FMR1 in mice creates the model for Fragile X syndrome, one of the disorders in the autism spectrum. [ 43 ] Mouse xenografts are particularly useful for drug discovery, [ 44 ] and were extremely important in the discovery of early antipsychotic drugs. The development of animal models for complex psychiatric diseases has also improved over the last few years. Rodent models have demonstrated behavioural phenotype changes resembling a positive schizophrenia state, either after genetic manipulation or after treatment with drugs that target the areas of the brain suspected to influence hyperactivity or neurodevelopment. [ 45 ] Interest has been generated in identifying the network disruptions mediated by these laboratory manipulations, and the collection of genomic data from rodent studies has contributed significantly to a better understanding of the genomics of psychiatric diseases. The first mouse brain transcriptome was generated in 2008. [ 46 ] Since then, extensive work has been done on building social-stress mouse models to study the pathway-level expression signatures of various psychiatric diseases. A recent paper simulated features of post-traumatic stress disorder (PTSD) in mice, and profiled the entire transcriptome of these mice. [ 47 ] The authors found differential regulation in many biological pathways, some of which were implicated in anxiety disorders (hyperactivity, fear response), mood disorders, and impaired cognition. These findings are backed by extensive transcriptomic analyses of anxiety disorders, and expression-level changes in biological pathways involved with fear learning and memory are thought to contribute to the behavioural manifestations of these disorders. [ 47 ] It is thought that functional enrichment of genes involved in long-term synaptic potentiation, depression, and plasticity has an important role to play in the acquisition, consolidation, and maintenance of traumatic memories underlying anxiety disorders. [ 47 ] [ 48 ] A common approach to using a mouse model is to apply an experimental treatment to a pregnant mouse in order to affect a whole litter. However, a key issue in the field is the treatment of litters in a statistical analysis. Most studies count the total number of offspring produced, as this appears to increase statistical power. However, the correct approach is to count by the number of litters and to normalize based on litter size. It was found that several autism studies incorrectly performed their statistical analyses based on the total number of offspring instead of the number of litters. [ 49 ]
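The litter-based analysis described above can be illustrated with a minimal sketch (a generic example with made-up numbers, not data or code from the cited studies); the litter, not the individual pup, is treated as the experimental unit:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements, one value per pup, grouped by litter.
# The prenatal treatment was applied to whole litters, so littermates
# are not independent observations.
treated_litters = [[3.1, 2.9, 3.4, 3.0], [2.7, 2.8, 3.1], [3.5, 3.3, 3.2, 3.6, 3.4]]
control_litters = [[2.1, 2.4, 2.3], [2.6, 2.2, 2.5, 2.4], [2.0, 2.3, 2.2]]

# Collapse each litter to a single summary value (its mean), so that
# litter size does not inflate the apparent sample size.
treated_means = [np.mean(litter) for litter in treated_litters]
control_means = [np.mean(litter) for litter in control_litters]

# Compare groups with n equal to the number of litters, not the number of pups.
t_stat, p_value = stats.ttest_ind(treated_means, control_means)
print(f"litter-level comparison: t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pooling all pups into one test would treat littermates as independent samples and overstate statistical power; mixed-effects models with litter as a random effect are a more general alternative to simple litter means.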
Several anxiety disorders such as post-traumatic stress disorder (PTSD) involve heterogeneous changes in several different brain regions, such as the hippocampus, amygdala, and nucleus accumbens. The cellular encoding of traumatic events and the behavioral responses triggered by such events have been shown to lie primarily in changes in signaling molecules associated with synaptic transmission. Global gene expression profiling of the various brain regions implicated in fear and anxiety processing, using mouse models, has led to the identification of temporally and spatially distinct sets of differentially expressed genes. Pathway analysis of these genes has indicated possible roles in neurogenesis and anxiety-related behavioural responses, alongside other functional and phenotypic observations. [ 47 ] Mouse models for brain research have contributed significantly to drug development and increased our understanding of the genomic underpinnings of several neurological diseases in the last generation. Chlorpromazine, the first antipsychotic drug (discovered in 1951), was identified as a viable treatment option after it was shown to suppress response to aversive stimuli in rats in a behavioural screen. The modelling and assessment of latent symptoms (thoughts, verbal learning, social interactions, cognitive behaviour) remains a challenge when using model organisms to study psychiatric disorders with a complex genetic pathology. For example, a given genotype and phenotype in a mouse model must imitate the genomic underpinnings of the phenotype observed in a human. This is a particularly crucial consideration in spectrum disorders such as autism. Autism is a disorder whose symptoms can be divided into two categories: (i) deficits of social interaction and (ii) repetitive behaviours and restricted interests. Since mice are among the most social of the rodents currently used as model organisms, mice are generally used to model human psychiatric disorders as closely as possible. Particularly for autism, the following work-arounds are currently in place to emulate human behavioural symptoms: In any of these experiments, the 'autistic' mice have a 'normal' socializing partner and the scientists observing the mice are kept unaware of ("blinded to") the genotypes of the mice. The gene expression profile of the central nervous system (CNS) is unique. Eighty percent of all human genes are expressed in the brain; 5,000 of these genes are solely expressed in the CNS. The human brain has the highest amount of gene expression of all studied mammalian brains. In comparison, tissues outside of the brain will have more similar expression levels in comparison to their mammalian counterparts. One source of the increased expression levels in the human brain is the non-protein-coding region of the genome. Numerous studies have indicated that the human brain has a higher level of expression in regulatory regions in comparison to other mammalian brains. There is also notable enrichment for more alternative splicing events in the human brain. [ 2 ] Gene expression profiles also vary within specific regions of the brain. A microarray study showed that the transcriptome profile of the CNS clusters together based on region. A different study characterized the regulation of gene expression across 10 different regions based on their eQTL signals. [ 52 ]
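The eQTL analysis mentioned just above pairs genetic variants with expression levels in a given brain region. A minimal sketch of a single variant–gene association test follows (a generic illustration with made-up numbers; real studies test millions of variant–gene pairs with covariate correction and multiple-testing adjustment):

```python
import numpy as np
from scipy import stats

# Hypothetical data across unrelated donors: genotype dosage at one variant
# (0, 1, or 2 copies of the alternate allele) and normalized expression of
# one gene measured in the same brain region.
genotype = np.array([0, 0, 1, 1, 1, 2, 0, 2, 1, 2, 0, 1])
expression = np.array([1.1, 0.9, 1.4, 1.6, 1.3, 2.0, 1.0, 2.2, 1.5, 1.9, 0.8, 1.4])

# Simple additive model: regress expression on allele dosage.
# A small p-value (after genome-wide correction) would mark the variant
# as an expression QTL for this gene in this region.
result = stats.linregress(genotype, expression)
print(f"slope (effect per allele) = {result.slope:.2f}, p = {result.pvalue:.4g}")
```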
The cause of the varying expression profiles relates to the function, neuron migration and cellular heterogeneity of the region. Even the three layers of the cerebral cortex have distinct expression profiles. [ 53 ] A study completed at Harvard Medical School in 2014 was able to identify developmental lineages stemming from single-base neuronal mutations. The researchers sequenced 36 neurons from the cerebral cortex of three normal individuals, and found that highly expressed genes, and neural-associated genes, were significantly enriched for single-neuron SNVs. These SNVs, in turn, were found to be correlated with chromatin markers of transcription from fetal brain. [ 54 ] Gene expression of the brain changes throughout the different phases of life. The most significant levels of expression are found during early development, with the rate of gene expression being highest during fetal development. This results from the rapid growth of neurons in the embryo. Neurons at this stage are undergoing neuronal differentiation, cell proliferation, migration events and dendritic and synaptic development. [ 55 ] Gene expression patterns shift closer towards specialized functional profiles during embryonic development; however, certain developmental steps are still ongoing at parturition. Consequently, gene expression profiles of the two brain hemispheres appear asymmetrical at birth. As development continues, the gene expression profiles become similar between the hemispheres. In a healthy adult, expression profiles stay relatively consistent from the late twenties into the late forties. From the fifties onwards, there is a significant decrease in the expression of genes important for regular function. Despite this, there is an increase in the diversity of genes being expressed across the brain. This age-related change in expression may be correlated with GC content. At later stages of life, there is an increase in the induction of low GC-content pivotal genes as well as an increase in the repression of high GC-content pivotal genes. [ 53 ] Another cause of the shift in gene diversity is the accumulation of mutations and DNA damage. Gene expression studies show that the genes that accrue these age-related mutations are consistent between individuals in the aging population. Genes that are highly expressed in development decrease significantly at late stages of life, whereas genes that are highly repressed in development increase significantly at late stages. [ 54 ] The evolution of Homo sapiens since the divergence from the common primate ancestor has shown a marked expansion in the size and complexity of the brain, especially in the cerebral cortex. [ 56 ] [ 57 ] [ 58 ] [ 59 ] In comparison to other primates, the human cerebral cortex has a larger surface area but differs only slightly in thickness. Many large-scale studies of the differences between the human brain and those of other species have indicated expansion of gene families and changes in alternative splicing to be responsible for the corresponding increase in cognitive capabilities and cooperative behaviour in humans. [ 60 ] [ 61 ] However, we have yet to determine the exact phenotypic consequences of all these changes. One difficulty is that only primates have developed subdivisions in their cerebral cortex, making human-specific neurological problems difficult to model in rodents.
[ 58 ] [ 62 ] [ 63 ] Sequence data is used to understand the evolutionary genetic changes which led to the development of the human CNS. We can then understand how neurological phenotypes differ between species. Comparative genomics entails comparison of sequence data across a phylogeny to pinpoint the genotypic changes that occur within specific lineages, and to understand how these changes might have arisen. The increase in high-quality mammalian reference sequences generally makes comparative analysis better, as it increases statistical power. However, an increase in the number of species in a phylogeny does risk adding unnecessary noise, as the alignments of orthologous sequences usually decrease in quality. Furthermore, different classes of species will have significant differences in their phenotypes. [ 64 ] Despite this, comparative genomics has allowed us to connect the genetic changes found in a phylogeny to specific pathways. In order to determine this, lineages are tested for the functional changes that accrue over time. This is often measured as the ratio of nonsynonymous substitutions to synonymous substitutions, or the dN/dS ratio (sometimes abbreviated as ω). When the dN/dS ratio is greater than 1, this indicates positive selection. A dN/dS ratio equal to 1 is evidence of no selective pressure. A dN/dS ratio less than 1 indicates negative selection (the decision rule is summarized in the short formula given at the end of this passage). For example, the conserved regions of the genome will generally have a dN/dS ratio of less than 1, since any changes to those positions will likely be detrimental. [ 65 ] Of the genes expressed in the human brain, it is estimated that 342 of them have a dN/dS ratio greater than 1 in the human lineage in comparison to other primate lineages. [ 64 ] This indicates positive selection on the human lineage for brain phenotypes. Understanding the significance of the positive selection is generally the next step. For example, ASPM, CDK5RAP2 and NIN are genes that are positively selected for on the human lineage and have been directly correlated with brain size. This finding may help elucidate why human brains are larger than other mammalian brains. [ 65 ] It is thought that gene expression changes, being the ultimate readout of any genetic change, are a good proxy for understanding phenotypic differences within biological samples. Comparative studies have revealed a range of differences in transcriptional controls between primates and rodents. For example, the gene CNTNAP2 is specifically enriched in the prefrontal cortex. The mouse homolog of CNTNAP2 is not expressed in the mouse brain. CNTNAP2 has been implicated in cognitive functions of language as well as neurodevelopmental disorders such as autism spectrum disorder. This suggests that the control of expression plays a significant role in the development of unique human cognitive function. As a consequence, a number of studies have investigated brain-specific enhancers. Transcription factors such as SOX5 have been found to be positively selected for on the human lineage. Gene expression studies in humans, chimpanzees and rhesus macaques have identified human-specific co-expression networks and an elevation in gene expression in the human cortex in comparison to other primates. [ 66 ] Neurogenomic disorders manifest themselves as neurological disorders with a complex genetic architecture and a non-Mendelian-like pattern of inheritance. [ 18 ] Some examples of these disorders include bipolar disorder and schizophrenia. [ 15 ]
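The dN/dS (ω) decision rule described earlier in this passage can be summarized compactly (a standard textbook formulation, not specific to the cited studies):

```latex
\omega = \frac{d_N}{d_S}, \qquad
\begin{cases}
\omega > 1 & \text{positive (adaptive) selection} \\
\omega = 1 & \text{neutral evolution} \\
\omega < 1 & \text{negative (purifying) selection}
\end{cases}
```

Here d_N is the number of nonsynonymous substitutions per nonsynonymous site and d_S the number of synonymous substitutions per synonymous site along a lineage.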
Several genes may be involved in the manifestation of the disorder, and mutations in such disorders are generally rare and de novo. Hence it becomes extremely unlikely to observe the same (potentially causative) variant in two unrelated individuals affected by the same neurogenomic disorder. [ 15 ] Ongoing research has implicated several de novo exonic variations and structural variations in autism spectrum disorder (ASD), for example. [ 15 ] The allelic spectrum of the rare and common variants in neurogenomic disorders therefore necessitates large cohort studies in order to effectively exclude low-effect variants and identify the overarching pathways frequently mutated in the different disorders, rather than specific genes and specific high-penetrance mutations. Whole genome sequencing (WGS) and whole exome sequencing (WES) have been used in genome-wide association studies (GWAS) to characterize genetic variants associated with neurogenomic disorders. However, the impact of these variants cannot always be verified because of the non-Mendelian inheritance patterns observed in several of these disorders. [ 15 ] Another prohibitive feature in network analysis is the lack of large-scale datasets for many psychiatric (neurogenomic) diseases. Since several diseases with neurogenomic underpinnings tend to have a polygenic basis, several nonspecific, rare, and partially penetrant de novo mutations in different patients can contribute to the same observed range of phenotypes, as is the case with autism spectrum disorder and schizophrenia. [ 67 ] Extensive research in alcohol dependence has also highlighted the need for high-quality genomic profiling of large sample sets [ 68 ] [ 69 ] when studying polygenic, spectrum disorders. The 1000 Genomes Project was a successful demonstration of how a concerted effort to acquire representative genomic data from a broad spectrum of humans can result in the identification of actionable biological insights for different diseases. [ 70 ] However, a large-scale initiative like this is still lacking in the field of neurogenomic disorders specifically. One major GWAS study identified 13 new risk loci for schizophrenia. [ 71 ] Studying the impact of these candidates would ideally demonstrate a schizophrenia phenotype in animal models, which is usually difficult to observe because the disorder manifests in latent personality traits. This approach is able to determine the molecular impact of a candidate gene. Ideally the candidate genes would have a neurological impact, which in turn would suggest that they play a role in the neurological disorder. For example, in the aforementioned schizophrenia GWAS study, Ripke and colleagues [ 71 ] determined that these candidate genes were all involved in calcium signalling. Alternatively, one can study these variants in model organisms in the context of affected neurological function. It is important to note that the high-penetrance variants of these disorders tend to be de novo mutations. A further complication in studying neurogenomic disorders is the heterogeneous nature of the disorders. In many of these disorders, the mutations observed from case to case do not stay consistent. In autism, an affected individual may carry a large number of deleterious mutations in gene X. A different affected individual may not have any significant mutations in gene X but may carry a large number of mutations in gene Y.
The alternative is to determine whether gene X and gene Y impact the same biochemical pathway, one that influences a neurological function. A bioinformatics network analysis is one approach to this problem. Network analysis methodologies provide a generalized, systems-level overview of a molecular pathway. One final complication to consider is the comorbidity of neurogenomic disorders. Several disorders, especially at the more severe ends of the spectrum, tend to be comorbid with each other. For example, more severe cases of ASD tend to be associated with intellectual disability (ID). This raises the question of whether there are true, unique ASD genes and unique ID genes, or simply genes associated with neurological function that can be mutated into an abnormal phenotype. One confounding factor may be the actual diagnostic categories and methods for the spectrum disorders, as symptoms of severe disorders may be similar. One study investigated the comorbid symptoms between groups of ID and ASD, and found no significant difference between the symptoms of ID children, ASD children with ID, and ASD children without ID. Future research may help establish a more stringent genetic basis for the diagnoses of these disorders. The main goal of network analysis in neurogenomics is to identify statistically significant nonrandom associations between genes that contain risk variants. [ 15 ] While several algorithmic implementations of this approach already exist, [ 72 ] [ 73 ] the general steps for network analysis remain the same. The underlying principle of this approach is that genes that cluster together will also jointly affect the same molecular pathway (a minimal sketch of this clustering idea is given below, after this passage). Again, the pathways would ideally relate to a neurological function. The candidate genes can then be used to prioritize variants for wet lab validation. Historically, because behavioural stimulation manifests as a symptom in several of the neurogenomic disorders, therapies relied mostly on antipsychotics or antidepressants. These classes of medications suppress common symptoms of the disorders, but with questionable efficacy. The biggest barrier to neuropharmacogenomic research has been cohort size. Given newly available large-cohort sequencing data, there has been a recent push to expand therapeutic options. The heterogeneous nature of neurological diseases is the key motivation for personalized medicine approaches to their therapies. It is rare to find single high-penetrance causative genes in neurological diseases. The genomic profiles understandably vary between cases, and logically, the therapies would need to vary between cases. Further complicating the issue is that many of these disorders are spectrum disorders; their genetic etiology varies within this spectrum. For example, severe ASD is associated with high-penetrance de novo mutations, whereas milder forms of ASD are usually associated with a mixture of common variants. The key issue then is the translation of these newly identified genetic variants (from copy number variant studies, candidate gene sequencing and high-throughput sequencing technologies) into an intervention for patients with neurogenomic disorders. One aspect is whether the neurological disorder is medically actionable (i.e. whether there is a simple metabolic pathway that a therapy can target). For example, specific cases of ASD have been associated with microdeletions in the TMLHE gene. This gene codes for an enzyme of carnitine biosynthesis. Supplements to elevate carnitine levels appeared to alleviate certain ASD symptoms, but the study was confounded by many influencing factors.
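The clustering principle referred to above can be illustrated with a minimal co-expression sketch (a generic toy example; the gene names, data and threshold are made up, and real analyses use genome-wide data, curated interaction databases and formal module-detection methods):

```python
import numpy as np
import networkx as nx

# Toy expression matrix: rows are genes carrying risk variants, columns are samples.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E"]
rng = np.random.default_rng(0)
expression = rng.normal(size=(len(genes), 20))
expression[1] = expression[0] + rng.normal(scale=0.1, size=20)  # GENE_B tracks GENE_A
expression[3] = expression[2] + rng.normal(scale=0.1, size=20)  # GENE_D tracks GENE_C

# Build a co-expression graph: connect gene pairs whose absolute Pearson
# correlation exceeds a threshold.
corr = np.corrcoef(expression)
graph = nx.Graph()
graph.add_nodes_from(genes)
threshold = 0.8
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        if abs(corr[i, j]) > threshold:
            graph.add_edge(genes[i], genes[j])

# Genes that cluster together are candidate members of a shared pathway
# and can be prioritized for wet-lab validation.
for module in nx.connected_components(graph):
    if len(module) > 1:
        print("candidate module:", sorted(module))
```

In practice, the simple correlation threshold is typically replaced by weighted co-expression modules or protein–protein interaction networks, followed by statistical tests for nonrandom enrichment of risk variants within each module.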
As mentioned earlier, using a gene network approach will help identify relevant pathways of interest. Many neuropharmacogenomic approaches have focused on targeting the downstream products of these pathways. [ 75 ] [ 76 ] Studies in animal models of several brain diseases have shown that the blood–brain barrier (BBB) undergoes modification at many levels; for example, the surface glycoprotein composition can influence the types of HIV-1 strains transported by the BBB. The BBB has been found to be key in the onset of Alzheimer's disease. [ 77 ] It is extremely difficult, however, to study this in humans due to obvious restrictions on accessing the brain and retrieving biological specimens for sequencing or morphological analysis. Mouse models of the BBB and models of disease states have served well in conceptualizing the BBB as a regulatory interface between disease and good health in the brain. As noted above, the heterogeneous nature of neurological diseases motivates personalized medicine approaches to their therapies. [ 75 ] Genomic samples of individual patients could be used to identify predictive factors, or to better understand the specific prognosis of a neurogenomic disease, and this information could be used to guide treatment options. [ 78 ] While there is a clear clinical utility to this approach, its adoption is still nonexistent. There are various issues prohibiting the application of personalized genomics to the assessment, diagnosis, and treatment of psychiatric disorders.
https://en.wikipedia.org/wiki/Neurogenomics
Neuroheuristics (or neuristics) studies the dynamic relations within neuroscientific knowledge, using a transdisciplinary approach. It was proposed by Alessandro Villa in 2000. [ 1 ] The word comes from the Greek νεύρον (neuron, which refers to the nerve cell [ 2 ] ) and εύρισκω ("heuriskein", heuristic, which refers to problem-solving procedures characterized by informal, intuitive and speculative features [ 3 ] ). Neuroheuristics defines a scientific paradigm aimed at developing strategies for understanding brain and mind by following the successive problems that emerge from transdisciplinary studies, including philosophy, psychology, neuroscience, pharmacology, physics, artificial intelligence, engineering, computer science, economics and mathematics. The research framework introduced by the neuroheuristic paradigm appears to be an essential step for the investigation of the information processing carried out by the brain, because that processing is the outcome of nature and nurture, at the crossing of top-down and bottom-up design. Neurobiologists apply a bottom-up research strategy in their studies. This strategy has been able to describe a simple organism's nervous system, such as that of Caenorhabditis elegans. [ 4 ] [ 5 ] [ 6 ] [ 7 ] However, it would be impossible to simultaneously examine all neurons and all variables, which limits the value that experimentation using this method can provide. The top-down strategy, with the assistance of black box theory, appears easier to carry out, but is inappropriate for understanding the mechanisms which coordinate neurons. The paradigm offers a needed and possibly distinct approach to the study of brain and mind. [ 8 ] [ 9 ] [ 10 ] [ 11 ] In this framework, a result cannot be simply positive or negative because the process itself cannot be reduced to proficiency as such. [ clarification needed ] Dynamics is an essential feature of the neuroheuristic paradigm, but it is more than just the neurobiological facet of holism as opposed to reductionism. [ clarification needed ]
https://en.wikipedia.org/wiki/Neuroheuristics
Neuroimmunology is a field combining neuroscience, the study of the nervous system, and immunology, the study of the immune system. Neuroimmunologists seek to better understand the interactions of these two complex systems during development, homeostasis, and response to injuries. A long-term goal of this rapidly developing research area is to further develop our understanding of the pathology of certain neurological diseases, some of which have no clear etiology. In doing so, neuroimmunology contributes to the development of new pharmacological treatments for several neurological conditions. Many types of interactions involve both the nervous and immune systems, including the physiological functioning of the two systems in health and disease, malfunction of either or both systems that leads to disorders, and the physical, chemical, and environmental stressors that affect the two systems on a daily basis. Neural targets that control thermogenesis, behavior, sleep, and mood can be affected by pro-inflammatory cytokines, which are released by activated macrophages and monocytes during infection. Within the central nervous system, production of cytokines has been detected as a result of brain injury, during viral and bacterial infections, and in neurodegenerative processes. From the US National Institutes of Health: [ 1 ] "Despite the brain's status as an immune privileged site, an extensive bi-directional communication takes place between the nervous and the immune system in both health and disease. Immune cells and neuroimmune molecules such as cytokines, chemokines, and growth factors modulate brain function through multiple signaling pathways throughout the lifespan. Immunological, physiological and psychological stressors engage cytokines and other immune molecules as mediators of interactions with neuroendocrine, neuropeptide, and neurotransmitter systems. For example, brain cytokine levels increase following stress exposure, while treatments designed to alleviate stress reverse this effect. "Neuroinflammation and neuroimmune activation have been shown to play a role in the etiology of a variety of neurological disorders such as stroke, Parkinson's and Alzheimer's disease, multiple sclerosis, pain, and AIDS-associated dementia. However, cytokines and chemokines also modulate CNS function in the absence of overt immunological, physiological, or psychological challenges. For example, cytokines and cytokine receptor inhibitors affect cognitive and emotional processes. Recent evidence suggests that immune molecules modulate brain systems differently across the lifespan. Cytokines and chemokines regulate neurotrophins and other molecules critical to neurodevelopmental processes, and exposure to certain neuroimmune challenges early in life affects brain development. In adults, cytokines and chemokines affect synaptic plasticity and other ongoing neural processes, which may change in aging brains. Finally, interactions of immune molecules with the hypothalamic-pituitary-gonadal system indicate that sex differences are a significant factor determining the impact of neuroimmune influences on brain function and behavior." Recent research demonstrates that reduction of lymphocyte populations can impair cognition in mice, and that restoration of lymphocytes restores cognitive abilities.
[ 2 ] Epigenetic medicine encompasses a new branch of neuroimmunology that studies the brain and behavior, and has provided insights into the mechanisms underlying brain development, evolution, neuronal and network plasticity and homeostasis, senescence, the etiology of diverse neurological diseases and neural regenerative processes. It is leading to the discovery of environmental stressors that dictate initiation of specific neurological disorders and specific disease biomarkers. The goal is to "promote accelerated recovery of impaired and seemingly irrevocably lost cognitive, behavioral, sensorimotor functions through epigenetic reprogramming of endogenous regional neural stem cells". [ 3 ] Several studies have shown that regulation of stem cell maintenance and the subsequent fate determinations are quite complex. The complexity of determining the fate of a stem cell can be best understood by knowing the "circuitry employed to orchestrate stem cell maintenance and progressive neural fate decisions". [ 4 ] Neural fate decisions include the utilization of multiple neurotransmitter signal pathways along with the use of epigenetic regulators. The advancement of neuronal stem cell differentiation and glial fate decisions must be orchestrated in a timely manner to determine subtype specification and subsequent maturation processes, including myelination. [ 5 ] Neurodevelopmental disorders result from impairments of the growth and development of the brain and nervous system. Examples of these disorders include Asperger syndrome, traumatic brain injury, communication, speech and language disorders, genetic disorders such as fragile-X syndrome, Down syndrome, ADHD, epilepsy, and fetal alcohol syndrome. Studies have shown that autism spectrum disorders (ASDs) may present due to basic disorders of epigenetic regulation. [ 6 ] Other neuroimmunological research has shown that deregulation of correlated epigenetic processes in ASDs can alter gene expression and brain function without causing classical genetic lesions, which are more easily attributable to a cause-and-effect relationship. [ 7 ] These findings are some of the numerous recent discoveries in previously unknown areas of gene misexpression. Increasing evidence suggests that neurodegenerative diseases are mediated by erroneous epigenetic mechanisms. Neurodegenerative diseases include Huntington's disease and Alzheimer's disease. Neuroimmunological research into these diseases has yielded evidence including the absence of simple Mendelian inheritance patterns, global transcriptional dysregulation, multiple types of pathogenic RNA alterations, and many more. [ 8 ] In one experiment, treatment of Huntington's disease models with inhibitors of histone deacetylases (HDACs), enzymes that remove acetyl groups from lysine, and with DNA/RNA-binding anthracyclines that affect nucleosome positioning, showed positive effects on behavioral measures, neuroprotection, nucleosome remodeling, and associated chromatin dynamics. [ 9 ] Another new finding on neurodegenerative diseases is that overexpression of HDAC6 suppresses the neurodegenerative phenotype associated with Alzheimer's disease pathology in associated animal models. [ 10 ] Other findings show that additional mechanisms are responsible for the "underlying transcriptional and post-transcriptional dysregulation and complex chromatin abnormalities in Huntington's disease". [ 11 ] The nervous and immune systems have many interactions that dictate overall body health.
The nervous system is under constant monitoring from both the adaptive and innate immune system. Throughout development and adult life, the immune system detects and responds to changes in cell identity and neural connectivity. [ 12 ] Deregulation of both innate and adaptive immune responses, impairment of crosstalk between these two systems, as well as alterations in the deployment of innate immune mechanisms can predispose the central nervous system (CNS) to autoimmunity and neurodegeneration. [ 13 ] Other evidence has shown that the development and deployment of the innate and adaptive immune systems in response to stressors on functional integrity at the cellular and systemic level, and the evolution of autoimmunity, are mediated by epigenetic mechanisms. [ 14 ] Autoimmunity has been increasingly linked to targeted deregulation of epigenetic mechanisms, and therefore, use of epigenetic therapeutic agents may help reverse complex pathogenic processes. [ 15 ] Multiple sclerosis (MS) is one type of neuroimmunological disorder that affects many people. MS features CNS inflammation, immune-mediated demyelination and neurodegeneration. Myalgic encephalomyelitis (ME, also known as chronic fatigue syndrome) is a multi-system disease that causes dysfunction of neurological, immune, endocrine and energy-metabolism systems. Though many patients show neuroimmunological degeneration, the exact causes of ME/CFS are unknown. Symptoms of ME/CFS include a significantly lowered ability to participate in regular activities, difficulty standing or sitting upright, inability to talk, sleep problems, excessive sensitivity to light, sound or touch, and/or thinking and memory problems (impaired cognitive functioning). Other common symptoms are muscle or joint pain, sore throat or night sweats. There is no cure, but symptoms may be treated. Patients who are sensitive to mold may show improvement in symptoms after moving to drier areas. Some patients have relatively mild ME, whereas others may be bedridden for life. [ 16 ] PTSD has been linked to neuroimmune dysfunction, with this being greater in individuals with worse anhedonia. [ 17 ] The interaction of the CNS and immune system is fairly well known. Vagus nerve stimulation has been found to attenuate burn-induced organ dysfunction and serum cytokine levels. Burns generally induce abacterial cytokine generation, and parasympathetic stimulation after burns may decrease cardiodepressive mediator generation. Multiple groups have produced experimental evidence supporting proinflammatory cytokine production as the central element of the burn-induced stress response. [ 18 ] Still other groups have shown that vagus nerve signaling has a prominent impact on various inflammatory pathologies. These studies have laid the groundwork for inquiries into whether vagus nerve stimulation may influence postburn immunological responses and thus could ultimately be used to limit organ damage and failure from burn-induced stress. Basic understanding of neuroimmunological diseases has changed significantly during the last ten years. New data broadening the understanding of new treatment concepts have been obtained for a large number of neuroimmunological diseases, none more so than multiple sclerosis, since many efforts have been undertaken recently to clarify the complexity of the pathomechanisms of this disease. Accumulating evidence from animal studies suggests that some aspects of depression and fatigue in MS may be linked to inflammatory markers.
[ 19 ] Studies have demonstrated that Toll-like receptor 4 (TLR4) is critically involved in neuroinflammation and T cell recruitment in the brain, contributing to exacerbation of brain injury. [ 20 ] Research into the link between smell, depressive behavior, and autoimmunity has turned up interesting findings, including that inflammation is common in all of the diseases analyzed, that depressive symptoms appear early in the course of most diseases, that smell impairment is also apparent early in the development of neurological conditions, and that all of the diseases involve the amygdala and hippocampus. A better understanding of how the immune system functions and what factors contribute to responses is being heavily investigated along with these associations. Neuroimmunology is also an important topic to consider during the design of neural implants. Neural implants are being used to treat many diseases, and it is key that their design and surface chemistry do not elicit an immune response. The nervous system and immune system require the appropriate degrees of cellular differentiation, organizational integrity, and neural network connectivity. These operational features of the brain and nervous system may make signaling difficult to duplicate in severely diseased scenarios. There are currently three classes of therapies that have been utilized in both animal models of disease and in human clinical trials. These three classes include DNA methylation inhibitors, HDAC inhibitors, and RNA-based approaches. DNA methylation inhibitors are used to activate previously silenced genes. HDAC inhibitors are a class of agents with a broad set of biochemical effects that can promote DNA demethylation and act in synergy with other therapeutic agents. The final therapy uses RNA-based approaches to enhance stability, specificity, and efficacy, especially in diseases that are caused by RNA alterations. Emerging concepts concerning the complexity and versatility of the epigenome may suggest ways to target genome-wide cellular processes. Other studies suggest that seminal regulator targets may eventually be identified, allowing alterations to the massive epigenetic reprogramming that occurs during gametogenesis. Many future treatments may extend beyond being purely therapeutic and may be preventive, perhaps in the form of a vaccine. Newer high-throughput technologies, when combined with advances in imaging modalities such as in vivo optical nanotechnologies, may give rise to even greater knowledge of genomic architecture, nuclear organization, and the interplay between the immune and nervous systems. [ 21 ]
https://en.wikipedia.org/wiki/Neuroimmunology
Neuroinclusive design or neuro-inclusive design is a human-centered approach to designing products, services, or environments in a way that enables individuals of all sensory profiles to coexist within the same space. Neuroinclusive design creates spaces and experiences that are accessible and user-friendly for everyone across the entire " neurodiversity " spectrum. [ 1 ] [ 2 ] A key criticism in Human-Computer Interaction (HCI) is that research often excludes neurodivergent people from being actively involved in the design process. Instead of highlighting their strengths and unique experiences, the technologies typically focus on perceived deficits and behaviors deemed disruptive by non-autistic standards. Consequently, the outcomes overlook the emotional and practical needs of neurodivergent users and perpetuate harmful stereotypes and stigmas. [ 3 ] This design-related article is a stub. You can help Wikipedia by expanding it. This psychology-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Neuroinclusive_design
Neurological reparative therapy ( NRT ) is a new model of treatment synthesized from a compilation of literature and research on how to better the lives of individuals who have a wide range of mental, emotional, and behavioral disturbances – particularly children and adolescents. Although the term "neurological reparative therapy" is new, the foundation of this model is not. NRT is not a prescriptive model, in that it does not outline how the therapy is to be conducted. A preponderance of evidence shows that the most important component of successful treatment is not the approach or techniques, but the professional caring and competence of the therapist engaging with the client in a supportive, cathartic, and healing manner. The NRT roadmap provides the best route to healthy brain development, attachment, and resiliency, but relies on the helper to use his or her own skills, experience, and techniques to take the journey. Neurological reparative therapy is not a program or a technique, but is better explained as a roadmap describing the journey of healing and repair of the mental processes preventing the individual from achieving personal goals and personal contentment. NRT outlines the goal and ultimate destination of treatment, and provides a roadmap with the mileposts along the way. NRT does not prescribe the methods, the techniques, or the approaches that are the vehicle for the client and therapist to travel the road together. NRT is specifically designed to optimize the skills and abilities of the therapist by taking advantage of the therapist's knowledge, training, and experience in joining with the client on the healing journey. The model outlines what needs to be addressed in treatment – the therapy components critical to internal change – and points out that "knowing where you are going" is the key to arriving successfully at the destination. Although the term neurological reparative therapy is relatively new, the history of the model goes back many decades. NRT has its foundation in the integrated treatment of extreme mental, emotional, and behavioral disturbances in very young children. The term was first used in the writings of a psychologist, D.L. Ziegler, working with children who had experienced severe trauma and abuse. [ 1 ] Ziegler found that integrating brain research, attachment theory, and resiliency work within trauma treatment resulted in an unexpected outcome – significant functional improvement following the cessation of treatment. This led him to the conclusion that the change was internal (i.e. neurological) in nature, and was not conditional upon a place, an individual, or the environmental circumstances. After studying years of practice-based evidence, it was found that the treatment orientation or approach of the therapist was not the determining factor in positive outcomes, but rather the integration of components of positive brain change (attachment, resiliency, and trauma treatment within the framework of how the brain functions) that frequently resulted in change "from the inside out". The four pillars of neurological reparative therapy are brain development, attachment, resiliency, and trauma treatment. The integration of all these components provides the synergy that leads to optimal positive brain change, which provides lasting change. Of these pillars, the research and literature on attachment is the oldest of the four, going back nearly a half-century.
The ability of an individual to successfully bond and attach to others has been growing in importance as new information is learned about how the human body – and specifically the brain – modulates and copes with stress. There is now a resurgence of focus on attachment in the treatment of individuals, couples, and families. Resiliency research is showing that it is not the negative experiences everyone encounters in life that are key, but rather the individual's ability to cope with and bounce back from adverse situations. Resiliency is growing in importance not only in mental health settings but also in educational, occupational, and medical areas. The pillar that has seen the greatest influx of new information over the last two decades is brain development. The explosion of new information has been facilitated by sophisticated brain scanning technology that can determine in real time how the brain processes and responds to interventions. At this point, our knowledge of the brain and how it works is doubling every ten years. NRT works to integrate new information on the brain into the healing process. While the field of trauma treatment is as old as attachment, it takes significant advantage of the new research on brain development to better target the brain with positive change. For damaged individuals, trauma treatment is the container, while attachment, resiliency, and brain development are the included ingredients of the NRT process. On the continuum of evidence-based practices and practice-based evidence, the neurological reparative therapy model is in the middle. Causal research has limitations when it comes to isolating variables in an intervention with integrated components. NRT allows for the use of any and all evidence-based practices as the vehicle of treatment, since it does not restrict or specify any therapeutic approach or method. NRT places the responsibility for how the therapeutic process is conducted on the therapist. Evidence-based practices should be a major consideration of all treatment. Because neurological reparative therapy is an integration of an amalgam of components and a wide range of methods, the limitations of causal research make it difficult to determine whether one ingredient of the process is more significant than the others. NRT primarily relies on the research and literature of the four pillars that form the conceptual foundation of the model. There is significant literature and research concerning how the human brain functions. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Additional research has been conducted on the neurobiological process of attachment. [ 6 ] [ 7 ] The impact of traumatic experience on the brain has abundant literature and research, as well. [ 5 ] [ 8 ] [ 9 ] The concept of resiliency is built upon a number of skills and attitudes, and has been expanded by the recent interest in psychology in finding out what is right with individuals rather than what is wrong with them. The foundation of resiliency is built upon believing in yourself, having personal confidence, being able to connect in a positive way with others, and allowing others to support you. The area of resiliency has strong research support. [ 10 ] [ 11 ] [ 12 ] [ 13 ] The ability to bounce back from adversity has been found to be a key element of personal contentment. [ 14 ] [ 15 ] [ 16 ] Attachment may have the greatest amount of literature developed over the longest period of time.
The early work of John Bowlby, and the subsequent work of Mary Ainsworth and Mary Main, represent the foundations of attachment theory. Research has supported the link between the initial attachment of the child and subsequent social success or failure throughout the developmental years and into adulthood. [ 17 ] [ 18 ] [ 19 ] Attachment has been found to be a major aspect of the development of the brain related to social adjustment, mood control, drive, responsibility, and defining the personality. [ 20 ] As the brain matures, the emotional and sensory areas of the brain develop based upon the quality of attachment. [ 2 ] [ 6 ] A poor attachment early in life has been associated with a wide range of problems, including poor self-regulation, [ 9 ] poor coping, [ 6 ] undeveloped resiliency, [ 21 ] abnormal social and moral development, [ 22 ] and an increased risk of psychopathology. [ 23 ] Trauma treatment has a significant research base as well. A classic text in the field is Traumatic Stress: The Effects of Overwhelming Experience on Mind, Body, and Society. [ 24 ] For a number of decades, the connection between trauma and medical problems, [ 25 ] psychological problems, [ 26 ] and psychiatric difficulties [ 27 ] has been documented. The impact of trauma on the ability of individuals to handle stress has also been shown repeatedly. [ 6 ] [ 28 ] [ 29 ] Practice-based evidence has been collected in the outcomes research of Jasper Mountain with regard to the use of NRT in seriously traumatized young children. [ 30 ]
https://en.wikipedia.org/wiki/Neurological_reparative_therapy
Neurolysis is the application of physical or chemical agents to a nerve in order to cause a temporary degeneration of targeted nerve fibers. When the nerve fibers degenerate, an interruption in the transmission of nerve signals occurs. In the medical field, neurolysis is commonly used to alleviate pain, such as in people with various forms of cancer, chronic osteoarthritis or spasticity. [ 1 ] [ 2 ] Different types of neurolysis include celiac plexus neurolysis, endoscopic ultrasound guided neurolysis, and lumbar sympathetic neurolysis. [ 1 ] Chemodenervation and nerve blocks are other forms of neurolysis. [ 1 ] Neurotomy may refer to the application of heat (as in radiofrequency nerve lesioning), chemical ablation, or freezing of sensory nerves with the intent of a longer-term (months or years) ablation or partial denervation of one or more peripheral nerves, usually to relieve chronic pain. [ 1 ] [ 3 ] [ 4 ] Early neurolysis techniques were used in the 1900s for pain relief by the surgeon-neurologist Mathieu Jaboulay for vasospastic disorders, such as arterial occlusive disease, before the introduction of endovascular procedures. [ 5 ] Neurolysis is a chemical ablation technique that is used to alleviate pain. Neurolysis is used when the disease has progressed to a point where other pain treatments are deemed ineffective. [ 6 ] [ 2 ] A neurolytic agent such as alcohol, phenol, or glycerol is typically injected into specific sensory nerves assessed to be transmitting pain signals. Chemical neurolysis is used to denervate specific sensory nerves, reducing pain signals. [ 1 ] [ 5 ] The effects generally last for three to six months. [ 6 ] [ 2 ] Neurotomy is a nerve block procedure performed in cases such as severe knee arthritis, usually as an outpatient procedure. [ 1 ] [ 3 ] [ 4 ] The term neurotomy may be used as a synonym for neurectomy – the surgical cutting or removal of nervous tissue. [ 7 ] Radiofrequency ablation (RFA) uses heat generated from radio waves to disrupt sensory nerve function in anatomical structures transmitting pain sensation to the brain, such as from the back, hip, neck, or knee. [ 1 ] [ 3 ] [ 4 ] [ 8 ] RFA is an alternative for eligible people who have comorbidities or do not want to undergo more extensive surgery, such as hip or knee arthroplasty. [ 3 ] [ 4 ] [ 8 ] Clinical studies from 2023-25 reported that local injection of phenol was effective as a neurolytic treatment of sensory knee nerves to relieve chronic pain associated with osteoarthritis. [ 2 ] [ 9 ] Peripheral nerves move (glide) across bones and muscles. A peripheral nerve can be trapped by scarring of surrounding tissue, which may lead to nerve damage or pain. An external neurolysis may be performed when scar tissue is removed from around the nerve without entering the nerve itself. [ 10 ] Celiac plexus neurolysis (CPN) is the chemical ablation of the celiac plexus. This type of neurolysis is mainly used to treat pain associated with advanced pancreatic cancer. Traditional opioid medications used to treat pancreatic cancer patients may yield inadequate pain relief in the most advanced stages of pancreatic cancer, so the goal of CPN is to increase the efficiency of the medication. This in turn may lead to a decreased dosage, thereby decreasing the severity of the side effects. [ 5 ] CPN is also used to decrease the chances of a patient developing an addiction to opioid medications due to the large doses commonly used in treatment.
[ 5 ] CPN can be performed by percutaneous injection either anterior or posterior to the celiac plexus. [ 11 ] CPN is generally performed as a complement to nerve blocks, due to the severe pain associated with the injection itself. Neurolysis is commonly performed only after a successful celiac plexus block. [ 11 ] CPN and celiac plexus block (CPB) are different in that CPN is permanent ablation whereas CPB is temporary pain inhibition. [ 11 ] There are multiple posterior percutaneous approaches, but no clinical evidence suggests that any one technique is more efficient than the rest. The posterior approaches generally utilize two needles, one at each side of the L1 vertebral body pointing towards the T12 vertebral body. [ 5 ] Increasing the spread of the injection may increase the efficacy of the neurolysis. [ 5 ] Endoscopic ultrasound (EUS)-guided neurolysis is a technique that performs neurolysis using a linear-array echoendoscope. [ 12 ] The EUS technique is minimally invasive and is believed to be safer than the traditional percutaneous approaches. EUS-guided neurolysis can be used to target the celiac plexus, the celiac ganglion, or the broad plexus in the treatment of pancreatic cancer-associated pain. [ 12 ] EUS-guided celiac plexus neurolysis (EUS-CPN) is performed with either an oblique-viewing or forward-viewing echoendoscope, which is passed through the mouth into the esophagus. From the gastroesophageal junction, EUS imaging allows the doctor to visualize the aorta, which can then be traced to the origin of the celiac artery. The celiac plexus itself cannot be identified, but is located relative to the celiac artery. The neurolysis is then performed with a spray needle that disperses a neurolytic agent, such as alcohol or phenol, into the celiac plexus. [ 12 ] EUS-CPN can be performed unilaterally (centrally) or bilaterally; however, there is no clinical evidence supporting the superiority of one over the other. [ 12 ] EUS-guided neurolysis can also be performed on the celiac ganglion and the broad plexus in a similar fashion to EUS-CPN. Celiac ganglion neurolysis (EUS-CGN) is more effective than EUS-CPN, and broad plexus neurolysis (EUS-BPN) is more effective than EUS-CGN. [ 12 ] Lumbar sympathetic neurolysis is typically used on patients with ischemic rest pain, generally associated with nonreconstructable arterial occlusive disease. Although that disease is the usual indication for this type of neurolysis, other conditions such as peripheral neuralgia or vasospastic disorders can also be treated with lumbar sympathetic neurolysis. [ 13 ] Lumbar sympathetic neurolysis is performed between the L1-L4 vertebrae with separate injections at each vertebral junction. The chemicals used for neurolysis of the nerves cause destructive fibrosis and a disruption of the sympathetic ganglia. Vasomotor tone is decreased in the area affected by the neurolysis, which, in addition to arteriovenous shunting, creates a light pink appearance within the affected area. Lumbar sympathetic neurolysis alters ischemic rest pain transmission by changing norepinephrine and catecholamine levels or by disturbing afferent fibers. This procedure is mainly used only when other feasible approaches to pain management cannot be used. [ 13 ] Lumbar sympathetic neurolysis is typically performed using absolute alcohol, but other chemicals such as phenol, and other techniques such as radiofrequency or laser ablation, have been studied.
To aid in the procedure, fluoroscopy or CT guidance is used. Fluoroscopic guidance is the most frequent, giving better real-time monitoring of the needle. The general technique of administering lumbar sympathetic neurolysis involves using three separate needles rather than one because it allows for better longitudinal spread of the chemicals. [ 13 ] Complications can arise from this procedure such as nerve root injury, bleeding, paralysis , and more. Complications have been seen to be diminished when using the aforementioned radiofrequency or laser ablation techniques in comparison to the injection of alcohol or phenol. Generally, approximately two-thirds of patients can expect a favorable outcome (pain relief with minimal complications). Overall, the minimally invasive technique of lumbar sympathetic neurolysis is important in the relief of ischemic rest pain. [ 13 ] Chemodenervation is a process used to manage pain through the use of phenol , alcohol, or a botulinum toxin (botox). [ 6 ] [ 2 ] [ 14 ] The agent of choice is injected into or adjacent to a specific sensory nerve or into muscle fibers to dull neuronal pain signaling. [ 6 ] [ 2 ] As chemical denervation agents, phenol and alcohol are inexpensive, fast-acting, and can be readministered or boosted within months, while also possibly causing scarring or fibrosis. [ 2 ] Cryoneurolysis is the use of ultracold miniature probes to inhibit sensory nerve function causing pain. [ 15 ] [ 16 ] The method involves compressing a gas ( carbon dioxide or nitrous oxide ) through a small aperture into a larger outer tube (1.4-2 mm diameter) at a lower pressure, enabling the gas to expand rapidly at the ablation tip. [ 16 ] The rapid expansion of gas moving from a high to a low pressure through the narrow probe aperture causes a rapid, substantial decrease in temperature ( Joule–Thomson effect ) to around −70 °C (−94 °F). [ 16 ] Applied for two-three minutes at each targeted nerve site, the ultracold gas produces ice crystals which cause edema at the nerve site, blocking nerve transmission and pain signals. [ 15 ] [ 16 ] Under research and limited clinical use as of 2024, chronic pain conditions treated by cryoneurolysis include knee osteoarthritis, neuropathies , post-mastectomy pain syndrome , phantom limb pain , headaches, leg and shoulder pain, and sacroiliac joint pain. [ 16 ] [ 17 ] The efficacy of cryoneurolysis compared to other more common neurolytic methods for pain conditions is under study. [ 15 ] [ 16 ] [ 17 ] Among possible clinical complications are infection at the injection site, inflammation and pain at the injection or catheter site, bleeding or bruising from injury of small blood vessels, nerve injury, allergic reaction from a local anesthetic or neurolytic medication, or tinnitus and flushing from an agent like phenol. [ 1 ]
https://en.wikipedia.org/wiki/Neurolysis
Neuromechanics is an interdisciplinary field that combines biomechanics and neuroscience to understand how the nervous system interacts with the skeletal and muscular systems to enable animals to move. [ 1 ] [ 2 ] In a motor task, like reaching for an object, neural commands are sent to motor neurons to activate a set of muscles, called a muscle synergy. Given which muscles are activated and how they are connected to the skeleton, there will be a corresponding and specific movement of the body. [ 3 ] In addition to participating in reflexes, neuromechanical processes may also be shaped through motor adaptation and learning. [ 4 ] The inverted pendulum theory of gait is a neuromechanical approach to understanding how humans walk. As the name of the theory implies, a walking human is modeled as an inverted pendulum consisting of a center of mass (COM) suspended above the ground via a support leg. As the inverted pendulum swings forward, ground reaction forces occur between the modeled leg and the ground. Importantly, the magnitude of the ground reaction forces depends on the COM position and size. The velocity vector of the center of mass is always perpendicular to the ground reaction force. [ 5 ] Walking consists of alternating single-support and double-support phases. The single-support phase occurs when one leg is in contact with the ground, while the double-support phase occurs when two legs are in contact with the ground. [ 6 ] The inverted pendulum is stabilized by constant feedback from the brain and can operate even in the presence of sensory loss. In animals that have lost all sensory input to the moving limb, the variables produced by gait (center of mass acceleration, velocity of the animal, and position of the animal) remain the same as in animals with intact sensation. [ 7 ] During postural control, delayed feedback mechanisms are used in the temporal reproduction of task-level functions such as walking. The nervous system takes into account feedback from the center of mass acceleration, velocity, and position of an individual and utilizes the information to predict and plan future movements. Center of mass acceleration is essential in the feedback mechanism because this feedback takes place before any significant displacement data can be determined. [ 8 ] The inverted pendulum theory directly contradicts the six determinants of gait, another theory for gait analysis. [ 9 ] The six determinants of gait predict very high energy expenditure for the sinusoidal motion of the center of mass during gait, while the inverted pendulum theory offers the possibility that energy expenditure can be near zero; the inverted pendulum theory predicts that little to no work is required for walking. [ 5 ] Electromyography (EMG) is a tool used to measure the electrical outputs produced by skeletal muscles upon activation. Motor nerves innervate skeletal muscles and cause contraction upon command from the central nervous system. This contraction is measured by EMG and is typically on the scale of millivolts (mV). Another form of EMG data that is analyzed is integrated EMG (iEMG) data. iEMG measures the area under the EMG signal, which corresponds to the overall muscle effort rather than the effort at a specific instant. There are four instrumentation components used to detect these signals: (1) the signal source, (2) the transducer used to detect the signal, (3) the amplifier, and (4) the signal processing circuit. [ 10 ] The signal source refers to the location at which the EMG electrode is placed.
EMG signal acquisition is dependent on the distance from the electrode to the muscle fiber, so placement is imperative. The transducer used to detect the signal is an EMG electrode that transforms the bioelectric signal from the muscle into a readable electric signal. [ 10 ] The amplifier reproduces an undistorted bioelectric signal and also allows for noise reduction in the signal. [ 10 ] Signal processing involves taking the recorded electrical impulses, filtering them, and enveloping the data. [ 10 ] Latency is a measure of the time span between the activation of a muscle and its peak EMG value. Latency is used as a means to diagnose disorders of the nervous system such as a herniated disc, amyotrophic lateral sclerosis (ALS), or myasthenia gravis (MG). [ 11 ] These disorders may cause a disruption of the signal at the muscle, the nerve, or the junction between the muscle and the nerve. The use of EMG to identify nervous system disorders is known as a nerve conduction study (NCS). Nerve conduction studies can only diagnose diseases at the muscular and nerve level; they cannot detect disease in the spinal cord or the brain. In most disorders of the muscle, nerve, or neuromuscular junction, the latency time is increased. [ 12 ] This is a result of decreased nerve conduction or electrical stimulation at the site of the muscle. In 50% of patients with cerebral atrophy, the M3 spinal reflex latency was increased and on occasion separated from the M2 spinal reflex response. [ 13 ] [ 14 ] The separation between the M2 and M3 spinal reflex responses is typically 20 milliseconds, but in patients with cerebral atrophy the separation was increased to 50 ms. In some cases, however, other muscles can compensate for a muscle suffering from decreased electrical stimulation. In the compensatory muscle, the latency time is actually decreased in order to substitute for the function of the diseased muscle. [ 15 ] These kinds of studies are used in neuromechanics to identify motor disorders and their effects on a cellular and electrical level rather than at the level of whole-body motion. A muscle synergy is a group of agonist and synergistic muscles that work together to perform a motor task. An agonist muscle is a muscle that contracts individually, and it can cause a cascade of motion in neighboring muscles. Synergistic muscles aid the agonist muscles in motor control tasks, but they act against excess motion that the agonists may create. The muscle synergy hypothesis is based on the assumption that the central nervous system controls muscle groups independently rather than individual muscles. [ 16 ] [ 17 ] The muscle synergy hypothesis presents motor control as a three-tiered hierarchy. In tier one, a motor task vector is created by the central nervous system. The central nervous system then transforms this vector to act upon a group of muscle synergies in tier two. In tier three, the muscle synergies define a specific ratio of the motor task for each muscle and assign it to its respective muscle to act upon the joint and perform the motor task. Redundancy plays a large role in muscle synergy. Muscle redundancy is a degrees-of-freedom problem at the muscular level. [ 18 ] The central nervous system is presented with many possible ways to coordinate muscle movements, and it must choose one out of many. The muscle redundancy problem arises because there are more muscle vectors than dimensions in the task space.
Muscles can only generate tension by pulling, not pushing. This results in many muscle force vectors in multiple directions rather than a push and pull in the same direction. One debate on muscle synergies is between the prime mover strategy and the cooperation strategy. [ 18 ] The prime mover strategy applies when a muscle's vector can act in the same direction as the mechanical action vector, the vector of the limb's motion. The cooperation strategy, however, takes place when no muscle can act directly in the direction of the mechanical action vector, resulting in the coordination of multiple muscles to achieve the task. The prime mover strategy has declined in popularity over time, as electromyography studies have found that no one muscle consistently provides more force than the other muscles acting about a joint. [ 19 ] The muscle synergy theory is difficult to falsify. [ 20 ] Though experimentation has shown that groups of muscles indeed work together to control motor tasks, neural connections also allow individual muscles to be activated. Though individual muscle activation may contradict muscle synergy, it also obscures it: activation of individual muscles may override or block the input from, and the overall effect of, muscle synergies. [ 20 ] Adaptation in the neuromechanical sense is the body's ability to change an action to better suit the situation or environment in which it is acting. Adaptation can be a result of injury, fatigue, or practice. Adaptation can be measured in a variety of ways: electromyography, three-dimensional reconstruction of joints, and changes in other variables pertaining to the specific adaptation being studied. Injury can cause adaptation in a number of ways. Compensation is a large factor in injury adaptation. Compensation is a result of one or more weakened muscles. The brain is given a certain motor task to perform, and once a muscle has been weakened, the brain computes energy ratios to send to other muscles to perform the original task in the desired fashion. Change in muscle contribution is not the only byproduct of a muscle-related injury. Change in loading of the joint is another result which, if prolonged, can be harmful for the individual. [ 21 ] Muscle fatigue is the neuromuscular adaptation to challenges over a period of time. The use of motor units over a period of time can result in changes in the motor command from the brain. Since the force of contraction of individual motor units cannot be changed, the brain instead recruits more motor units to achieve maximal muscle contraction. [ 22 ] Recruitment of motor units varies from muscle to muscle depending on the upper limit of motor recruitment in the muscle. [ 22 ] Adaptation due to practice can be a result of intended practice such as sports or unintended practice such as wearing an orthosis. In athletes, repetition results in muscle memory. The motor task becomes a long-term memory that can be repeated without much conscious effort. This allows the athlete to focus on fine-tuning their motor task strategy. Resistance to fatigue also comes with practice as the muscle is strengthened, and the speed at which an athlete can complete a motor task also increases with practice. [ 23 ] Compared with non-jumpers, volleyball players show more repeatable, co-activation-based control of the muscles surrounding the knee in the single-jump condition. [ 23 ] In the repeated-jump condition, both volleyball players and non-jumpers show a linear decrease in normalized jump flight time.
[ 23 ] Though the normalized linear decrease is the same for athletes and non-athletes, athletes consistently have higher flight times. There is also adaptation associated with use of a prosthesis or an orthosis. This operates similarly to adaptation due to fatigue; however, muscles can actually be fatigued or alter their mechanical contribution to a motor task as a result of wearing the orthosis. An ankle foot orthosis is a common solution to injury of the lower limb, specifically around the ankle joint. An ankle foot orthosis can be assistive or resistive. An assistive ankle orthosis encourages ankle movement, and a resistive ankle orthosis inhibits ankle movement. [ 24 ] Upon wearing an assistive ankle foot orthosis, individuals have decreased EMG amplitude and joint stiffness over time while the opposite occurs for resistive ankle foot orthoses. [ 24 ] Additionally, not only can electromyography readings differ, but the physical path that joints travel along can be altered as well. [ 25 ]
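The EMG processing pipeline described earlier in this article (rectification, filtering or enveloping, and integration to obtain iEMG, plus a latency measure relating activation to the peak of the signal) can be illustrated with a minimal sketch. The Python example below is a simplified, hypothetical illustration: the synthetic signal, sampling rate, window length, and onset time are assumptions chosen for demonstration, not standard values from the neuromechanics literature.

```python
import numpy as np

# --- Synthetic surface EMG: baseline noise plus a burst of activity (assumed values) ---
fs = 1000                      # sampling rate in Hz (assumption)
t = np.arange(0, 2.0, 1 / fs)  # 2 s recording
rng = np.random.default_rng(0)
emg = 0.02 * rng.standard_normal(t.size)              # resting baseline (mV)
burst = (t > 0.5) & (t < 1.2)                          # muscle active from 0.5 s to 1.2 s
emg[burst] += 0.4 * rng.standard_normal(burst.sum())   # activation burst (mV)

# --- Rectification: the raw EMG is bipolar, so take the absolute value ---
rectified = np.abs(emg)

# --- Envelope: moving-average (low-pass) filter over a 100 ms window ---
win = int(0.1 * fs)
envelope = np.convolve(rectified, np.ones(win) / win, mode="same")

# --- iEMG: area under the rectified signal (mV*s), i.e. overall muscle effort ---
iemg = np.trapz(rectified, dx=1 / fs)

# --- Latency-style measure: time from activation onset to the peak of the envelope ---
onset_time = 0.5                       # activation time is known in this synthetic example
latency = t[np.argmax(envelope)] - onset_time

print(f"iEMG = {iemg:.3f} mV*s, time to peak activity = {latency * 1000:.0f} ms")
```

In a real recording the onset would itself be estimated (for example by a threshold on the envelope), and the filter settings would depend on the instrumentation described above.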
https://en.wikipedia.org/wiki/Neuromechanics
Neuromelanin (NM) is a dark pigment found in the brain which is structurally related to melanin. It is a polymer of 5,6-dihydroxyindole monomers. [ 1 ] Neuromelanin is found in large quantities in catecholaminergic cells of the substantia nigra pars compacta and locus coeruleus, giving a dark color to these structures. [ 2 ] Neuromelanin gives specific brain regions, such as the substantia nigra or the locus coeruleus, a distinct color. It is a type of melanin and is similar to other forms of peripheral melanin. It is insoluble in organic solvents and can be labeled by silver staining. It is called neuromelanin because of its function and the color change that appears in tissues containing it. It contains black/brown pigmented granules. Neuromelanin accumulates during aging, noticeably after the first 2–3 years of life. It is believed to protect neurons in the substantia nigra from iron-induced oxidative stress. It is considered a true melanin due to its stable free radical structure and the fact that it avidly chelates metals. [ 3 ] Neuromelanin is biosynthesized directly from L-DOPA, the precursor to dopamine, by tyrosine hydroxylase (TH) and aromatic amino acid decarboxylase (AADC). Alternatively, synaptic vesicles and endosomes accumulate cytosolic dopamine (via vesicular monoamine transporter 2, VMAT2) and transport it to mitochondria, where it is metabolized by monoamine oxidase. Excess dopamine and DOPA molecules are oxidized by iron catalysis into dopaquinones and semiquinones, which are then phagocytosed and stored as neuromelanin. [ 4 ] Neuromelanin biosynthesis is driven by excess cytosolic catecholamines not accumulated by synaptic vesicles. [ 5 ] Neuromelanin is found in higher concentrations in humans than in other primates. [ 2 ] Neuromelanin concentration increases with age, suggesting a role in neuroprotection (neuromelanin can chelate metals and xenobiotics [ 6 ]) or senescence. [ citation needed ] Neuromelanin-containing neurons in the substantia nigra degenerate during Parkinson's disease. [ citation needed ] Motor symptoms of Parkinson's disease are caused by cell death in the substantia nigra, which may be partly due to oxidative stress. [ citation needed ] This oxidation may be relieved by neuromelanin. [ citation needed ] Patients with Parkinson's disease had 50% of the amount of neuromelanin in the substantia nigra compared with similar individuals of the same age without Parkinson's. [ citation needed ] The death of neuromelanin-containing neurons in the substantia nigra pars compacta and locus coeruleus has been linked to Parkinson's disease and has also been visualized in vivo with neuromelanin imaging. [ 7 ] Neuromelanin has been shown to bind neurotoxic and toxic metals that could promote neurodegeneration. [ 5 ] Dark pigments in the substantia nigra were first described in 1838 by Purkyně, [ 8 ] and the term neuromelanin was proposed in 1957 by Lillie, [ 9 ] though it was thought to serve no function until recently. It is now believed to play a vital role in preventing cell death in certain parts of the brain. It has been linked to Parkinson's disease and, because of this possible connection, neuromelanin has been heavily researched in the last decade. [ 10 ]
https://en.wikipedia.org/wiki/Neuromelanin
In neuroscience , a neurometric function is a mathematical formula relating the activity of brain cells to aspects of an animal's sensory experience or motor behavior. Neurometric functions provide a quantitative summary of the neural code of a particular brain region. In sensory neuroscience, neurometric functions measure the probability with which a sensory stimulus would be perceived based on decoding the activity of a given neuron or collection of neurons. The concept was introduced to investigate the visibility of visual stimuli, by applying Detection theory to the output of single neurons of visual cortex . [ 1 ] Comparing neurometric functions to psychometric functions (by recording from neurons in the brain of the observer) can reveal whether the neural representation in the recorded region constrains perceptual accuracy. [ 2 ] [ 3 ] In motor neuroscience, neurometric functions are used to predict body movements from the activity of neuronal populations in regions such as motor cortex . Such neurometric functions are used in the design of brain–computer interfaces . This neuroscience article is a stub . You can help Wikipedia by expanding it .
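As a rough illustration of the detection-theory idea behind a neurometric function, the sketch below simulates spike counts of a hypothetical sensory neuron on stimulus-absent and stimulus-present trials at several stimulus intensities, then estimates, for each intensity, the probability that an ideal observer decoding only that neuron would correctly detect the stimulus (the area under the ROC curve). The Poisson response model, firing rates, and intensity values are illustrative assumptions, not data from the studies cited in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 2000
baseline_rate = 5.0                                  # mean spike count with no stimulus (assumption)
intensities = np.array([0.0, 0.5, 1.0, 2.0, 4.0])    # arbitrary stimulus intensities
gain = 3.0                                           # extra spikes per unit intensity (assumption)

def roc_area(noise_counts, signal_counts):
    """Probability that a random signal-trial count exceeds a random noise-trial count
    (ties counted as half) -- the area under the ROC curve for an ideal observer."""
    greater = (signal_counts[:, None] > noise_counts[None, :]).mean()
    ties = (signal_counts[:, None] == noise_counts[None, :]).mean()
    return greater + 0.5 * ties

noise = rng.poisson(baseline_rate, n_trials)                       # stimulus-absent trials
neurometric = []
for s in intensities:
    signal = rng.poisson(baseline_rate + gain * s, n_trials)       # stimulus-present trials
    neurometric.append(roc_area(noise, signal))

for s, p in zip(intensities, neurometric):
    print(f"intensity {s:4.1f}: P(correct detection) ~ {p:.2f}")
```

Plotting these probabilities against stimulus intensity yields the neurometric curve, which can then be compared with a psychometric curve measured behaviorally from the same observer.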
https://en.wikipedia.org/wiki/Neurometric_function
Neuromodulation is the physiological process by which a given neuron uses one or more chemicals to regulate diverse populations of neurons. Neuromodulators typically bind to metabotropic , G-protein coupled receptors (GPCRs) to initiate a second messenger signaling cascade that induces a broad, long-lasting signal. This modulation can last for hundreds of milliseconds to several minutes. Some of the effects of neuromodulators include altering intrinsic firing activity, [ 1 ] increasing or decreasing voltage-dependent currents, [ 2 ] altering synaptic efficacy, increasing bursting activity [ 2 ] and reconfiguring synaptic connectivity. [ 3 ] Major neuromodulators in the central nervous system include: dopamine , serotonin , acetylcholine , histamine , norepinephrine , nitric oxide , and several neuropeptides . Cannabinoids can also be powerful CNS neuromodulators. [ 4 ] [ 5 ] [ 6 ] Neuromodulators can be packaged into vesicles and released by neurons, secreted as hormones and delivered through the circulatory system. [ 7 ] A neuromodulator can be conceptualized as a neurotransmitter that is not reabsorbed by the pre-synaptic neuron or broken down into a metabolite. Some neuromodulators end up spending a significant amount of time in the cerebrospinal fluid (CSF), influencing (or "modulating") the activity of several other neurons in the brain . [ 8 ] The major neurotransmitter systems are the noradrenaline (norepinephrine) system, the dopamine system, the serotonin system, and the cholinergic system. Drugs targeting the neurotransmitter of such systems affect the whole system, which explains the mode of action of many drugs. [ citation needed ] Most other neurotransmitters, on the other hand, e.g. glutamate , GABA and glycine , are used very generally throughout the central nervous system. The noradrenaline system consists of around 15,000 neurons, primarily in the locus coeruleus . [ 12 ] This is diminutive compared to the more than 100 billion neurons in the brain. As with dopaminergic neurons in the substantia nigra, neurons in the locus coeruleus tend to be melanin -pigmented. Noradrenaline is released from the neurons, and acts on adrenergic receptors . Noradrenaline is often released steadily so that it can prepare the supporting glial cells for calibrated responses. Despite containing a relatively small number of neurons, when activated, the noradrenaline system plays major roles in the brain including involvement in suppression of the neuroinflammatory response, stimulation of neuronal plasticity through LTP, regulation of glutamate uptake by astrocytes and LTD, and consolidation of memory. [ 13 ] The dopamine or dopaminergic system consists of several pathways, originating from the ventral tegmentum or substantia nigra as examples. It acts on dopamine receptors . [ 14 ] Parkinson's disease is at least in part related to dropping out of dopaminergic cells in deep-brain nuclei , primarily the melanin-pigmented neurons in the substantia nigra but secondarily the noradrenergic neurons of the locus coeruleus. Treatments potentiating the effect of dopamine precursors have been proposed and effected, with moderate success. [ citation needed ] The serotonin created by the brain comprises around 10% of total body serotonin. The majority (80-90%) is found in the gastrointestinal (GI) tract. [ 15 ] [ 16 ] It travels around the brain along the medial forebrain bundle and acts on serotonin receptors . In the peripheral nervous system (such as in the gut wall) serotonin regulates vascular tone. 
[ citation needed ] Although changes in neurochemistry are found immediately after taking antidepressants that act on these systems, symptoms may not begin to improve until several weeks after administration. Increased transmitter levels in the synapse alone do not relieve the depression or anxiety. [ 17 ] [ 19 ] [ 22 ] The cholinergic system consists of projection neurons from the pedunculopontine nucleus, laterodorsal tegmental nucleus, and basal forebrain, and interneurons from the striatum and nucleus accumbens. It is not yet clear whether acetylcholine as a neuromodulator acts through volume transmission or classical synaptic transmission, as there is evidence to support both theories. Acetylcholine binds to both metabotropic muscarinic receptors (mAChR) and ionotropic nicotinic receptors (nAChR). The cholinergic system has been found to be involved in responding to cues related to the reward pathway, enhancing signal detection and sensory attention, regulating homeostasis, mediating the stress response, and encoding the formation of memories. [ 23 ] [ 24 ] Gamma-aminobutyric acid (GABA) has an inhibitory effect on brain and spinal cord activity. [ 17 ] GABA is an amino acid that is the primary inhibitory neurotransmitter of the central nervous system (CNS). It reduces neuronal excitability by inhibiting nerve transmission. GABA has a multitude of different functions during development and influences the migration, proliferation, and proper morphological development of neurons. It also influences the timing of critical periods and potentially primes the earliest neuronal networks. There are two main types of GABA receptors: GABAa and GABAb. GABAa receptors inhibit neurotransmitter release and/or neuronal excitability and are ligand-gated chloride channels. GABAb receptors are slower to react because they are GPCRs that act to inhibit neurons. Dampening of GABA's inhibitory action has been implicated in many disorders, ranging from schizophrenia to major depressive disorder. [ 25 ] [ 26 ] [ 27 ] Neuropeptides are small proteins used for communication in the nervous system. Neuropeptides represent the most diverse class of signaling molecules, and vary considerably between animals. There are 90 known genes that encode human neuropeptide precursors. In the fruit fly Drosophila there are ~50 known genes encoding precursors, [ 28 ] and in the worm C. elegans 120 genes specify more than 250 neuropeptides. [ 29 ] Most neuropeptides bind to G-protein coupled receptors; however, some neuropeptides directly gate ion channels [ 30 ] or act through kinase receptors. [ 31 ] Neuromodulators may alter the output of a physiological system by acting on the associated inputs (for instance, central pattern generators). However, modeling work suggests that this alone is insufficient, [ 34 ] because the neuromuscular transformation from neural input to muscular output may be tuned for particular ranges of input. Stern et al. (2007) suggest that neuromodulators must act not only on the input system but must change the transformation itself to produce the proper contractions of muscles as output. [ 34 ] Neurotransmitter systems are systems of neurons in the brain expressing certain types of neurotransmitters, and thus form distinct systems. Activation of a system causes effects in large volumes of the brain, called volume transmission.
[ 35 ] Volume transmission is the diffusion of neurotransmitters through the brain extracellular fluid, released at points that may be remote from the target cells, with the resulting activation of extra-synaptic receptors and a longer time course than transmission at a single synapse. [ 36 ] Such prolonged transmitter action is called tonic transmission, in contrast to the phasic transmission that occurs rapidly at single synapses. [ 37 ] [ 38 ] There are three main components of tonic transmission: continuous release, sustained influence, and baseline regulation. In the context of neuromodulation, continuous release refers to the release of neurotransmitters/neuromodulators at a constant low level from glial cells and tonically active neurons. Sustained influence provides long-term stability to the entire process, and baseline regulation ensures that neurons are in a continued state of readiness to respond to any signals. Acetylcholine, noradrenaline (norepinephrine), dopamine, and serotonin are some of the main transmitters involved in tonic transmission, mediating arousal and attention. [ 39 ] There are three main components of phasic transmission: burst release, transient effects, and stimulus-driven effects. As the name suggests, burst release refers to the release of neurotransmitters/neuromodulators in intense, acute bursts. Transient effects are acute, momentary adjustments in neural activity. Lastly, stimulus-driven effects are reactions to sensory input, external stressors, and reward stimuli, and involve dopamine, norepinephrine, and serotonin. [ 40 ] In medicine, the term neuromodulation also refers to the targeted artificial modification of neuronal activity through the delivery of chemical agents or electroceutical stimulation to specific neurological targets (see Neuromodulation (medicine)). [ 41 ] Invasive and non-invasive treatment methods form a field of medicine called neurotherapy. There are two main categories of neuromodulation therapy: chemical and electroceutical. Noninvasive electroceutical neurotherapy consists of five techniques. [ 42 ] Electrical neuromodulation has three subcategories: deep brain, spinal cord, and transcranial, each aiming to treat specific conditions. Deep brain stimulation involves electrodes surgically implanted into specific sections of the brain that are commonly responsible for movement and motor control, to treat deficiencies and disorders like Parkinson's disease and tremor. Spinal cord stimulation works via electrodes placed near the spinal cord that send electrical signals through the body to treat various forms of chronic pain, such as lower back pain and complex regional pain syndrome (CRPS). This form of neuromodulation treatment is considered one of the higher-risk treatments because of its manipulation near the spinal cord. Transcranial magnetic stimulation is slightly different in that it utilizes a magnetic field to generate electrical currents in the brain. This treatment is widely used to treat various mental health conditions like depression, obsessive-compulsive disorder, and other mood disorders. [ 50 ] Neuromodulation is often used as a treatment for moderate to severe migraines by way of nerve stimulation. These treatments work by utilizing the basic ascending pathways. There are three main modes.
These treatments work by connecting a device to the body that sends electrical pulses directly to the affected site (transcutaneous electrical nerve stimulation) or directly to the brain (invasive electrical neurotherapy techniques), or by holding a device close to the neck to block the modulation of pain signals from the PNS to the CNS. [ 51 ] The two most notable modes of such treatment are electrical and magnetic stimulation. Electrical nerve stimulation includes transcranial alternating current stimulation and transcranial direct current stimulation, while magnetic stimulation includes single-pulse and repetitive transcranial magnetic stimulation. [ citation needed ] Chemical neuromodulation mostly consists of combining natural and artificial chemical substances to treat various conditions. It uses both invasive and non-invasive modes of treatment, including pumps, injections, and oral medications. This mode of treatment can be used to manage immune responses such as inflammation, as well as mood and motor disorders. [ 52 ]
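To make the contrast between tonic and phasic transmission described above more concrete, the following toy simulation tracks the extracellular concentration of a generic neuromodulator under constant low-level (tonic) release and under brief stimulus-driven bursts (phasic release), each cleared by first-order decay. The rate constants, burst timing, and units are arbitrary assumptions chosen only to illustrate the qualitative difference, not measured physiological values.

```python
import numpy as np

# Toy model: dC/dt = release(t) - k_clear * C, integrated with a simple Euler step.
dt = 0.001                 # time step in seconds (assumption)
t = np.arange(0, 10, dt)   # 10 s of simulated time
k_clear = 1.0              # first-order clearance rate constant, 1/s (assumption)

tonic_release = np.full(t.size, 0.5)            # constant low-level release (a.u./s)
phasic_release = np.zeros(t.size)
for burst_start in (2.0, 5.0, 8.0):             # three brief stimulus-driven bursts
    phasic_release[(t >= burst_start) & (t < burst_start + 0.05)] = 40.0

def integrate(release):
    c = np.zeros(t.size)
    for i in range(1, t.size):
        c[i] = c[i - 1] + dt * (release[i - 1] - k_clear * c[i - 1])
    return c

tonic_c, phasic_c = integrate(tonic_release), integrate(phasic_release)
print(f"tonic:  sustained concentration ~ {tonic_c[-1]:.2f} a.u.")
print(f"phasic: peak {phasic_c.max():.2f} a.u., but mean only {phasic_c.mean():.2f} a.u.")
```

The tonic case settles at a steady low level (release rate divided by clearance), while the phasic case produces short-lived peaks that decay back toward baseline, mirroring the qualitative distinction drawn in the text.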
https://en.wikipedia.org/wiki/Neuromodulation
Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. [ 1 ] [ 2 ] A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. [ 3 ] [ 4 ] In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). Recent advances have even discovered ways to mimic the human nervous system through liquid solutions of chemical systems. [ 5 ] An article published by AI researchers at Los Alamos National Laboratory states that "neuromorphic computing, the next generation of AI, will be smaller, faster, and more efficient than the human brain". [ 6 ] A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change. Neuromorphic engineering is an interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science, and electronic engineering [ 4 ] to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems. [ 7 ] One of the first applications of neuromorphic engineering was proposed by Carver Mead [ 8 ] in the late 1980s. Neuromorphic engineering is for now set apart by the inspiration it takes from what is known about the structure and operations of the brain. Neuromorphic engineering translates what we know about the brain's function into computer systems. Work has mostly focused on replicating the analog nature of biological computation and the role of neurons in cognition. [ citation needed ] The biological processes of neurons and their synapses are dauntingly complex, and thus very difficult to artificially simulate. A key feature of biological brains is that all of the processing in neurons uses analog chemical signals. This makes it hard to replicate brains in computers because the current generation of computers is completely digital. However, the characteristics of these chemical signals can be abstracted into mathematical functions that closely capture the essence of the neuron's operations. [ citation needed ] The goal of neuromorphic computing is not to perfectly mimic the brain and all of its functions, but instead to extract what is known of its structure and operations to be used in a practical computing system. No neuromorphic system claims or attempts to reproduce every element of neurons and synapses, but all adhere to the idea that computation is highly distributed throughout a series of small computing elements analogous to a neuron. While this sentiment is standard, researchers pursue this goal with different methods. [ 9 ] Anatomical neural wiring diagrams that are being imaged by electron microscopy [ 10 ] and functional neural connection maps that could potentially be obtained via intracellular recording at scale [ 11 ] can be used to inform, if not be exactly mimicked by, more detailed neuromorphic computing systems.
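One common way to abstract a neuron's analog dynamics into a simple mathematical function, as described above, is the leaky integrate-and-fire (LIF) model, some form of which underlies many spiking-network simulators and neuromorphic designs. The sketch below is a minimal, generic Python/NumPy version with arbitrary parameter values chosen only for illustration; it is not the model used by any particular chip or framework named in this article.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest,
# integrates input current, and emits a spike when it crosses a threshold.
dt = 1e-4          # time step, s (assumption)
tau = 0.02         # membrane time constant, s (assumption)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0   # normalized membrane potentials
steps = 5000

rng = np.random.default_rng(0)
input_current = 1.2 + 0.5 * rng.standard_normal(steps)   # noisy drive (assumption)

v = v_rest
spike_times = []
for i in range(steps):
    # Euler update of dv/dt = (-(v - v_rest) + I(t)) / tau
    v += dt / tau * (-(v - v_rest) + input_current[i])
    if v >= v_thresh:          # threshold crossing -> emit a spike and reset the membrane
        spike_times.append(i * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {steps * dt:.1f} s "
      f"(mean rate ~ {len(spike_times) / (steps * dt):.1f} Hz)")
```

Information is then carried by the timing of these discrete spike events rather than by continuous values, which is the property that event-driven neuromorphic hardware exploits.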
The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors , [ 12 ] spintronic memories, threshold switches, transistors , [ 13 ] [ 4 ] among others. The implementation details overlap with the concepts of Artificial Immune Systems. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g. using Python -based frameworks such as snnTorch, [ 14 ] or using canonical learning rules from the biological learning literature, e.g. using BindsNet. [ 15 ] As early as 2006, researchers at Georgia Tech published a field programmable neural array. [ 16 ] This chip was the first in a line of increasingly complex arrays of floating gate transistors that allowed programmability of charge on the gates of MOSFETs to model the channel-ion characteristics of neurons in the brain and was one of the first cases of a silicon programmable array of neurons. In November 2011, a group of MIT researchers created a computer chip that mimics the analog, ion-based communication in a synapse between two neurons using 400 transistors and standard CMOS manufacturing techniques. [ 17 ] [ 18 ] In June 2012, spintronic researchers at Purdue University presented a paper on the design of a neuromorphic chip using lateral spin valves and memristors . They argue that the architecture works similarly to neurons and can therefore be used to test methods of reproducing the brain's processing. In addition, these chips are significantly more energy-efficient than conventional ones. [ 19 ] Research at HP Labs on Mott memristors has shown that while they can be non- volatile , the volatile behavior exhibited at temperatures significantly below the phase transition temperature can be exploited to fabricate a neuristor , [ 20 ] a biologically-inspired device that mimics behavior found in neurons. [ 20 ] In September 2013, they presented models and simulations that show how the spiking behavior of these neuristors can be used to form the components required for a Turing machine . [ 21 ] Neurogrid , built by Brains in Silicon at Stanford University , [ 22 ] is an example of hardware designed using neuromorphic engineering principles. The circuit board is composed of 16 custom-designed chips, referred to as NeuroCores. Each NeuroCore's analog circuitry is designed to emulate neural elements for 65536 neurons, maximizing energy efficiency. The emulated neurons are connected using digital circuitry designed to maximize spiking throughput. [ 23 ] [ 24 ] A research project with implications for neuromorphic engineering is the Human Brain Project that is attempting to simulate a complete human brain in a supercomputer using biological data. It is made up of a group of researchers in neuroscience, medicine, and computing. [ 25 ] Henry Markram , the project's co-director, has stated that the project proposes to establish a foundation to explore and understand the brain and its diseases, and to use that knowledge to build new computing technologies. The three primary goals of the project are to better understand how the pieces of the brain fit and work together, to understand how to objectively diagnose and treat brain diseases and to use the understanding of the human brain to develop neuromorphic computers. Since the simulation of a complete human brain will require a powerful supercomputer, the current focus on neuromorphic computers is being encouraged. 
[ 26 ] $1.3 billion has been allocated to the project by the European Commission . [ 27 ] Other research with implications for neuromorphic engineering involves the BRAIN Initiative [ 28 ] and the TrueNorth chip from IBM . [ 29 ] Neuromorphic devices have also been demonstrated using nanocrystals, nanowires, and conducting polymers. [ 30 ] There is also development of a memristive device for quantum neuromorphic architectures. [ 31 ] In 2022, researchers at MIT reported the development of brain-inspired artificial synapses , using the ion proton ( H + ), for 'analog deep learning '. [ 32 ] [ 33 ] Intel unveiled its neuromorphic research chip, called " Loihi ", in October 2017. The chip uses an asynchronous spiking neural network (SNN) to implement adaptive self-modifying event-driven fine-grained parallel computations used to implement learning and inference with high efficiency. [ 34 ] [ 35 ] IMEC , a Belgium-based nanoelectronics research center, demonstrated the world's first self-learning neuromorphic chip. The brain-inspired chip, based on OxRAM technology, has the capability of self-learning and has been demonstrated to have the ability to compose music. [ 36 ] IMEC released the 30-second tune composed by the prototype. The chip was sequentially loaded with songs in the same time signature and style. The songs were old Belgian and French flute minuets, from which the chip learned the rules at play and then applied them. [ 37 ] The Blue Brain Project , led by Henry Markram, aims to build biologically detailed digital reconstructions and simulations of the mouse brain. The Blue Brain Project has created in silico models of rodent brains, while attempting to replicate as many details about their biology as possible. The supercomputer-based simulations offer new perspectives on understanding the structure and functions of the brain. The European Union funded a series of projects at the University of Heidelberg, which led to the development of BrainScaleS (brain-inspired multiscale computation in neuromorphic hybrid systems), a hybrid analog neuromorphic supercomputer located at Heidelberg University, Germany. It was developed as part of the Human Brain Project neuromorphic computing platform and is the complement to the SpiNNaker supercomputer (which is based on digital technology). The architecture used in BrainScaleS mimics biological neurons and their connections on a physical level; additionally, since the components are made of silicon, these model neurons operate on average 864 times faster (24 hours of real time is 100 seconds in the machine simulation) than their biological counterparts. [ 38 ] In 2019, the European Union funded the project "Neuromorphic quantum computing" [ 39 ] exploring the use of neuromorphic computing to perform quantum operations. Neuromorphic quantum computing [ 40 ] (abbreviated as 'n.quantum computing') is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. [ 41 ] [ 42 ] It was suggested that quantum algorithms , which are algorithms that run on a realistic model of quantum computation , can be computed equally efficiently with neuromorphic quantum computing. [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional approaches to computation and do not follow the von Neumann architecture .
They both construct a system (a circuit) that represents the physical problem at hand, and then leverage the respective physical properties of the system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation. [ 47 ] [ 48 ] Brainchip announced in October 2021 that it was taking orders for its Akida AI Processor Development Kits [ 49 ] and in January 2022 that it was taking orders for its Akida AI Processor PCIe boards, [ 50 ] making it the world's first commercially available neuromorphic processor. Neuromemristive systems are a subclass of neuromorphic computing systems that focuses on the use of memristors to implement neuroplasticity . While neuromorphic engineering focuses on mimicking biological behavior, neuromemristive systems focus on abstraction. [ 51 ] For example, a neuromemristive system may replace the details of a cortical microcircuit's behavior with an abstract neural network model. [ 52 ] There exist several neuron-inspired threshold logic functions [ 12 ] implemented with memristors that are used in high-level pattern recognition applications. Some of the applications reported recently include speech recognition , [ 53 ] face recognition [ 54 ] and object recognition . [ 55 ] They also find applications in replacing conventional digital logic gates. [ 56 ] [ 57 ] For (quasi)ideal passive memristive circuits, the evolution of the memristive memories can be written in a closed form ( Caravelli–Traversa–Di Ventra equation ): [ 58 ] [ 59 ]

$\frac{d\vec{X}}{dt} = -\alpha \vec{X} + \frac{1}{\beta}\,(I - \chi \Omega X)^{-1}\, \Omega \vec{S}$

as a function of the properties of the physical memristive network and the external sources. The equation is valid for the case of the original Williams–Strukov toy model, as in the case of ideal memristors, $\alpha = 0$ . However, the hypothesis of the existence of an ideal memristor is debatable. [ 60 ] In the equation above, $\alpha$ is the "forgetting" time scale constant, typically associated with memory volatility, while $\chi = \frac{R_{\text{off}} - R_{\text{on}}}{R_{\text{off}}}$ is the ratio of the off and on values of the limit resistances of the memristors, $\vec{S}$ is the vector of the sources of the circuit and $\Omega$ is a projector on the fundamental loops of the circuit. The constant $\beta$ has the dimension of a voltage and is associated with the properties of the memristor; its physical origin is the charge mobility in the conductor. The diagonal matrix and vector $X = \operatorname{diag}(\vec{X})$ and $\vec{X}$ , respectively, represent the internal values of the memristors, with values between 0 and 1. This equation thus requires adding extra constraints on the memory values in order to be reliable. It has recently been shown that the equation above exhibits tunneling phenomena and has been used to study Lyapunov functions . [ 61 ] [ 59 ] The concept of neuromorphic systems can be extended to sensors (not just to computation). An example of this applied to detecting light is the retinomorphic sensor or, when employed in an array, the event camera . An event camera's pixels all register changes in brightness levels individually, which makes these cameras comparable to human eyesight in their theoretical power consumption.
[ 62 ] In 2022, researchers from the Max Planck Institute for Polymer Research reported an organic artificial spiking neuron that exhibits the signal diversity of biological neurons while operating in the biological wetware, thus enabling in-situ neuromorphic sensing and biointerfacing applications. [ 63 ] [ 64 ] The Joint Artificial Intelligence Center , a branch of the U.S. military, is a center dedicated to the procurement and implementation of AI software and neuromorphic hardware for combat use. Specific applications include smart headsets/goggles and robots. JAIC intends to rely heavily on neuromorphic technology to connect "every sensor (to) every shooter" within a network of neuromorphic-enabled units. While the interdisciplinary concept of neuromorphic engineering is relatively new, many of the same ethical considerations apply to neuromorphic systems as apply to human-like machines and artificial intelligence in general. However, the fact that neuromorphic systems are designed to mimic a human brain gives rise to unique ethical questions surrounding their usage. However, the practical debate is that neuromorphic hardware as well as artificial "neural networks" are immensely simplified models of how the brain operates or processes information at a much lower complexity in terms of size and functional technology and a much more regular structure in terms of connectivity . Comparing neuromorphic chips to the brain is a very crude comparison similar to comparing a plane to a bird just because they both have wings and a tail. The fact is that biological neural cognitive systems are many orders of magnitude more energy- and compute-efficient than current state-of-the-art AI and neuromorphic engineering is an attempt to narrow this gap by inspiring from the brain's mechanism just like many engineering designs have bio-inspired features . Significant ethical limitations may be placed on neuromorphic engineering due to public perception. [ 65 ] Special Eurobarometer 382: Public Attitudes Towards Robots, a survey conducted by the European Commission, found that 60% of European Union citizens wanted a ban of robots in the care of children, the elderly, or the disabled. Furthermore, 34% were in favor of a ban on robots in education, 27% in healthcare, and 20% in leisure. The European Commission classifies these areas as notably "human." The report cites increased public concern with robots that are able to mimic or replicate human functions. Neuromorphic engineering, by definition, is designed to replicate the function of the human brain. [ 66 ] The social concerns surrounding neuromorphic engineering are likely to become even more profound in the future. The European Commission found that EU citizens between the ages of 15 and 24 are more likely to think of robots as human-like (as opposed to instrument-like) than EU citizens over the age of 55. When presented an image of a robot that had been defined as human-like, 75% of EU citizens aged 15–24 said it corresponded with the idea they had of robots while only 57% of EU citizens over the age of 55 responded the same way. The human-like nature of neuromorphic systems, therefore, could place them in the categories of robots many EU citizens would like to see banned in the future. [ 66 ] As neuromorphic systems have become increasingly advanced, some scholars [ who? ] have advocated for granting personhood rights to these systems. 
Daniel Lim, a critic of technology development in the Human Brain Project , which aims to advance brain-inspired computing, has argued that advancement in neuromorphic computing could lead to machine consciousness or personhood. [ 67 ] If these systems are to be treated as people , then many tasks humans perform using neuromorphic systems, including their termination, may be morally impermissible as these acts would violate their autonomy. [ 67 ] There is significant legal debate around property rights and artificial intelligence. In Acohs Pty Ltd v. Ucorp Pty Ltd , Justice Christopher Jessup of the Federal Court of Australia found that the source code for Material Safety Data Sheets could not be copyrighted as it was generated by a software interface rather than a human author. [ 68 ] The same question may apply to neuromorphic systems: if a neuromorphic system successfully mimics a human brain and produces a piece of original work, who, if anyone, should be able to claim ownership of the work? [ 69 ]
https://en.wikipedia.org/wiki/Neuromorphic_computing
Neuromuscular-blocking drugs , or neuromuscular blocking agents ( NMBAs ), block transmission at the neuromuscular junction , [ 1 ] causing paralysis of the affected skeletal muscles . This is accomplished via their action on the post-synaptic acetylcholine (Nm) receptors. In clinical use, neuromuscular block is used adjunctively to anesthesia to produce paralysis , firstly to paralyze the vocal cords and permit endotracheal intubation , [ 2 ] and secondly to optimize the surgical field by inhibiting spontaneous ventilation and causing relaxation of skeletal muscles. Because the appropriate dose of neuromuscular-blocking drug may paralyze muscles required for breathing (i.e., the diaphragm), mechanical ventilation should be available to maintain adequate respiration . This class of medications helps to reduce patient movement, breathing, or ventilator dyssynchrony and allows lower insufflation pressures during laparoscopy. [ 3 ] [ 4 ] These drugs have several indications for use in the intensive care unit. They can help reduce hoarseness of voice as well as injury to the vocal cords during intubation. In addition, they play an important role in facilitating mechanical ventilation in patients with poor lung function. Patients are still aware of pain even after full conduction block has occurred; hence, general anesthetics and/or analgesics must also be given to prevent anesthesia awareness . Neuromuscular blocking drugs are often classified into two broad classes: depolarizing and non-depolarizing blocking agents. It is also common to classify them based on their chemical structure. Suxamethonium was synthesised by connecting two acetylcholine molecules and has the same number of heavy atoms between methonium heads as decamethonium . Just like acetylcholine, succinylcholine, decamethonium and other polymethylene chains of the appropriate length and with two methonium heads have small trimethylonium heads and flexible links. They all exhibit a depolarizing block. Pancuronium , vecuronium , rocuronium , rapacuronium , dacuronium , malouètine , dihydrochandonium , dipyrandium , pipecuronium , chandonium (HS-310), HS-342 and other HS- compounds are aminosteroidal agents. They have in common the steroid structural base, which provides a rigid and bulky body. Most of the agents in this category would also be classified as non-depolarizing. Compounds based on the tetrahydroisoquinoline moiety such as atracurium , mivacurium , and doxacurium would fall in this category. They have a long and flexible chain between the onium heads, except for the double bond of mivacurium . D-tubocurarine and dimethyltubocurarine are also in this category. Most of the agents in this category would be classified as non-depolarizing. Gallamine is a trisquaternary ether with three ethonium heads attached to a phenyl ring through an ether linkage. Many other different structures have been used for their muscle relaxant effect, such as alcuronium (alloferin), anatruxonium , diadonium , fazadinium (AH8165) and tropeinium . In recent years much research has been devoted to new types of quaternary ammonium muscle relaxants. These are asymmetrical diester isoquinolinium compounds and bis-benzyltropinium compounds that are bistropinium salts of various diacids . These classes have been developed to create muscle relaxants that are faster and shorter acting.
Both the asymmetric structure of diester isoquinolinium compounds and the acyloxylated benzyl groups on the bisbenzyltropiniums destabilize them and can lead to spontaneous breakdown and therefore possibly a shorter duration of action. [ 6 ] These drugs fall into two groups: A neuromuscular non-depolarizing agent is a form of neuromuscular blocker that does not depolarize the motor end plate . [ 8 ] The quaternary ammonium muscle relaxants belong to this class. Quaternary ammonium muscle relaxants are quaternary ammonium salts used as drugs for muscle relaxation , most commonly in anesthesia . Muscle relaxation is necessary to prevent spontaneous movement of muscle during surgical operations . Muscle relaxants inhibit neuron transmission to muscle by blocking the nicotinic acetylcholine receptor . What they have in common, and what is necessary for their effect, is the structural presence of quaternary ammonium groups, usually two. Some of them are found in nature and others are synthesized molecules. [ 9 ] [ 5 ] Below are some of the more common agents that act as competitive antagonists against acetylcholine at the site of postsynaptic acetylcholine receptors. Tubocurarine , found in curare of the South American plant Pareira, Chondrodendron tomentosum , is the prototypical non-depolarizing neuromuscular blocker. It has a slow onset (<5 min) and a long duration of action (30 mins). Side-effects include hypotension , which is partially explained by its effect of increasing histamine release, a vasodilator , [ 10 ] as well as its effect of blocking autonomic ganglia . [ 11 ] It is excreted in the urine . This drug needs to block about 70–80% of the ACh receptors for neuromuscular conduction to fail, and hence for effective blockade to occur. At this stage, end-plate potentials (EPPs) can still be detected, but are too small to reach the threshold potential needed for activation of muscle fiber contraction. The speed of onset depends on the potency of the drug: greater potency is associated with slower onset of block. Rocuronium, with an ED 95 of 0.3 mg/kg IV, has a more rapid onset than vecuronium, with an ED 95 of 0.05 mg/kg. [ 12 ] Steroidal compounds, such as rocuronium and vecuronium, are intermediate-acting drugs, while pancuronium and pipecuronium are long-acting drugs. [ 12 ] [ 13 ] In larger clinical doses, some of the blocking agent can access the pore of the ion channel and block it. This weakens neuromuscular transmission and diminishes the effect of acetylcholinesterase inhibitors (e.g. neostigmine ). [ 14 ] Nondepolarizing NMBAs may also block prejunctional sodium channels, which interferes with the mobilization of acetylcholine at the nerve ending. [ 14 ] A depolarizing neuromuscular blocking agent is a form of neuromuscular blocker that depolarizes the motor end plate . [ 15 ] An example is succinylcholine . Depolarizing blocking agents work by depolarizing the plasma membrane of the muscle fiber, similar to acetylcholine . However, these agents are more resistant to degradation by acetylcholinesterase , the enzyme responsible for degrading acetylcholine, and can thus more persistently depolarize the muscle fibers. This differs from acetylcholine, which is rapidly degraded and only transiently depolarizes the muscle. These agents have two phases of block with notably different characteristics.
During phase I ( depolarizing phase ), succinylcholine interacts with the nicotinic receptor to open the channel and cause depolarization of the end plate , which later spreads to and results in depolarization of adjacent membranes. As a result, there is a disorganised contraction of muscle motor units. [ 14 ] This causes muscular fasciculations (muscle twitches) while they are depolarizing the muscle fibers. The muscle fiber is then held in a partially depolarised state, leading to relaxation. Further administration of the agent leads to phase II block, which has a similar clinical behaviour to non-depolarising blocking agents. Phase II block is characterised by complete membrane repolarisation; however, there is still ongoing neuromuscular blockade. The mechanism of phase II block is not fully understood. The phase I block effect is increased by cholinesterase inhibitors, as an increase in acetylcholine levels leads to deepening of the phase I block because the membrane potential is pushed further away from repolarisation; in phase II block, however, cholinesterase inhibitors inhibit the block. [ 14 ] The prototypical depolarizing blocking drug is succinylcholine (suxamethonium). It is the only such drug used clinically. It has a rapid onset (30 seconds) but very short duration of action (5–10 minutes) because of hydrolysis by various cholinesterases (such as butyrylcholinesterase in the blood). The patient will experience fasciculation due to the depolarisation of muscle fibres and, seconds later, flaccid paralysis will occur. [ 12 ] Succinylcholine was originally known as diacetylcholine because structurally it is composed of two acetylcholine molecules joined with a methyl group. Decamethonium is sometimes, but rarely, used in clinical practice. Succinylcholine is indicated for rapid sequence intubation. The IV dose is 1–1.5 mg/kg, or 3 to 5 × ED 95 ; paralysis occurs in one to two minutes. The clinical duration of action (time from drug administration to recovery of the single twitch to 25% of baseline) is 7–12 minutes. If IV access is unavailable, intramuscular administration of 3–4 mg/kg can be used; paralysis then occurs at about 4 minutes. Use of a succinylcholine infusion or repeated bolus administration increases the risk of phase II block and prolonged paralysis. Phase II block occurs after large doses (>4 mg/kg). This occurs when the post-synaptic membrane action potential returns to baseline in spite of the presence of succinylcholine and continued activation of nicotinic acetylcholine receptors. [ 12 ] The main difference between the two types of neuromuscular-blocking drugs is in the reversal of their block. The tetanic fade is the failure of muscles to maintain a fused tetany at sufficiently high frequencies of electrical stimulation. This discrepancy is diagnostically useful in case of intoxication with an unknown neuromuscular-blocking drug. [ 16 ] Neuromuscular blocking agents exert their effect by modulating signal transmission in skeletal muscles. An action potential is, in other words, a depolarisation of the neuronal membrane: a change in membrane potential greater than the threshold potential leads to the generation of an electrical impulse. The electrical impulse travels along the pre-synaptic neurone axon to synapse with the muscle at the neuromuscular junction (NMJ) to cause muscle contraction. [ 17 ] When the action potential reaches the axon terminal, it triggers the opening of the gated calcium ion channels , which causes the influx of Ca 2+ .
Ca 2+ will stimulate the release of neurotransmitter from the neurotransmitter-containing vesicles by exocytosis (the vesicle fuses with the pre-synaptic membrane). [ 17 ] The neurotransmitter acetylcholine (ACh) binds to the nicotinic receptors on the motor end plate, which is a specialised area of the muscle fibre's post-synaptic membrane. This binding causes the nicotinic receptor channels to open and allow the influx of Na + into the muscle fibre. [ 17 ] Fifty percent of the released ACh is hydrolysed by acetylcholinesterase (AChE) and the remainder binds to the nicotinic receptors on the motor end plate. When ACh is degraded by AChE, the receptors are no longer stimulated and the muscle cannot be depolarized. [ 17 ] If enough Na + enters the muscle fibre, it causes an increase in the membrane potential from its resting potential of −95 mV to −50 mV (above the threshold potential of −55 mV), which causes an action potential to spread throughout the fibre. This potential travels along the surface of the sarcolemma . The sarcolemma is an excitable membrane that surrounds the contractile structures known as myofibrils that are located deep in the muscle fibre. For the action potential to reach the myofibrils, it travels along the transverse tubules (T-tubules) that connect the sarcolemma and the center of the fibre. [ 17 ] Later, the action potential reaches the sarcoplasmic reticulum, which stores the Ca 2+ needed for muscle contraction, and causes Ca 2+ to be released from the sarcoplasmic reticulum. [ 17 ] Quaternary muscle relaxants bind to the nicotinic acetylcholine receptor and inhibit or interfere with the binding and effect of ACh at the receptor . Each ACh-receptor has two receptive sites, and activation of the receptor requires binding to both of them. Each receptor site is located at one of the two α-subunits of the receptor. Each receptive site has two subsites, an anionic site that binds to the cationic ammonium head and a site that binds to the blocking agent by donating a hydrogen bond . [ 5 ] Non-depolarizing agents: A decrease in binding of acetylcholine leads to a decrease in its effect, and neuron transmission to the muscle is less likely to occur. It is generally accepted that non-depolarizing agents block by acting as reversible competitive inhibitors . That is, they bind to the receptor as antagonists, and that leaves fewer receptors available for acetylcholine to bind. [ 5 ] [ 18 ] Depolarizing agents: Depolarizing agents produce their block by binding to and activating the ACh receptor, at first causing muscle contraction, then paralysis. They bind to the receptor and cause depolarization by opening channels just like acetylcholine does. This causes repetitive excitation that lasts longer than a normal acetylcholine excitation and is most likely explained by the resistance of depolarizing agents to the enzyme acetylcholinesterase . The constant depolarization and triggering of the receptors keeps the endplate resistant to activation by acetylcholine. Therefore, a normal neuron transmission to muscle cannot cause contraction of the muscle, because the endplate is depolarized and the muscle is thereby paralysed. [ 5 ] [ 18 ] Binding to the nicotinic receptor: Shorter molecules like acetylcholine need two molecules to activate the receptor, one at each receptive site. Decamethonium congeners, which prefer straight-line conformations (their lowest energy state), usually span the two receptive sites with one molecule (binding inter-site).
Longer congeners must bend when fitting receptive sites. The greater the energy a molecule needs to bend and fit, the lower its potency usually is. [ 19 ] The conformational study of neuromuscular blocking drugs is relatively new and developing. Traditional SAR studies do not account for environmental factors acting on the molecules. Computer-based conformational searches assume that the molecules are in vacuo , which is not the case in vivo . Solvation models take into account the effect of a solvent on the conformation of the molecule. However, no system of solvation can mimic the effect of the complex fluid composition of the body. [ 20 ] The division of muscle relaxants into rigid and non-rigid is at most qualitative. The energy required for conformational changes may give a more precise and quantitative picture. The energy required for reducing the onium head distance in the longer muscle relaxant chains may quantify their ability to bend and fit their receptive sites. [ 19 ] Using computers, it is possible to calculate the lowest-energy conformer, which is thus the most populated and the best representation of the molecule. This state is referred to as the global minimum. The global minimum for some simple molecules can be discovered quite easily with certainty. For decamethonium, for example, the straight-line conformer is clearly the lowest energy state. Some molecules, on the other hand, have many rotatable bonds and their global minimum can only be approximated. [ 20 ] Neuromuscular blocking agents need to fit in a space close to 2 nanometres, which resembles the molecular length of decamethonium. [ 19 ] Some molecules of decamethonium congeners may bind only to one receptive site. Flexible molecules have a greater chance of fitting receptive sites. However, the most populated conformation may not be the best-fitted one. Very flexible molecules are, in fact, weak neuromuscular inhibitors with flat dose-response curves. On the other hand, stiff or rigid molecules tend to fit well or not at all. If the lowest-energy conformation fits, the compound has high potency because there is a great concentration of molecules close to the lowest-energy conformation. Molecules can be thin yet rigid. [ 20 ] Decamethonium, for example, needs relatively high energy to change the N–N distance. [ 19 ] In general, molecular rigidity contributes to potency, while size affects whether a muscle relaxant shows a depolarizing or a non-depolarizing effect. [ 6 ] Cations must be able to flow through the trans-membrane tube of the ion-channel to depolarize the endplate. [ 20 ] Small molecules may be rigid and potent but unable to occupy or block the area between the receptive sites. [ 6 ] Large molecules, on the other hand, may bind to both receptive sites and hinder depolarizing cations independently of whether the ion-channel is open or closed below. Having a lipophilic surface pointed towards the synapse enhances this effect by repelling cations. The importance of this effect varies between different muscle relaxants, and classifying depolarizing from non-depolarizing blocks is a complex issue. The onium heads are usually kept small and the chains connecting the heads usually keep the N–N distance at 10 N or O atoms. Keeping this distance in mind, the structure of the chain can vary (double-bonded, cyclohexyl, benzyl, etc.). [ 20 ] Succinylcholine has a 10-atom distance between its N atoms, like decamethonium. Yet it has been reported that it takes two molecules, as with acetylcholine, to open one nicotinic ion channel .
The conformational explanation for this is that each acetylcholine moiety of succinylcholine prefers the gauche (bent, cis) state. The attraction between the N and O atoms is greater than the onium head repulsion. In this most populated state, the N-N distance is shorter than the optimal distance of ten carbon atoms and too short to occupy both receptive sites. This similarity between succinyl- and acetyl-choline also explains its acetylcholine-like side-effects. [ 20 ] Comparing molecular lengths, the pachycurares dimethyltubocurarine and d-tubocurarine both are very rigid and measure close to 1.8 nm in total length. Pancuronium and vecuronium measure 1.9 nm, whereas pipecuronium is 2.1 nm. The potency of these compounds follows the same rank of order as their length. Likewise, the leptocurares prefer a similar length. Decamethonium, which measures 2 nm, is the most potent in its category, whereas C11 is slightly too long. Gallamine despite having low bulk and rigidity is the most potent in its class, and it measures 1.9 nm. [ 6 ] [ 19 ] Based on this information one can conclude that the optimum length for neuromuscular blocking agents, depolarizing or not, should be 2 to 2.1 nm. [ 20 ] The CAR for long-chain bisquaternary tetrahydroisoquinolines like atracurium, cisatracurium, mivacurium, and doxacurium is hard to determine because of their bulky onium heads and large number of rotatable bonds and groups . These agents must follow the same receptive topology as others, which means that they do not fit between the receptive sites without bending. [ 19 ] Mivacurium for example has a molecular length of 3.6 nm when stretched out, far from the 2 to 2.1 nm optimum. Mivacurium, atracurium, and doxacurium have greater N-N distance and molecular length than d-tubocurarine even when bent. To make them fit, they have flexible connections that give their onium heads a chance to position themselves beneficially. This bent N-N scenario probably does not apply to laudexium and decamethylene bisatropium , which prefer a straight conformation. [ 20 ] It has been concluded that acetylcholine and related compounds must be in the gauche (bent) configuration when bound to the nicotinic receptor. [ 21 ] Beers and Reich's studies on cholinergic receptors in 1970 showed a relationship affecting whether a compound was muscarinic or nicotinic . They showed that the distance from the centre of the quaternary N atom to the van der Waals extension of the respective O atom (or an equivalent H-bond acceptor) is a determining factor. If the distance is 0.44 nm, the compound shows muscarinic properties—and if the distance is 0.59 nm, nicotinic properties dominate. [ 22 ] ) Pancuronium remains one of the few muscle relaxants logically and rationally designed from structure-action / effects relationship data. A steroid skeleton was chosen because of its appropriate size and rigidness. Acetylcholine moieties were inserted to increase receptor affinity. Although having many unwanted side-effects, a slow onset of action and recovery rate it was a big success and at the time the most potent neuromuscular drug available. Pancuronium and some other neuromuscular blocking agents block M2-receptors and therefore affect the vagus nerve , leading to hypotension and tachycardia . This muscarinic blocking effect is related to the acetylcholine moiety on the A ring on pancuronium. 
Making the N atom on the A ring tertiary, the ring loses its acetylcholine moiety, and the resulting compound, vecuronium, has nearly 100 times less affinity to muscarin receptors while maintaining its nicotinic affinity and a similar duration of action. Vecuronium is, therefore, free from cardiovascular effects. [ 5 ] The D ring shows excellent properties validating Beers and Reich's rule with great precision. As a result, vecuronium has the greatest potency and specificity of all mono-quaternary compounds. [ 20 ] Two functional groups contribute significantly to aminosteroidal neuromuscular blocking potency, it is presumed to enable them to bind the receptor at two points. A bis-quaternary two point arrangement on A and D-ring (binding inter-site) or a D-ring acetylcholine moiety (binding at two points intra-site) are most likely to succeed. A third group can have variable effects. [ 20 ] The quaternary and acetyl groups on the A and D ring of pipecuronium prevent it from binding intra-site (binding to two points at the same site). Instead, it must bind as bis-quaternary (inter-site). [ 6 ] These structures are very dissimilar from acetylcholine and free pipecuronium from nicotinic or muscarinic side-effects linked to acetylcholine moiety. Also, they protect the molecule from hydrolysis by cholinesterases, which explain its nature of kidney excretion. The four methyl-groups on the quaternary N atoms make it less lipophilic than most aminosteroids. This also affects pipecuroniums metabolism by resisting hepatic uptake, metabolism, and biliary excretion. The length of the molecule (2.1 nm, close to ideal) and its rigidness make pipecuronium the most potent and clean one-bulk bis-quaternary. Even though the N-N distance (1.6 nm) is far away from what is considered ideal, its onium heads are well-exposed, and the quaternary groups help to bring together the onium heads to the anionic centers of the receptors without chirality issues. [ 20 ] Adding more than two onium heads in general does not add to potency. Though the third onium head in gallamine seems to help position the two outside heads near the optimum molecular length, it can interfere unfavorably and gallamine turns out to be a weak muscle relaxant, like all multi-quaternary compounds. Considering acetylcholine a quaternizing group larger than methyl and an acyl group larger than acetyl would reduce the molecule's potency. The charged N and the carbonyl O atoms are distanced from structures they bind to on receptive sites and, thus, decrease potency. The carbonyl O in vecuronium for example is thrust outward to appose the H-bond donor of the receptive site. This also helps explain why gallamine, rocuronium, and rapacuronium are of relatively low potency. [ 20 ] In general, methyl quaternization is optimal for potency but, opposing this rule, the trimethyl derivatives of gallamine are of lower potency than gallamine. The reason for this is that gallamine has a suboptimal N-N distance. Substituting the ethyl groups with methyl groups would make the molecular length also shorter than optimal. Methoxylation of tetrahydroisoquinolinium agents seems to improve their potency. How methoxylation improves potency is still unclear. Histamine release is a common attribute of benzylisoquinolinium muscle relaxants. This problem generally decreases with increased potency and smaller doses. The need for larger doses increases the degree of this side-effect. Conformational or structural explanations for histamine release are not clear. 
[ 20 ] Metabolism and Hofmann elimination: Deacetylating vecuronium at position 3 results in a very active metabolite. [ 23 ] In the case of rapacuronium, the 3-deacylated metabolite is even more potent than rapacuronium. As long as the D-ring acetylcholine moiety is unchanged, they retain their muscle relaxing effect. Mono-quaternary aminosteroids produced by deacylation at position 17, on the other hand, are generally weak muscle relaxants. [ 20 ] In the development of atracurium, the main idea was to make use of Hofmann elimination of the muscle relaxant in vivo . When working with bisbenzyl-isoquinolinium types of molecules, inserting proper features into the molecule, such as an appropriate electron-withdrawing group, means that Hofmann elimination should occur under conditions in vivo . Atracurium, the resulting molecule , breaks down spontaneously in the body to inactive compounds and is especially useful in patients with kidney or liver failure. Cis-atracurium is very similar to atracurium, except that it is more potent and has a weaker tendency to cause histamine release. [ 5 ] Structure relations to onset time: The effect of structure on the onset of action is not very well known, except that the time of onset appears inversely related to potency. [ 24 ] In general, mono-quaternary aminosteroids are faster than bis-quaternary compounds, which means they are also of lower potency. A possible explanation for this effect is that drug delivery and receptor binding occur on different timescales. Weaker muscle relaxants are given in larger doses, so more molecules in the central compartment must diffuse into the effect compartment of the body, which is the space within the mouth of the receptor. After delivery to the effect compartment, all molecules act quickly. [ 25 ] Therapeutically, this relationship is very inconvenient because low potency, often meaning low specificity, can decrease the safety margin, thus increasing the chances of side-effects . In addition, even though low potency usually accelerates onset of action , it does not guarantee a fast onset. Gallamine , for example, is weak and slow. When fast onset is necessary, succinylcholine or rocuronium is usually preferable. [ 20 ] Elimination: Muscle relaxants can have very different metabolic pathways, and it is important that the drug does not accumulate if certain elimination pathways are not active, for example in kidney failure. Administration of neuromuscular blocking agents (NMBAs) during anesthesia can facilitate endotracheal intubation . [ 12 ] This can decrease the incidence of postintubation hoarseness and airway injury. [ 12 ] Short-acting neuromuscular blocking agents are chosen for endotracheal intubation for short procedures (< 30 minutes), and neuromonitoring is required soon after intubation. [ 12 ] Options include succinylcholine , rocuronium or vecuronium if sugammadex is available for rapid reversal of the block. [ 12 ] Any short- or intermediate-acting neuromuscular blocking agent can be used for endotracheal intubation for long procedures (≥ 30 minutes). [ 12 ] Options include succinylcholine, rocuronium, vecuronium, mivacurium , atracurium and cisatracurium . [ 12 ] The choice among these NMBAs depends on availability, cost and patient parameters that affect drug metabolism . Intraoperative relaxation can be maintained as necessary with additional doses of a nondepolarizing NMBA.
[ 12 ] Among all NMBAs, succinylcholine establishes the most stable and fastest intubating conditions and is thus considered the preferred NMBA for rapid sequence induction and intubation (RSII). [ 12 ] Alternatives to succinylcholine for RSII include high-dose rocuronium (1.2 mg/kg, which is a 4 × ED95 dose), or avoidance of NMBAs with a high-dose remifentanil intubation. [ 12 ] Nondepolarizing NMBAs can be used to induce muscle relaxation that improves surgical conditions in laparoscopic , robotic , abdominal and thoracic procedures. [ 12 ] They can reduce patient movement, muscle tone, and breathing or coughing against the ventilator, and allow lower insufflation pressures during laparoscopy. [ 12 ] Administration of NMBAs should be individualized according to the patient's parameters. However, many operations can be performed without the need for any NMBAs, as adequate anesthesia during surgery can achieve many of the theoretical benefits of neuromuscular blockade. [ 12 ] Since these drugs may cause paralysis of the diaphragm , mechanical ventilation should be at hand to provide respiration. In addition, these drugs may exhibit cardiovascular effects, since they are not fully selective for the nicotinic receptor and hence may have effects on muscarinic receptors . [ 11 ] If nicotinic receptors of the autonomic ganglia or adrenal medulla are blocked, these drugs may cause autonomic symptoms. Also, neuromuscular blockers may facilitate histamine release, which causes hypotension, flushing , and tachycardia. Succinylcholine may also trigger malignant hyperthermia in rare cases in patients who may be susceptible. In depolarizing the musculature, suxamethonium may trigger a transient release of large amounts of potassium from muscle fibers. This puts the patient at risk for life-threatening complications, such as hyperkalemia and cardiac arrhythmias . Other effects include myalgia , increased intragastric pressure, increased intraocular pressure, increased intracranial pressure, cardiac dysrhythmias ( bradycardia is the most common type) and allergic reactions . [ 12 ] As a result, it is contraindicated for patients with susceptibility to malignant hyperthermia, denervating conditions , major burns after 48 hours, and severe hyperkalemia. Nondepolarizing NMBAs, except for vecuronium, pipecuronium, doxacurium, cisatracurium, rocuronium and rapacuronium, produce a certain extent of cardiovascular effects. [ 14 ] Moreover, tubocurarine can produce a hypotensive effect, while pancuronium can lead to a moderate increase in heart rate and a small increase in cardiac output with little or no increase in systemic vascular resistance , which is unique among nondepolarizing NMBAs. [ 14 ] Certain drugs such as aminoglycoside antibiotics, polymyxin and some fluoroquinolones also have neuromuscular blocking action as a side-effect. [ 26 ] Some drugs enhance or inhibit the response to NMBAs, which requires dosage adjustment guided by monitoring. In some clinical circumstances, succinylcholine may be administered before and after a nondepolarising NMBA, or two different nondepolarising NMBAs may be administered in sequence. [ 12 ] Combining different NMBAs can result in different degrees of neuromuscular block, and management should be guided by the use of a neuromuscular function monitor . The administration of a nondepolarising neuromuscular blocking agent has an antagonistic effect on the subsequent depolarising block induced by succinylcholine.
[ 12 ] If a nondepolarising NMBA is administered prior to succinylcholine, the dose of succinylcholine must be increased. The effect of succinylcholine administration on a subsequent nondepolarising neuromuscular block depends on the drug used. Studies have shown that administration of succinylcholine before a nondepolarising NMBA does not affect the potency of mivacurium or rocuronium . [ 12 ] For vecuronium and cisatracurium , however, it speeds up the onset, increases the potency and prolongs the duration of action. [ 12 ] Combining two nondepolarising NMBAs of the same chemical class (e.g. rocuronium and vecuronium) produces an additive effect, while combining two nondepolarising NMBAs of different chemical classes (e.g. rocuronium and cisatracurium) produces a synergistic response. [ 12 ] Inhaled anesthetics inhibit nicotinic acetylcholine receptors (nAChRs) and potentiate neuromuscular blockade by nondepolarising NMBAs. [ 12 ] The degree of potentiation depends on the type of volatile anesthetic ( desflurane > sevoflurane > isoflurane > nitrous oxide ), the concentration and the duration of exposure. [ 12 ] Tetracycline , aminoglycosides , polymyxins and clindamycin potentiate neuromuscular blockade by inhibiting ACh release or by desensitisation of post-synaptic nAChRs to ACh. [ 12 ] This interaction happens mostly during maintenance of anesthesia. As antibiotics are typically given after a dose of NMBA, this interaction needs to be considered when re-dosing NMBAs. [ 12 ] Patients receiving chronic treatment are relatively resistant to nondepolarising NMBAs due to accelerated clearance . [ 12 ] Lithium is structurally similar to other cations such as sodium, potassium, magnesium and calcium; this causes lithium to activate potassium channels, which inhibits neuromuscular transmission. [ 12 ] Patients who take lithium can have a prolonged response to both depolarising and nondepolarising NMBAs. Sertraline and amitriptyline inhibit butyrylcholinesterase and cause prolonged paralysis . [ 12 ] Mivacurium causes prolonged paralysis in patients chronically taking sertraline. [ 12 ] Local anesthetics (LAs) may enhance the effects of depolarising and nondepolarising NMBAs through pre- and post-synaptic interactions at the NMJ. [ 12 ] Their administration may result in blood levels high enough to potentiate an NMBA-induced neuromuscular block. [ 12 ] Epidurally administered levobupivacaine and mepivacaine potentiate amino-steroidal NMBAs and delay recovery from neuromuscular blockade. [ 12 ] Methods for estimating the degree of neuromuscular block include evaluation of the muscular response to stimuli from surface electrodes, such as in the train-of-four test, wherein four such stimuli are given in rapid succession. With no neuromuscular blockade, the resultant muscle contractions are of equal strength, but in the case of neuromuscular blockade they gradually decrease in strength. [ 27 ] Such monitoring is recommended during the use of continuous-infusion neuromuscular blocking agents in intensive care . [ 28 ] The effect of non-depolarizing neuromuscular-blocking drugs may be reversed with acetylcholinesterase inhibitors , with neostigmine and edrophonium as commonly used examples. Of these, edrophonium has a faster onset of action than neostigmine, but it is unreliable when used to antagonize deep neuromuscular block.
[ 29 ] Acetylcholinesterase inhibitors increase the amount of acetylcholine in the neuromuscular junction, so a prerequisite for their effect is that the neuromuscular block is not complete, because if every acetylcholine receptor is blocked, it does not matter how much acetylcholine is present. Sugammadex is a newer drug for reversing neuromuscular block by rocuronium and vecuronium in general anaesthesia . It is the first selective relaxant binding agent (SRBA). [ 30 ] Curare is a crude extract from certain South American plants in the genera Strychnos and Chondrodendron , originally brought to Europe by explorers such as Walter Raleigh. [ 31 ] Edward Bancroft , a chemist and physician of the 18th century, brought samples of crude curare from South America back to the Old World. The effect of curare was experimented with by Sir Benjamin Brodie when he injected small animals with curare , and found that the animals stopped breathing but could be kept alive by inflating their lungs with bellows . This observation led to the conclusion that curare can paralyse the respiratory muscles. Curare was also experimented with by Charles Waterton in 1814, when he injected three donkeys with it. The first donkey was injected in the shoulder and died afterward. The second donkey had a tourniquet applied to the foreleg and was injected distal to the tourniquet. The donkey lived while the tourniquet was in place but died after it was removed. The third donkey, after being injected with curare, appeared to be dead but was resuscitated using bellows. Charles Waterton's experiment confirmed the paralytic effect of curare. Curare was known in the 19th century to have a paralysing effect , due in part to the studies of scientists like Claude Bernard . [ 32 ] D-tubocurarine, a mono-quaternary alkaloid, was isolated from Chondrodendron tomentosum in 1942, and it was shown to be the major constituent in curare responsible for producing the paralysing effect. At that time, it was known that curare and, therefore, d-tubocurarine worked at the neuromuscular junction . The isolation of tubocurarine and its marketing as the drug Intocostrin led to more research in the field of neuromuscular-blocking drugs. Scientists figured out that the potency of tubocurarine was related to the separation distance between the two quaternary ammonium heads. [ 9 ] [ 33 ] Neurologist Walter Freeman learned about curare and suggested to Richard Gill , a patient suffering from multiple sclerosis , that he try using it. Gill brought 25 pounds of raw curare from Ecuador. The raw curare was then given to Squibb and Sons to derive an effective antidote to curare. In 1942, Wintersteiner and Dutcher (two scientists working for Squibb and Sons) isolated the alkaloid d-tubocurarine . Soon after, they developed a preparation of curare called Intocostrin . At the same time in Montreal, Harold Randall Griffith and his resident Enid Johnson at the Homeopathic Hospital administered curare to a young patient undergoing appendectomy . This was the first use of an NMBA as a muscle relaxant in anesthesia. The 1940s, 1950s and 1960s saw the rapid development of several synthetic NMBAs. Gallamine was the first synthetic NMBA used clinically. Further research led to the development of synthesized molecules with different curariform effects, depending on the distance between the quaternary ammonium groups. One of the synthesized bis-quaternaries was decamethonium, a 10-carbon bis-quaternary compound.
Following research with decamethonium, scientists developed suxamethonium , which is a double acetylcholine molecule that was connected at the acetyl end. The discovery and development of suxamethonium led to a Nobel Prize in medicine in 1957. Suxamethonium showed a different blocking effect, in that its effect was achieved more quickly and it augmented a response in the muscle before block. Also, the effects of tubocurarine were known to be reversible by acetylcholinesterase inhibitors , whereas decamethonium and suxamethonium blocks were not reversible. [ 9 ] [ 5 ] Another compound, malouétine , a bis-quaternary steroid, was isolated from the plant Malouetia bequaertiana and showed curariform activity. This led to the synthetic drug pancuronium , a bis-quaternary steroid, and subsequently other drugs that had better pharmacological properties. [ 9 ] [ 34 ] Research on these molecules helped improve understanding of the physiology of neurons and receptors . Gallamine triethiodide was originally developed for preventing muscle contractions during surgical procedures. However, it is no longer marketed in the United States, according to the FDA Orange Book .
https://en.wikipedia.org/wiki/Neuromuscular-blocking_drug
The Neuronal cell cycle represents the life cycle of the biological cell, its creation, reproduction and eventual death. The process by which cells divide into two daughter cells is called mitosis . Once these cells are formed they enter G1, the phase in which many of the proteins needed to replicate DNA are made. After G1, the cells enter S phase during which the DNA is replicated. After S, the cell will enter G2 where the proteins required for mitosis to occur are synthesized. Unlike most cell types however, neurons are generally considered incapable of proliferating once they are differentiated, as they are in the adult nervous system . Nevertheless, it remains plausible that neurons may re-enter the cell cycle under certain circumstances. Sympathetic and cortical neurons, for example, try to reactivate the cell cycle when subjected to acute insults such as DNA damage, oxidative stress, and excitotoxicity. This process is referred to as “abortive cell cycle re-entry” because the cells usually die in the G1/S checkpoint before DNA has been replicated. Transitions through the cell cycle from one phase to the next are regulated by cyclins binding their respective cyclin dependent kinases (Cdks) which then activate the kinases (Fisher, 2012). During G1, cyclin D is synthesized and binds to Cdk4/6, which in turn phosphorylates retinoblastoma (Rb) protein and induces the release of the transcription factor E2F1 which is necessary for DNA replication (Liu et al., 1998). The G1/S transition is regulated by cyclin E binding to Cdk2 which phosphorylates Rb as well (Merrick and Fisher, 2011). S phase is then driven by the binding of cyclin A with Cdk2. In late S phase, cyclin A binds with Cdk1 to promote late replication origins and also initiates the condensation of the chromatin in the late G2 phase. The G2/M phase transition is regulated by the formation of the Cdk1/cyclin B complex. Inhibition through the cell cycle is maintained by cyclin-dependent kinase inhibitors (CKIs) of the Ink and Cip/Kip families which inhibit the cyclin/CDK complex. CDK4/6 is inhibited by p15Ink4b, p16Ink4a, p18Ink4c, and p19Ink4d. These inhibitors prevent the binding of CDK4/6 with cyclin D (Cánepa et al., 2007). The Cip/Kip families (p21Cip1, p27 Kip1 , and p57Kip2) also bind to cyclin/CDK complexes and prohibit advancement through the cell cycle. The cell cycle uses these CDKs and CKIs to regulate the cell cycle through checkpoints. These checkpoints ensure that the cell has completed all of the tasks of the current phase before they can gain entry into the next phase of the cycle. The criteria for the checkpoints are met through a combination of activating and inhibiting cyclin/CDK complexes as the result of different signaling pathways (Besson et al., 2008; Cánepa et al., 2007; Yasutis and Kozminski, 2013). If the criteria are not met, the cell will arrest in the phase prior to the checkpoint until the criteria are met. Progression through a checkpoint without having first met the appropriate criteria can lead to cell death (Fisher, 2012; Williams and Stoeber, 2012). It is believed that neurons are permanently blocked from the cell cycle once they differentiate. As a result, neurons are typically found outside of the cell cycle in a G0 state. It has been found that various genes that encode the G1/S transition, such as D1, Cdk4, Rb proteins, E2Fs, and CKIs, can be detected in different areas of a normal human brain (Frade and Ovejero-Benito, 2015). 
The presence of these core cell cycle factors can be explained through their role in neuronal migration, maturation, and synaptic plasticity (Frank and Tsai, 2009). However, it is also possible that, under certain conditions, these factors can induce cell cycle re-entry. Under conditions such as DNA damage, oxidative stress, and activity withdrawal, these factors have been shown to be upregulated; however, the cells usually die at the G1/S checkpoint before DNA has been replicated (Park et al., 1998). The process by which the cell re-enters the cell cycle and dies is called "abortive cell cycle re-entry" and is characterized by the upregulation of cyclin D-Cdk4/6 and downregulation of E2F, followed by cell death (Frade and Ovejero-Benito, 2015). In cerebellar granule cells and cortical neurons, E2F1 can trigger neuronal apoptosis through activation of Bax/caspase-3 and the induction of the Cdk1/FOXO1/Bad pathway (Giovanni et al., 2000). The downregulation of p130/E2F4 (a complex which has been shown to maintain the postmitotic nature of neurons) induces neuronal apoptosis by upregulating B-myb and C-myb (Liu et al., 2005).

Tetraploid neurons (neurons with 4C DNA content) are not restricted to retinal neurons: 10% of human cortical neurons have a DNA content higher than 2C (Frade and Ovejero-Benito, 2015). Typically, differentiated neurons that replicate their DNA die. However, this is not always the case, as exhibited by sensory and sympathetic neurons, which are able to replicate their DNA without neuronal death (Smith et al., 2000). Neurons that are Rb-deficient have also been found to re-enter the cell cycle and survive in a 4C DNA state (Lipinski et al., 2001). Duplication of DNA can lead to neuronal diversification in vertebrates, as seen in observations in the developing chick retina. These neurons re-enter the cell cycle as they travel to the ganglion cell layer when they are activated by p75NTR. They are unable to enter mitosis and remain in a 4C DNA content state. Cell cycle re-entry induced by p75NTR is not dependent on Cdk4/6 (Morillo et al., 2012) and therefore differs from that of other cell types that re-enter the cell cycle. In retinal ganglion cells, p75NTR signaling is mediated by p38MAPK, which then phosphorylates E2F4 before the cell progresses through the cell cycle. Tetraploid neurons in mice are generated in a p75NTR-dependent manner in cells that contain Rb during their migration to their differentiated neuronal layers (Morillo et al., 2012). It is still unknown why these neurons are able to pass through the G1/S checkpoint without inducing apoptosis through E2F1.

Cell cycle re-entry usually causes apoptosis. However, in some neurodegenerative diseases, re-entry into the cell cycle occurs, and the neurons that re-enter the cell cycle are much more likely to undergo apoptosis and contribute to the disease phenotypes. In Alzheimer's disease, affected neurons show signs of DNA replication such as phosphorylated Mcm2, as well as the cell cycle regulators cyclin D, Cdk4, phosphorylated Rb, E2F1, and cyclin E. Not much is currently known about the direct mechanism by which the cell cycle is reactivated; however, it is possible that MiR26b may regulate the activation of cell cycle progression by upregulating cyclin E1 and downregulating p27Kip1 (Busser et al., 1998; Yang et al., 2003). Neurons affected by Alzheimer's disease rarely enter mitosis and, if they do not, can survive for long periods of time in a tetraploid state.
These neurons are able to enter the S phase and replicate their DNA; however, they become blocked in the G2 state. In both affected and unaffected tetraploid neurons, during development and during the progression of the disease, passing the G2/M checkpoint leads to cell death. This suggests that the G2/M checkpoint aids in the survival of tetraploid neurons, which is supported by experiments in which the G2/M checkpoint was removed through the addition of brain-derived neurotrophic factor (BDNF) blockers to tetraploid cells, resulting in cell death. BDNF prevents the G2/M transition through its receptor TrkB and its capacity to decrease cyclin B and Cdk1. The mechanism by which neurons undergo apoptosis after the G2/M transition is not yet fully understood, but it is known that Cdk1 can activate the pro-apoptotic factor Bad by phosphorylating its Ser128 (Frade, 2000).

Interkinetic nuclear migration (INM) is a feature of developing neuroepithelia and is characterized by the periodic movement of the cell's nucleus with the progression of the cell cycle. Developing neuroepithelia are tissues composed of neural progenitor cells, each spanning the entire thickness of the epithelium from the ventricular surface to the laminal side. Cell nuclei occupy different positions along the apical-basal axis of the tissue. S phase occurs close to the basal side, whereas mitosis occurs exclusively close to the ventricular, apical side. The nuclei then move to upper regions near the basal side, where they proceed through S phase. This nuclear movement is repeated at each cell cycle and is maintained by an apical-to-basal migration during G1 phase and a reverse basal-to-apical movement during G2 phase. It was proposed that the INM maximizes the number of mitotic events in the limited space and that, since neuronal progenitors have a basal body, they need to move their nucleus to the apical side in order to assemble the mitotic spindle used in mitosis. It has been reported that the INM is not required for the cell cycle, since removing the INM does not change the length of the cell cycle. Interestingly, blocking or delaying the cell cycle results in the arrest or reduction of the INM, respectively. Nuclear migration is therefore not necessary for cell cycle regulation; however, cell cycle regulators have tight control over the INM (Del Bene, 2011).
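The checkpoint logic summarized above, in which a cyclin/Cdk complex must be active for the cell to pass from one phase to the next while CKIs oppose it, can be pictured with a small toy model. The following Python sketch is purely illustrative and is not drawn from the cited studies; the phase names and cyclin/Cdk pairings follow the description above, while the data structures, function names and example calls are hypothetical.

```python
# Toy model of cyclin/Cdk-gated phase transitions (illustrative only).

# Cyclin/Cdk complex whose activity gates each transition, per the text above.
TRANSITIONS = {
    ("G1", "S"): "cyclin E/Cdk2",   # G1/S checkpoint
    ("S", "G2"): "cyclin A/Cdk2",   # S-phase progression
    ("G2", "M"): "cyclin B/Cdk1",   # G2/M checkpoint
}

# A few CKIs named in the text and the complexes they inhibit (simplified).
CKI_TARGETS = {
    "p16Ink4a": "cyclin D/Cdk4-6",
    "p27Kip1": "cyclin E/Cdk2",
    "p21Cip1": "cyclin E/Cdk2",
}

def _successor(phase):
    order = ["G1", "S", "G2", "M"]
    return order[order.index(phase) + 1] if phase in order[:-1] else phase

def next_phase(phase, active_complexes, active_ckis=()):
    """Advance one phase if the gating complex is active and not inhibited;
    otherwise arrest in the current phase, as at an unmet checkpoint."""
    inhibited = {CKI_TARGETS[c] for c in active_ckis if c in CKI_TARGETS}
    required = TRANSITIONS.get((phase, _successor(phase)))
    if required and required in set(active_complexes) - inhibited:
        return _successor(phase)
    return phase  # criteria not met: the cell arrests before the checkpoint

# A resting neuron with no active cyclin/Cdk complexes stays in G1;
# active cyclin E/Cdk2 passes the G1/S checkpoint unless p27Kip1 blocks it.
print(next_phase("G1", []))                              # G1 (arrest)
print(next_phase("G1", ["cyclin E/Cdk2"]))               # S
print(next_phase("G1", ["cyclin E/Cdk2"], ["p27Kip1"]))  # G1
```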
https://en.wikipedia.org/wiki/Neuronal_cell_cycle
A neuronal lineage marker is an endogenous tag that is expressed in different cells along neurogenesis and in differentiated cells such as neurons . It allows detection and identification of cells by using different techniques. A neuronal lineage marker can be either DNA, mRNA or RNA expressed in a cell of interest. It can also be a protein tag , such as a partial protein, a protein or an epitope that discriminates between different cell types or different states of a common cell. An ideal marker is specific to a given cell type in normal conditions and/or during injury. Cell markers are very valuable tools for examining the function of cells in normal conditions as well as during disease. The discovery of various proteins specific to certain cells led to the production of cell-type-specific antibodies that have been used to identify cells. [ 1 ] The techniques used for their detection can be immunohistochemistry , immunocytochemistry , methods that utilize transcriptional modulators and site-specific recombinases to label specific neuronal populations, [ 2 ] in situ hybridization or fluorescence in situ hybridization (FISH). [ 3 ] A neuronal lineage marker can be a neuronal antigen that is recognized by an autoantibody, for example Hu , which is highly restricted to neuronal nuclei. By immunohistochemistry, anti-Hu stains the nuclei of neurons. [ 4 ] To localize mRNA in brain tissue, one can use a fragment of DNA or RNA as a neuronal lineage marker: a hybridization probe that detects the presence of nucleotide sequences complementary to the sequence in the probe. This technique is known as in situ hybridization . It has been applied in many different tissues, but it is particularly useful in neuroscience . Using this technique, it is possible to localize gene expression to specific cell types in specific regions and observe how changes in this distribution occur throughout development and correlate with behavioral manipulations. [ 5 ] Immunohistochemistry is the staple methodology for identifying neuronal cell types, since it is relatively low in cost and a wide range of immunohistochemical markers are available to help distinguish the phenotype of cells in the brain; however, producing a good antibody can be time-consuming. [ 6 ] Therefore, one of the most convenient methods for the rapid assessment of the expression of a cloned ion channel can be in situ hybridization histochemistry. After cells are isolated from tissue or differentiated from pluripotent precursors, the resulting population needs to be characterized to confirm whether the target population has been obtained. Depending on the goal of a particular study, one can use neural stem cell markers, neural progenitor cell markers, neuron markers or PNS neuronal markers.

The study of the nervous system dates back to ancient Egypt, but only in the nineteenth century did it become more detailed. With the invention of the microscope and a staining technique developed by Camillo Golgi , it became possible to study individual neurons. Golgi began to impregnate nervous tissue with metals such as silver. The reaction consists of fixing particles of silver chromate to the neurilemma and results in a stark black deposit in the soma, axon and dendrites of the neuron. Thus, it became possible to identify different types of neurons, such as the Golgi cell , Golgi I and Golgi II . [ 7 ] In 1885, the German medical researcher Franz Nissl developed another staining technique, now known as Nissl staining .
This technique is slightly different from Golgi staining, since it stains the cell body and the endoplasmic reticulum. [ 8 ] In 1887, the Spanish scientist Santiago Ramón y Cajal learned the Golgi staining technique and began his famous work in neuroanatomy . [ 9 ] With this technique he made an extensive study of several areas of the brain in different species. He also described very precisely the Purkinje cells , the chick cerebellum and the neuronal circuit of the rodent hippocampus . In 1941, Dr. Albert Coons used for the first time a revolutionary technique based on the principle of antibodies binding specifically to antigens in tissues: he created an immunofluorescent technique for labelling the antibodies. [ 10 ] This technique continues to be widely used in neuroscience for identifying different structures. Among the most important neural markers used nowadays are antibodies against GFAP, Nestin, NeuroD and others. [ 11 ] In recent years, new neural markers for immunocytochemistry and/or immunohistochemistry have continued to be developed. In 1953, Heinrich Klüver invented a new staining technique, the Luxol Fast Blue (LFB) stain, with which it is possible to detect demyelination in the central nervous system: the myelin sheath is stained blue, although other structures are stained as well. The next revolutionary technique was invented in 1969 by the American scientist Joseph G. Gall . [ 12 ] This technique, called in situ hybridization , is used in a large variety of studies but mainly in developmental biology. With this technique it is possible to mark genes expressed in particular areas of the animal. In neurobiology , it is very useful for understanding the formation of the nervous system, and it is one of the most powerful techniques for marking cells. The method consists of hybridizing a labeled complementary DNA or RNA strand to a specific DNA or RNA in the tissue. This hybridization reveals the location of a specific mRNA , giving information about the physiological processes of organization, regulation and function of the genes. [ 13 ] Using this technique, it is possible to determine which genes and proteins underlie a certain process, such as the formation of the neural crest or a specific behavior, and where those genes are located. One can also see how changes in the distribution of these genes affect the development of a tissue, and correlate this with behavioral manipulations. [ 13 ] Some examples are the use of digoxigenin- or fluorophore-conjugated oligonucleotide probes for the detection of localized mRNAs in dendrites, spines, axons, and growth cones of cultured neurons, or digoxigenin-labeled RNA probes with fluorescence tyramide amplification for the detection of less abundant mRNAs localized to dendrites in vivo. [ 14 ] These examples use FISH (fluorescence in situ hybridization). With this technique, physiological processes and neurological diseases can be better understood. Immunohistochemistry is a technique that uses antibodies with fluorescent staining tags that target a specific antigen present in a certain protein. This high specificity makes it possible to localize peptidergic and classical transmitter compounds, their synthetic enzymes and other cell-specific antigens in neuronal tissue.
An example of the application of this technique in neuroscience is the immunolabeling of antigens such as NGF-Inducible Large External glycoprotein ( NILE-GF ), choline acetyltransferase , parvalbumin , and neurofilament protein . [ 15 ] All of these antigens are present in specific neuronal cell types. With these, anatomical circuits can be defined with a high degree of resolution, and the role and location of particular proteins and cells in the nervous system can be understood. Although this is a very powerful technique, there are some drawbacks: the procedure is non-quantitative in nature and can produce both false positives and false negatives. [ 16 ] Immunocytochemistry uses the same method as immunohistochemistry, but it is applied to isolated cells in culture rather than to tissue. The results are comparable but with higher resolution, since only a single cell is examined.

Neural stem cells are an example of somatic stem cells found in various tissues, both during development and in the adult. They have two fundamental characteristics: they are self-renewing and, upon terminal division and differentiation, they can give rise to the full range of cell classes within the relevant tissue. Hence, a neural stem cell can give rise to another neural stem cell, or to any of the differentiated cell types found in the central and peripheral nervous systems (inhibitory and excitatory neurons , astrocytes and oligodendrocytes ). [ 17 ] The standard method of isolating neural stem cells in vitro is the neurosphere culture system, the method originally used to identify NSCs. After some proliferation, the cells are induced to differentiate either by withdrawing the mitogens or by exposing the cells to another factor that induces some of the cells to develop into different lineages. [ 18 ] Cellular fates are analysed by staining with antibodies directed against antigens specific for astrocytes, oligodendrocytes, and neurons. In some cases, cells are plated at low density and monitored to determine whether a single cell can give rise to the three phenotypes. [ 19 ] Immunomagnetic cell separation strategies using antibodies directed against cell surface markers present on stem cells, progenitors and mature CNS cells have been applied to the study of NSCs. Other, non-immunological methods have been used to identify populations of cells from normal and tumorigenic CNS tissues which demonstrate some of the in vitro properties of stem cells, including high aldehyde dehydrogenase (ALDH) enzyme activity. ALDH cells from embryonic rat and mouse CNS have been isolated and shown to have the ability to generate neurospheres, neurons, astrocytes and oligodendrocytes in vitro, as well as neurons in vivo when transplanted into the adult mouse cerebral cortex. [ 18 ] Once a stem cell divides asymmetrically, the more mature progenitor is born and migrates to regions of differentiation. As the progenitor migrates, it matures further until it reaches a site where it stops and either becomes quiescent or fully differentiates into a functioning cell. The major obstacle to identifying and discovering markers that define a stem cell is that the most primitive cells are probably in a quiescent state and do not express many unique antigens. Thus, as in other fields such as haematopoiesis , a combination of positive and negative markers will be required to better define the central nervous system stem cell.
[ 19 ] Nonetheless, changes in the expression levels of specific molecules can be used to indicate the presence of neural stem cells in studies focused on further differentiation toward specific neural lineages. Usual markers for neural stem cells include Nestin and SOX2 . Nestin is expressed predominantly in stem cells of the central nervous system (CNS), and its expression is absent from nearly all mature CNS cells; it is therefore an efficient marker for neural stem cells. [ 20 ] During neurogenesis , Sox2 is expressed throughout developing cells in the neural tube as well as in proliferating CNS progenitors, and hence is thought to be centrally important for neural stem cell proliferation and differentiation. [ 20 ] In addition to intracellular molecules, products are available to study proteins expressed at the cell surface, including ABCG2 , FGF R4 , and Frizzled-9 . The differentiation of neural stem cells is controlled, in a context-dependent manner, by intrinsic factors and extracellular signalling molecules that act as positive or negative regulators and that can be used as markers. [ 21 ]

A neural progenitor cell is distinct from a neural stem cell in that it is incapable of continuous self-renewal and usually has the capacity to give rise to only one class of differentiated progeny. They are tripotent cells which can give rise to neurons, astrocytes and oligodendrocytes. An oligodendroglial progenitor cell, for example, gives rise to oligodendrocytes until its mitotic capacity is exhausted. [ 17 ] Some neural progenitor markers are capable of tracking cells as they undergo expansion and differentiation from rosettes to neurons. The neural rosette is the developmental signature of neuroprogenitors in cultures of differentiating embryonic stem cells; rosettes are radial arrangements of columnar cells that express many of the proteins expressed in neuroepithelial cells in the neural tube. It has been shown that cells within rosettes express multiple cell markers, including, among others, Nestin , NCAM and Musashi-1 , an RNA-binding protein that is expressed in proliferating neural stem cells. [ 22 ] Neuroepithelial progenitors (NEP) are responsible for neurogenesis in the neural tube and also give rise to two other types of neural progenitor cell, radial glia and basal progenitors. Radial glia are the dominant progenitor cell type in the developing brain, whereas basal progenitors are specifically located at the subventricular zone (SVZ) in the developing telencephalon . Although functional studies of radial glia are increasing, it is difficult to distinguish them from neuroprogenitors and astrocytes. Like neuroprogenitors, radial glia express the intermediate filament protein nestin as well as the transcription factor PAX6 , which is expressed in some neuroprogenitors in the ventral half of the neural tube. Radial glia also express proteins characteristic of astrocytes, including the widely used glial fibrillary acidic protein ( GFAP ), among others. Cytological markers that might be unique to radial glia include modified forms of nestin identified by the RC1 and RC2 antibodies, which recognize the murine antigens. [ 23 ]

Markers can detect neurons at different stages of development through nuclear, cytoplasmic, membrane or perisynaptic products present in neurons. It is also possible to specifically label cholinergic , dopaminergic , serotonergic , GABAergic or glutamatergic neurons.
[ 24 ] Pan-neuronal markers have multiple targets (somatic, nuclear, dendritic, spine and axonal proteins) and consequently label across all parts of the neuron. They are used to study neuronal morphology, although there are specific markers that label particular regions of the neuron. [ 25 ] Doublecortin (DCX) is a microtubule-associated protein that is widely expressed in the soma and leading processes of migrating neurons and in the axons of differentiating neurons. Its expression is downregulated with maturation. [ 26 ] Neuron-specific Class III β-tubulin (TuJ1) is present in newly generated immature postmitotic neurons, in differentiated neurons and in some mitotically active neuronal precursors. [ 11 ] Microtubule-associated protein 2 (MAP-2) is a cytoskeletal protein. Its expression is weak in neuronal precursors but increases during neuron development. In general, its expression is confined to neurons and reactive astrocytes. [ 15 ] Neuron-specific enolase (NSE), also called gamma-enolase or enolase 2, is a cytosolic protein that is expressed in mature neurons. NSE levels increase during neuronal development, reaching higher levels in later stages. It can be expressed in glial cells during oligodendrocyte differentiation at the same levels found in neuron cultures, but it is repressed when the cells become mature. In pathological conditions, glial neoplasms and reactive glial cells have also been reported to express this marker. [ 15 ] Calretinin is widely distributed in different neuronal populations of the vertebrate retina and is a valuable marker for immature postmitotic neurons. [ 11 ] Neuronal Nuclei antigen ( NeuN ), or Fox-3, is a nuclear protein present in postmitotic cells at the point of differentiation into mature cells. [ 27 ] It can be used to detect almost all neuronal cell types except Purkinje cells, olfactory bulb mitral cells, retinal photoreceptors and dopaminergic neurons in the substantia nigra . [ 15 ] Calbindin is expressed by cerebellar Purkinje cells and granule cells of the hippocampus. [ 11 ] The reorganization and migration of calbindin-stained Purkinje neurons in the rat cerebellum after peripheral nerve injury suggest that calbindin may be a marker for immature postmitotic neurons, similar to calretinin. [ 28 ] Tyrosine hydroxylase (TH) is an enzyme involved in the synthesis of dopamine and norepinephrine . Generally, it is used as a marker for dopaminergic neurons, but it can also be found in some forebrain neurons which make norepinephrine (the product of dopamine and the enzyme dopamine β-hydroxylase). [ 29 ] Choline acetyltransferase (ChAT) is expressed in cholinergic neurons of both the CNS and PNS. In the CNS, ChAT is expressed in motor neurons and pre-ganglionic autonomic neurons of the spinal cord, in a subset of neurons in the neostriatum and in the basal forebrain . In the PNS, on the other hand, it is present in a small group of sympathetic neurons and in all parasympathetic neurons. [ 30 ] GABA is a mature neuronal marker expressed in GABAergic interneurons (inhibitory neurons, which are generally interneurons in the brain). GAD65/67 are two enzymes involved in GABA synthesis by GABAergic interneurons. [ 29 ] Amyloid precursor protein (APP), the central protein in Alzheimer's disease, is expressed differently during neuron differentiation. The expression of APP isoforms 695 and 714 is up-regulated in well-differentiated postmitotic neuronal cells, while APP isoform 770 is down-regulated.
These dynamics provide valuable insights into the differentiation state of neurons, aiding in the identification and study of neuronal development and maturation stages. [ 31 ] Neuronal lineage markers can be used in clinical research to identify diseased cells and/or in repair process. Since selective degeneration of functional neurons is associated with the pathogenesis of neurodegenerative disorders , such as degeneration of midbrain dopaminergic neurons in Parkinson's disease , forebrain cholinergic neurons in Alzheimer's disease and cortical GABAergic neurons in schizophrenia , markers of neuronal cell phenotype are of particular interest because of their utility in understanding pathology of clinical disease. [ 32 ] There are two key markers in these studies: choline acetyltransferase and tyrosine hydroxylase . Choline acetyltransferase (ChAT) is an enzyme responsible for catalyzing the synthesis of acetylcholine, and is expressed in the majority of cholinergic neurons. Hence, ChAT immunoreactivity is used to detect cognitive decline in several neurodegenerative disorders. [ 15 ] In motor regions, sensory cortex and in the basal forebrain these immunolabeling has been applied to evaluate disruptions in cholinergic neurons of the ChAT fiber network and also for overall morphology. [ 33 ] The Tyrosine hydroxylase (TH) immunolabeling has been very useful for Parkinson's disease investigation. It is used to determine the quantity of dopaminergic cell loss in Parkinson's patients. ABCG2 ; NeuroD1 ; ASCL1/Mash1 ; Noggin ; Beta-catenin ; Notch-1 ; Notch-2 ; Brg1 ; Nrf2 ; N-Cadherin ; Nucleostemin ; Calcitonin R ; Numb ; CD15 / Lewis X ; Otx2 ; CDCP1 ; Pax3 ; COUP-TF I/NR2F1 ; Pax6 ; CXCR4 ; PDGF R alpha ; FABP7 / B-FABP ; PKC zeta ; FABP 8 /M-FABP; Prominin-2 ; FGFR2 ; ROR2 ; FGFR4 ; RUNX1/CBFA2 ; FoxD3 ; RXR alpha/NR2B1 ; Frizzled-9 ; sFRP-2 ; GATA-2 ; SLAIN 1 ; GCNF / NR6A1 ; SOX1 ; GFAP ; SOX2 ; Glut1 ; SOX9 ; HOXB1 ; SOX11 ; ID2 ; SOX21 ; Meteorin ; SSEA-1 ; MSX1 ; TRAF-4 ; Musashi-1 ; Vimentin ; Musashi-2 ; ZIC1 ; Nestin A2B5 ; AP-2 Alpha ; ATPase Na+/K+ transporting alpha 1 ; Activin RIIA ; Brg1 ; CD168 / RHAMM ; CD4 ; Doublecortin / DCX ; Frizzled 4 / CD344 ; GAP43 ; Jagged1 ; Laminin ; MSX1 /HOX7; Mash1 ; Musashi-1 ; Nestin ; Netrin-1 ; Netrin-4 ; Neuritin ; NeuroD1 ; Neurofilament alpha-internexin/NF66 ; Notch1 ; Notch2 ; Notch3 ; Nucleostemin ; Otx2 ; PAX3 ; S100B ; SOX2 ; Semaphorin 3C ; Semaphorin 6A ; Semaphorin 6B ; Semaphorin 7A ; TROY / TNFRSF19 ; Tubulin βII ; Tuj 1 ; Vimentin ATOH1 / MATH1 ; ASH1 / MASH1 ; HES5 ; HuC /Hu; HuD ; Internexin α ; L1 neural adhesion molecule ; MAP1B /MAP5; MAP2A; MAP2B; Nerve Growth Factor Rec/ NGFR ; Nestin ; NeuroD ; Neurofilament L 68 kDa; Neuron Specific Enolase /NSE; NeuN ; Nkx-2.2 /NK-2; Noggin ; Pax-6 ; PSA-NCAM ; Tbr1 ; Tbr2 ; Tubulin βIII ; TUC-4 ; Tyrosine hydroxylase /TH Calbindin; Calretinin; Collapsin Response Mediated Protein 1 / CRMP1 ; Collapsin Response Mediated Protein 2 /CRMP2; Collapsin Response Mediated Protein 5 /CRMP5; Contactin-1 ; Cysteine-rich motor neuron 1/ CRIM1 ; c-Ret phosphor Serine 696; Doublecortin / DCX ; Ephrin A2 ; Ephrin A4; Ephrin A5 ; Ephrin B1; Ephrin B2; GAP-43 ; HuC; HuD ; Internexin alpha ; Laminin-1 ; LINGO-1; MAP1B /MAP5; Mical-3; NAP-22; NGFR ; Nestin ; Netrin-1 ; Neuropilin ; Plexin-A1 ; RanBPM; Semaphorin 3A; Semaphorin 3F; Semaphorin 4D ; Slit2; Slit3; Staufen ; Tbr 1 ; Tbr 2; Trk A ; Tubulin βIII; TUC-4 NeuN ; NF-L; NF-M; GAD ; TH ; PSD-95 ; Synaptophysin ; VAMP ; ZENON; Up-regulated 
Amyloid-beta precursor protein isoform 695 & isoform 714; Down-regulated Amyloid-beta precursor protein isoform 770; Neuron Specific Enolase ; MAP2 ; MAPT ; ChAT / choline acetyltransferase ; Chox10; En1 ; Even-skipped/Eve; Evx1 ; Evx2; Fibroblast growth factor-1 / FGF1 ; HB9; Isl1; Isl2; Lim3; Nkx6; p75 neurotrophin receptor; REG2; Sim1; SMI32; Zfh1 4.1G; Acetylcholinesterase ; Ack1; AMPA Receptor Binding Protein/ABP; ARG3.1; Arp2; E-Cadherin; N-Cadherin; Calcyon ; Catenin alpha and beta; Caveolin ; CHAPSYN-110/PSD93; Chromogranin A ; Clathrin light chain; Cofilin ; Complexin 1/ CPLX1 /Synaphin 2; Contactin-1; CRIPT ; Cysteine String Protein/CSP; Dynamin 1 ; Dymanin 2; Flotillin-1; Fodrin; GRASP ; GRIP1 ; Homer; Mint-1; Munc-18 ; NSF; PICK1 ; PSD-95 ; RAB4; Rabphillin 3A; SAD A; SAD B; SAP-102; SHANK1a; SNAP-25 ; Snapin; Spinophilin /Neurabin-1; Stargazin ; Striatin; SYG-1; Synaptic Vesicle Protein 2A; Synaptic Vesicle Protein 2B; Synapsin 1 ; Synaptobrevin /VAMP; Synaptojanin 1; Synaptophysin ; Synaptotagmin ; synGAP; Synphilin-1; Syntaxin 1; Syntaxin 2; Syntaxin 3; Syntaxin 4; Synuclein alpha; VAMP-2; Vesicular Acetylcholine Transporter/VAChT; Vesicular GABA transporter /VGAT/VIAAT; Vesicular Glutamate Transporter 1, 2, 3/VGLUT; Vesicular monoamine transporter 1 , 2 Acetylcholine/ACh ; Acetylcholinesterase ; Choline Acetyltransferase/ChAT ; Choline transporter ; Vesicular Acetylcholine Transporter/VAChT Adrenaline ; Dopamine ; Dopamine Beta Hydroxylase/DBH ; Dopamine Transporter/DAT ; L-DOPA ; Nitric Oxide-Dopamine; Norepinephrine ; Norepinephrine Transporter/NET ; Parkin; Tyrosine Hydroxylase/TH ; TorsinA DL-5-Hydroxytryptophan; Serotonin ; Serotonin Transporter/SERT ; Tryptophan Hydroxylase DARPP-32 ; GABA ; GABA Transporters 1 ; GABA Transporters 2; GABA Transporters 3; Glutamate Decarboxylase/GAD ; Vesicular GABA transporter/VGAT/VIAAT Glutamate ; Glutamate Transporter ; Glutamine ; Glutamine Synthetase ; Vesicular Glutamate Transporter 1 ; Vesicular Glutamate Transporter 2; Vesicular Glutamate Transporter 3
https://en.wikipedia.org/wiki/Neuronal_lineage_marker
Neuronal self-avoidance , or isoneural avoidance , is an important property of neurons that consists of the tendency of branches ( dendrites and axons ) arising from a single soma (also called isoneuronal or sister branches) to turn away from one another. The arrangements of branches within neuronal arbors are established during development and result in minimal crossing or overlap [ 1 ] as they spread over a territory, resulting in the typical fasciculated morphology of neurons (Fig 1). In contrast, branches from different neurons can overlap freely with one another. This property requires that neurons be able to discriminate "self", which they avoid, from "non-self" branches, with which they coexist. [ 2 ] This neuronal self-recognition is attained through families of cell recognition molecules which work as individual barcodes, allowing the discrimination of any other nearby branch as either "self" or "non-self". [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Self-avoidance ensures that dendritic territories are covered completely and yet non-redundantly, [ 8 ] guaranteeing that branches achieve functionally appropriate coverage of input or output territories. [ 9 ] Neuronal communication requires the coordinated assembly of axons, dendrites, and synapses . [ 10 ] Therefore, self-avoidance is necessary for proper neuronal wiring and postnatal development and, together with neuronal tiling (heteroneuronal avoidance), is a crucial spacing mechanism for patterning neural circuits that results in complete and nonredundant innervation of sensory or synaptic space. [ 11 ]

The concept of neuronal self-avoidance emerged about 50 years ago. The pioneering studies were performed in the leech, focusing on the central nervous system and developing mechanosensory neurons. Leeches of two species, Hirudo medicinalis and Haementeria ghilianii , remained the main organisms for the study of neuronal self-recognition and self-avoidance. In these animals, the repeating segmental pattern of the nervous system, along with the fact that neurons are relatively few in number and many are large enough to be recognized, [ 12 ] allowed the experimental study of the general problem of neuronal specificity. In 1968, through the mapping of mechanoreceptor axonal receptive fields in H. medicinalis , Nicholls and Baylor [ 12 ] revealed distinct types of boundaries between axons from the same or different types of neurons, and also between individual neurons. They observed that receptive fields were subdivided into discrete areas, innervated by the different branches of a single cell. These boundaries, unlike those between adjacent fields of different cells, were abrupt, showing nearly no overlap. The authors then suggested a mechanism for the spatial arrangement of axons in which "a fiber might repel other branches more strongly if they arise from the same cell than if they come from a homologue, and not at all if they come from a cell with a different modality" . In 1976, Yau [ 13 ] confirmed their findings and proposed that the branches of a cell recognize each other, thereby avoiding growing into the same territory and establishing the discrete areas that Nicholls and Baylor observed. It was then clear that leech mechanosensory neurons show self-avoidance, with repulsion between branches originating from the same cell, but not class-avoidance, meaning that branches from the same type of neuron could overlap.
The phenomenon was recognized, but much remained unknown; even the term "self-avoidance" arose only in 1982/1983 with the studies of Kramer. In 1982, Kramer [ 14 ] postulated that isoneuronal axons (axons growing from the same neuron), in contrast to heteroneuronal axons, avoid each other when growing on the same substrate (see Movie). Other authors further explored the fact that this self-avoidance would require neurites to be able to distinguish between self and non-self, reinforcing the ideas of Yau. In 1983, Kramer and Kuwada [ 2 ] proposed that this self-recognition of two growing axonal processes might be mediated by their filopodia , which appear to make mutual contacts. This idea was backed up by the studies of Goodman et al. (1982) [ 15 ] in insect neurons, which postulated that filopodia played an important role in the recognition and choice of axonal growth pathways. The conservation of the mechanism in invertebrates, together with the fact that the adult morphology of many neurons appears to satisfy the rule, suggested that non-overlap of isoneuronal processes could be a general phenomenon of neuronal development. In 1985, empirical data were added by Kramer and Stent, [ 1 ] who experimentally induced variations in the branching pattern by surgically preventing or delaying the outgrowth of axon branches. As predicted by the proposal of self-avoidance, interference with the outgrowth of one field's axon branch resulted in the spread of the axon branch of the other field into what was normally not its territory. Thus, neuronal self-avoidance does play a significant role in the development of mechanosensory receptive field structure. In the late 1980s, the molecular machinery that could underlie the phenomenon started to be unveiled. Receptors such as cell adhesion molecules of the cadherin and immunoglobulin superfamilies, which mediate interactions between opposing cell surfaces, and integrins, acting as receptors for extracellular matrix components, were found to be widely expressed on developing neurites . [ 16 ] [ 17 ] In 1990, Macagno et al. [ 18 ] integrated the results from several studies, once again emphasizing the evolutionary conservation of the overall phenomenon: leech neurons, like those of other invertebrates and those of vertebrates, undergo specific interactions during development which define the adult morphologies and synaptic connections. That morphology reflects a developmental compromise between the potential of the neuron to grow and the constraints placed upon that growth by internal and external factors. Thus, the self-recognizing mechanism would be useful not only for self-avoidance but also as a means of individualization. During development, competition among neurons of the same type for a limited supply of resources required for process growth and maintenance would occur, with one cell gaining space at the expense of others. Inhibitory interactions were also invoked, and this placed the phenomenon of self-recognition in the bigger picture of the axon guidance process. Together, these studies led to the view that neural circuit assembly emerged as a result of a relatively small number of different signals and their receptors, some acting in a graded fashion and in different combinations. [ 19 ] In 1991, scientists became aware that self-avoidance was also present in non-neuronal cell types, such as leech comb cells, which might similarly form discrete domains. [ 20 ] Later, this was also observed in mammalian astrocytes.
[ 21 ] [ 22 ] [ 23 ] In 1998, Wang and Macagno, [ 24 ] again using Hirudo medicinalis mechanosensory neurons, performed an elegant experiment to try to answer the still-open question: "How does a cell recognize self and respond by not growing over or along itself?" The authors proposed two general types of mechanism: I) external signals: sibling neurites display identifying molecular factors on their surface, unique to each cell, that are capable of homotypic binding and therefore repel sibling neurites; or II) internal signals: synchronous cell activity, such as voltage, is transmitted within the cell, mediating a dynamic mechanism of sibling growth inhibition. In contrast to the first hypothesis, the second would require continuity and communication between all parts of the cell for self-avoidance to occur. The experiment therefore consisted of detaching one of the neuron's dendrites and observing how the remaining attached dendrites reacted towards the detached fragment: do they still avoid overlapping? The result was that the detached branch stopped being recognized as "self" by the other branches of the neuron, leading to dendrite overlap. The clear conclusion of the study was that continuity between all parts of the neuron is critical for self-avoidance to operate. The authors then suggested various mechanisms that require continuity, could function as the recognition signal, and thus might be responsible, such as "electrical activity, active or passive, as well as the diffusion of cytoplasmic signals either passively or by fast axonal transport". In the late 1990s and beyond, model organisms began to be used in these studies and the molecular mechanisms of self-avoidance started to be unraveled. In 1999, Wu and Maniatis [ 25 ] discovered a striking organization of a large family of human neural protocadherin cell adhesion genes, which formed a gene cluster encoding 58 protocadherins. The members of the protocadherin gene cluster were compelling candidates to provide the molecular code required for the self/non-self discrimination that leads to self-avoidance. It was later (2012) confirmed by Lefebvre et al., [ 6 ] in a study of amacrine cells and Purkinje cells of Mus musculus , that these proteins are expressed in different combinations in individual neurons, thus providing "barcodes" that distinguish one neuron from another. In 2000, through cDNA and genomic analyses of Drosophila dendritic arborization sensory neurons, Schmucker et al. [ 26 ] revealed the existence of multiple forms of Down syndrome cell adhesion molecule (Dscam). The authors saw that alternative splicing could potentially generate more than 38,000 Dscam isoforms and hypothesized that this molecular diversity could contribute to the specificity of neuronal connectivity and thus to self-avoidance. Together, the discoveries of the two large families of cell surface proteins encoded by the Dscam1 locus and the clustered protocadherin (Pcdh) loci opened the door to numerous modern studies, which take great advantage not only of the rise of molecular and genomic biology but also of bioinformatics tools developed in recent decades. Self-avoidance has been widely discussed among scientists, and over time experiments have been done in several animal models. The first experiments were done in the leech. In 1981, Wässle tried to understand how retinal ganglion cells establish their dendritic territories in cats.
Processes like dendritic tiling and self-avoidance are extremely important for the correct development of neuronal structures, and in this specific case ganglion cells have to cover the retina to guarantee that every point of visual space is actually "seen". He saw that cell bodies are arrayed in a regular mosaic and that dendritic fields adapt to the available space. This hypothesis, however, was based on a mathematical model, the Dirichlet (Voronoi) domain model (see the sketch below). Perry and Linden (1982) [ 27 ] were the first to present clear evidence of dendritic "competition" in the mouse retina: destruction of ganglion cells gives their neighboring cells the chance to extend their dendritic projections. They proposed competition for synapses as the cause of the equilibrium between growth and repulsion of dendrites. Though mouse and Drosophila are the models currently used to construct a model of self-avoidance for vertebrates and invertebrates respectively, over time several examples of this phenomenon have been described in other model and non-model species: Trigeminal neurons in the head skin exhibit competitive behavior, and only when one of them is completely removed, for example the left trigeminal ganglion , do the right ganglion's neurites cross the midline and innervate the left side of the head. The correct innervation is due to the repulsive nature of the interactions between these movement-detector neurites, reinforcing the earlier models of self-avoidance. [ 28 ] The retina grows throughout life by addition of new neurons at the margin and death of ganglion neurons in the center; once again it was shown that each cell senses neighboring cells and can occupy space left by others. [ 29 ] Trigeminal neurons, which develop 16 hours post-fertilization, are part of the peripheral sensory system and detect thermal and mechanical stimuli in the skin. The "growth-and-repulsion" model arose from the complex topographic restriction of growth cones between trigeminal and Rohon-Beard neurons . [ 30 ] Dscam mutants exhibit a severely disorganized neural network and axon fasciculation. [ 31 ] The two major structures used in self-avoidance studies are the retinal ganglion cells (RGC) in mice and the somatosensory neurons in Drosophila . These structures serve as distinct molecular models because the principal molecule involved in self-avoidance is Dscam in invertebrates and the protocadherins in vertebrates. [ 32 ] Correct assembly of the components of the mouse retina depends on correct expression of Dscam / DscamL1 to form mosaics of the different RGC cell types, to space somata and to arborize dendrites, thus ensuring coverage of the entire visual area by each cell type and, more specifically, inhibiting excessive fasciculation and clumping of the cell bodies of photoreceptors, rod bipolar cells (RBCs) and amacrine cells in the visual system. The occurrence of correct stratification and synaptic connectivity indicates that Dscam knockout affects only the repulsive interactions and coverage of the dendritic arbors, while functional connections are maintained. [ 33 ] [ 34 ] The current main conclusions are based on the identification of different types of retinal neurons, each with a different coverage factor value, revealing graded degrees of homotypic dendritic repulsion. The accepted developmental sequence is: 1) definition of the number and spacing of cells, 2) controlled growth of branches, and 3) fine-tuning of dendritic tiling for maximal coverage of the structure.
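As an aside, Wässle's Dirichlet-domain idea mentioned above, in which each dendritic field adapts to the space left by its neighbors, can be illustrated with a Voronoi (Dirichlet) tessellation around a mosaic of cell bodies. The Python sketch below is an illustrative assumption rather than a reproduction of that work: the soma coordinates are random and all names are hypothetical.

```python
# Illustrative only: Dirichlet (Voronoi) domains around a toy mosaic of somata.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
somata = rng.uniform(0.0, 1.0, size=(30, 2))  # toy retinal ganglion cell mosaic

vor = Voronoi(somata)
# Each soma's Dirichlet domain is the patch of retina closer to it than to any
# other soma; under the model, a dendritic field would roughly fill this patch.
for i in range(3):
    region = vor.regions[vor.point_region[i]]
    print(f"soma {i}: Voronoi region vertex indices {region}")
```

Real dendritic fields only approximate such purely geometric domains; later work, such as the competition experiments of Perry and Linden described above, emphasizes dynamic growth and repulsion rather than static partitioning.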
Experiments with mutant mice for Math5 and Brn3b (responsible for the degeneration of 95% and 80% of retinal ganglion cells, respectively) show that removal of ganglion cells does not decrease the number of retinal ganglion cell types and that the position of these cells is not defined by homotypic dendritic interactions alone, but by some kind of intrinsic genetic program. [ 35 ] Drosophila melanogaster is the model for experiments on multiple dendritic (MD) neurons, which compose the stereotyped pattern of the peripheral nervous system. Dendritic arborization neurons are the major subtype of the MD neuron group and present highly branched dendrites underneath the epidermis . Sugimura et al. [ 36 ] showed that some dendritic arborization (da) neurons stabilize their branch shape in early larval stages while others continue shaping throughout the life cycle. Like other types of cells involved in processes dependent on self-recognition (such as self-avoidance and tiling, see Figure 2), these da neurons can fill the empty spaces left by neighboring cells, and this filling-in process is triggered by the loss of local isoneuronal inhibitory contacts. Since Drosophila is one of the best-studied models for the mechanisms of neuronal self-recognition, many results have been obtained in larval stages. One of the most remarkable examples is the incorrect development of dendritic arbors in the larval eye ( Bolwig's organ ) caused by a Dscam knockout mutation. Numerous models and structures with different developmental timing and life cycles are used in studies of self-avoidance. Therefore, some conflicts arise when one tries to define a strict developmental phase for the occurrence of these phenomena. The initial idea was that, at some early developmental point, neural cells contact each other and organize their distribution, but several studies demonstrated that self-avoidance is also present in adult life. To settle this question it would be ideal to monitor the dendritic development of neurons from birth to maturation in whole-mount animals. [ 34 ] [ 35 ] In Drosophila , studies comprise both larval and adult phases, and the number of hours after egg laying is determinant for the correct construction of dendritic tiling in sensory neurons. [ 36 ] Early in the pupal stage, these neurons prune all their dendrites; later, each neuron grows completely new dendrites for adult function. While the dendrites are being remodeled, the axons stay largely intact, [ 37 ] and all these phases are negatively impacted if the self-avoidance property is interfered with. Exons of Dscam domains can be differentially expressed according to the life cycle phase of the fly. Exon 9 splicing is temporally regulated, with only a few exon 9 sequences contributing to early embryonic isoforms, while the remaining possible exon 9 sequences become more prevalent with age. These results show that, regardless of the thousands of isoforms that could be generated, the diversity continues to be temporally and spatially controlled. [ 38 ] In the mouse retina, the majority of ganglion cells are born at E17 (embryonic day 17); at this age the retina has reached 25% of its mature size. [ 35 ] [ 39 ] Cellular studies of self-avoidance imply that any underlying molecular mechanism must enforce robust and selective contact-dependent cell surface recognition only between sister branches, and must link recognition to changes in growth cone behavior.
Recent studies to define the molecular basis of contact-dependent homotypic interactions led to the identification of two large families of cell-surface proteins encoded by the Drosophila Down syndrome cell adhesion molecule 1 ( Dscam1 ) locus and the clustered protocadherin (Pcdh) loci in mammals. These proteins, with diverse extracellular domains and shared cytoplasmic, presumptive intracellular signaling domains, are able to provide diverse recognition specificities to a vast array of different neurites , endowing neurons with a unique cell-surface identity that allows them to distinguish self from non-self. Additional cell-surface receptors implicated in self-avoidance include the immunoglobulin superfamily member Turtle, which functions in some Drosophila da neurons to enforce terminal branch spacing. [ 40 ] Several studies have implicated Drosophila Dscam1 in dendritic and axonal self-avoidance and process spacing in diverse neuronal populations, including mushroom body axons , olfactory projection neuron (PN) dendrites, and dendritic arborization (da) neuron dendrites. [ 3 ] [ 4 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] It is notable that the function of Dscam in invertebrates is both context- and species-dependent, as the molecule has been shown to regulate repulsion, outgrowth, attraction/adhesion, and synapse formation in different systems. [ 47 ] [ 48 ] Dscam1 encodes an immunoglobulin (Ig) superfamily member which, in Drosophila, can generate up to 19,008 proteins with distinct ectodomains . [ 26 ] In binding assays , Dscams show isoform-specific homophilic interactions, but little interaction occurs between different, yet closely related, isoforms . [ 49 ] [ 50 ]

Dscam1-mediated self-recognition is essential for self-avoidance between sister neurites: Hughes et al. (2007) reported that Dscam loss-of-function in da neurons caused excessive self-crossing of dendrites from the same neuron, whereas Dscam over-expression forced the respective dendrites to segregate from each other. Based on these data, loss of Dscam results in a lack of self-avoidance between sister dendrites. Therefore, the direct isoform-specific homophilic Dscam-Dscam interactions must result in signal transduction events that lead to repulsion of dendrites expressing identical Dscam isoforms. This conversion of an initial Dscam-dependent cell-surface interaction into a repulsive response that leads to dendrite separation in da neurons is supported by Matthews et al. (2007), in a study which demonstrated that the ectopic expression of identical Dscam isoforms on the dendrites of different cells promoted growth away from each other. The authors also showed that identical Dscam isoforms expressed in two cell populations in vitro induced their aggregation in an isoform-specific manner, showing that Dscam provides cells with the ability to distinguish between different cell surfaces. Moreover, expression of single Dscam1 molecules lacking most of their cytoplasmic tail prevented ectopic branch segregation and instead led to apparently stable adhesion between dendrites. Combined, these results support a simple model for a direct role of Dscam in self-recognition in which identical Dscam ectodomains on the surfaces of isoneuronal dendrites recognize each other and induce a subsequent repulsive signal that is mediated by domains in the cytoplasmic tail (Figure 7).
Homophilic recognition provides the molecular basis for self-avoidance: to test whether homophilic binding of Dscam1 isoforms is required for self-avoidance, Wu and coworkers generated pairs of chimeric isoforms that bind to each other (heterophilic) but not to themselves (homophilic). These isoforms failed to support self-avoidance. By contrast, co-expression of complementary isoforms within the same neuron restored self-avoidance. These data establish that recognition between Dscam1 isoforms on opposing surfaces of neurites of the same cell provides the molecular basis for self-avoidance. [ 7 ]

Diversity of Dscam isoforms in individual neurons is not required for self-avoidance: in 2004, Zhan et al. published a study in which the function of Dscam diversity was explored by assessing the isoforms of Dscam expressed by developing mushroom body (MB) neurons , the ability of individual isoforms to rescue the Dscam loss-of-function phenotypes, and the consequences of ectopic expression of single Dscam isoforms. They demonstrated that different subtypes of MB neurons express different arrays of Dscam isoforms and that loss of Dscam1 in these neurons leads to a failure in branch separation, a phenotype that can be rescued by the expression of single arbitrary isoforms in single neurons. Also, in da neurons, single arbitrarily chosen isoforms rescued the Dscam1 null self-avoidance phenotype. [ 7 ] These results lead to the conclusion that Dscam1 diversity is not required in individual neurons for self-avoidance.

Diversity of Dscam isoforms expressed by neurons of different types is, however, essential for discriminating between self and non-self neurites: to test whether sister branch segregation requires neighboring mushroom body axons to express different sets of Dscam isoforms, Hattori et al. (2009) [ 51 ] reduced the entire repertoire of Dscam ectodomains to just a single isoform using homologous recombination and examined mushroom body morphology in Dscam single-isoform and control animals. In the majority of the mushroom bodies analyzed, one of the two lobes was completely absent, and in the few remaining samples one lobe was significantly thinner than the other. This dominant phenotype indicates that the defects do not result from the loss of any one isoform, but rather from the presence of the same isoform on all axons. These studies led to the conclusion that each neuron expresses a set of Dscam1 isoforms largely different from that of its neighbors, and that it is crucial for neighboring neurons to express distinct Dscam isoforms; the specific identity of the isoforms expressed in an individual neuron is unimportant, as long as sister branches express the identical set of isoforms to allow for homotypic repulsion between them.

Thousands of isoforms are required for proper self-recognition: later on, Hattori et al. (2009) [ 51 ] took a genomic replacement strategy to generate mutant animals in which the number of potential Dscam1 isoforms was limited. Their goal was to determine how many isoforms are necessary to ensure that neurites do not inappropriately recognize and avoid non-self neurites. Branching patterns improved as the potential number of isoforms increased, independently of the identity of the isoforms. In conclusion, the size of the isoform pool required for robust discrimination between self and non-self is in the thousands.
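A rough way to see why a pool of thousands of isoforms supports robust self/non-self discrimination is to treat each neuron's Dscam1 expression as a random draw of a small set of isoforms from the pool: the chance that two neighboring neurons draw the identical set, and so would wrongly treat each other as "self", falls off very quickly with pool size. The back-of-the-envelope Python sketch below is an illustrative assumption, not a calculation from the cited studies; the pool sizes and the number of isoforms per neuron are hypothetical.

```python
# Illustrative only: probability that two neurons drawing k isoforms uniformly
# at random from a pool of N end up with the identical "barcode".
from math import comb

def p_identical_barcode(pool_size, isoforms_per_neuron):
    """With one neuron's set fixed, a second uniform random draw matches it
    with probability 1 / C(pool_size, isoforms_per_neuron)."""
    return 1.0 / comb(pool_size, isoforms_per_neuron)

for pool in (50, 500, 5000, 19008):
    print(f"pool of {pool:>5} isoforms: "
          f"P(identical set of 10) = {p_identical_barcode(pool, 10):.3e}")
```

Real discrimination presumably also has to tolerate partial overlap between sets and biased, temporally regulated isoform expression, which may be part of why the experimentally determined requirement is in the thousands rather than the tens.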
In sum, isoform identity between branches of the same neuron leads to recognition via the extracellular region and repulsion mediated by the intracellular tail of Dscam1. As the Dscam1 isoforms expressed in different da neurons are likely to be different, dendrites of different da neurons do not inappropriately recognize non-self as self. Thus, Dscam1 proteins are required for self-avoidance and provide the molecular code by which neurites discriminate between self-dendrites and those of neighboring cells (Figure 7). Self-avoidance has only recently been explored in vertebrate brain development, mainly in the context of patterning neurites in the inner plexiform layer (IPL). [ 34 ] [ 52 ] In contrast to Drosophila , mouse DSCAMs are typical cell surface molecules, lacking the massive alternative splicing that the fly Dscam1 orthologue undergoes. So although DSCAMs may retain a conserved function in mediating self-avoidance in vertebrates, the absence of molecular diversity makes it clear that they do not play a role in self-recognition.

Dscams act to negate cell-type-specific interactions rather than actively promoting repulsion in vertebrate neurites: considering that Dscam and Dscaml1 have non-overlapping expression patterns in the mouse retina, with Dscam being expressed in a subset of amacrine cells and most retinal ganglion cells (RGCs) and Dscaml1 expressed in the rod circuit, Fuerst et al. (2009) examined retinal ganglion cell populations in Dscam −/− mice and, in addition, assessed retinal anatomy in the rod circuit using a gene-trap knockout allele of Dscaml1 . In the absence of either gene, the cells that would normally express it showed excessive fasciculation of their dendrites and clumping of their cell bodies. These findings led to the conclusion that Dscam and Dscaml1 prevent excessive adhesion, primarily by masking cell-type-specific adhesive interactions between dendrites of the same cell class, rather than actively promoting repulsion between them. Thus, in the absence of diversity, mammalian DSCAMs do not provide cells with the ability to distinguish between their own processes and the processes of all other cells, including processes from cells of the same type. Instead, DSCAM acts to negate cell-type-specific interactions that are promoted by other recognition molecules. More recent studies demonstrated that mice use a different family of cell recognition molecules, the clustered protocadherins (Pcdhs) , in a fly Dscam1-like strategy to regulate self-avoidance. Although both the clustered Pcdh and Dscam1 genes generate families of proteins with diverse ectodomains joined to a common cytoplasmic domain, the modes of generating diversity are markedly different: Pcdh diversity is largely generated by alternative promoter choice, as opposed to alternative splicing. [ 53 ] [ 54 ] The number of Pcdh isoforms varies between different vertebrate species, but in aggregate there are typically on the order of 50 isoforms. [ 54 ] [ 55 ]

Isoform-specific homophilic recognition: compelling evidence for discrete binding specificities of different clustered Pcdh isoforms was uncovered in 2010 by Schreiner and Weiner, who verified that Pcdhs promote isoform-specific homophilic recognition. While the number of Pcdh isoforms pales in comparison to the number of Dscam1 isoforms, hetero-oligomerization of Pcdhs markedly increases the number of discrete binding specificities encoded by the locus.
Pcdhs are required for self-avoidance: to seek roles of Pcdh-γs in self-avoidance, Lefebvre et al. (2012) focused on a retinal interneuron , the starburst amacrine cell (SAC), which expresses Pcdh-γs and exhibits dramatic dendritic self-avoidance. They used a Cre-Lox system to delete all the variable domains of the Pcdh-γ locus in the developing retina and verified that dendrites arising from a single SAC frequently crossed each other and sometimes formed loose bundles, similarly to the removal of Dscam1 from da neurons (Figure 8).

Pcdh diversity is essential for self-recognition: furthermore, Lefebvre and colleagues assessed the requirement for isoform diversity in Pcdh-γ -dependent self-avoidance. They demonstrated that single arbitrarily chosen isoforms rescued the self-avoidance defects of Pcdh-γ mutants and that expression of the same isoform in neighboring SACs reduced the overlap between them. Their results indicate that diversity appears to underlie self/non-self discrimination, presumably because neighboring neurons are unlikely to express the same isoforms and are therefore free to interact. Therefore, isoform diversity enables SACs to distinguish isoneuronal from heteroneuronal dendrites. As with Dscam1 , self-avoidance in SACs does not rely on a specific isoform, but rather requires that isoform usage differ among neighboring cells. Thus, two phyla appear to have recruited different molecules to mediate similar, complex strategies for self-recognition, thereby promoting self-avoidance.
https://en.wikipedia.org/wiki/Neuronal_self-avoidance
Neuronal tiling is a phenomenon in which multiple arbors of neurons innervate the same surface or tissue in a nonredundant, tiled pattern that maximizes coverage of the surface while minimizing overlap between neighboring arbors.[1] Hence, dendrites of the same neuron spread out by avoiding one another (self-avoidance). Moreover, dendrites of certain types of neurons, such as class III and class IV dendritic arborization neurons, avoid dendrites of neighboring neurons of the same type (tiling), whereas dendrites of different neuronal types can cover the same territory (coexistence).[2] One good example of this organization is provided by the cell bodies of virtually all retinal cell types, which are arranged as independent, nonrandom mosaics that maximize the distance between neighbouring cells.[1] Elucidating the mechanisms of process spacing during development is therefore relevant for understanding principles of tissue organization inside and outside of the nervous system.[1]
https://en.wikipedia.org/wiki/Neuronal_tiling
Neuronal tracing, or neuron reconstruction, is a technique used in neuroscience to determine the pathway of the neurites or neuronal processes, the axons and dendrites, of a neuron. From a sample-preparation point of view, it may refer to a number of genetic neuron labeling techniques, among others. In a broad sense, neuron tracing more often refers to the digital reconstruction of a neuron's morphology from imaging data of such samples. Digital reconstruction or tracing of neuron morphology is a fundamental task in computational neuroscience.[1][2][3] It is also critical for mapping neuronal circuits based on advanced microscope images, usually acquired with light microscopy (e.g. laser scanning microscopy, bright field imaging), electron microscopy, or other methods. Due to the high complexity of neuron morphology, the heavy noise often seen in such images, and the massive amounts of image data typically involved, it has been widely viewed as one of the most challenging computational tasks in computational neuroscience. Many image-analysis-based methods have been proposed to trace neuron morphology, usually in 3D, manually, semi-automatically or completely automatically. There are normally two processing steps: generation of a reconstruction and proof editing of it.[4][5] The need to describe or reconstruct a neuron's morphology probably began in the early days of neuroscience, when neurons were labeled or visualized using Golgi's methods. Many of the known neuron types, such as pyramidal neurons and chandelier cells, were described based on their morphological characterization. The first computer-assisted neuron reconstruction system, now known as Neurolucida, was developed by Dr. Edmund Glaser and Dr. Hendrik Van der Loos in the 1960s.[6] Modern approaches to tracing a neuron started when digitized pictures of neurons were acquired using microscopes. Initially this was done in 2D. Soon after advanced 3D imaging became available, especially fluorescence imaging and electron microscopic imaging, there was a huge demand for tracing neuron morphology from these imaging data. Neurons can often be traced manually, either in 2D or 3D. To do so, one may either directly paint the trajectory of neuronal processes in individual 2D sections of a 3D image volume and manage to connect them, or use 3D Virtual Finger painting, which directly converts any 2D painted trajectory in a projection image to real 3D neuron processes. The major limitation of manual tracing is the huge amount of labor required. Automated reconstruction of neurons can be done using model (e.g. sphere or tube) fitting and marching,[7] pruning of over-reconstruction,[8] minimal-cost connection of key points, ray-bursting, and many other methods.[9] Skeletonization is a critical step in automated neuron reconstruction, but in the case of all-path-pruning and its variants[10] it is combined with estimation of model parameters (e.g. tube diameters). The major limitation of automated tracing is the lack of precision, especially when the neuron morphology is complicated or the image contains a substantial amount of noise. Semi-automated neuron tracing often depends on two strategies. One is to run completely automated neuron tracing followed by manual curation of the resulting reconstructions. The alternative is to provide some prior knowledge, such as the locations of a neuron's termini, with which the neuron can be more easily traced automatically.
Semi-automated tracing is often thought to be a balanced solution, with acceptable time cost and reasonably good reconstruction accuracy. The open source software Vaa3D-Neuron, Neurolucida 360, Imaris Filament Tracer and Aivia all provide both categories of methods. Tracing electron microscopy images is thought to be more challenging than tracing light microscopy images, although the latter is still quite difficult, according to the DIADEM competition.[11] For tracing electron microscopy data, manual tracing is used more often than automated or semi-automated alternatives,[12] whereas for light microscopy data automated or semi-automated methods are used more often. Since tracing electron microscopy images takes a substantial amount of time, collaborative manual tracing software is useful. Crowdsourcing is an alternative way to effectively collect collaborative manual reconstruction results for such image data sets.[13] A number of neuron tracing tools, especially software packages, are available. One comprehensive open source software package that contains implementations of a number of neuron tracing methods developed in different research groups, as well as many neuron utility functions such as quantitative measurement, parsing, and comparison, is Vaa3D and its Vaa3D-Neuron modules. Some other free tools, such as NeuronStudio,[14] also provide tracing functions based on specific methods. Neuroscientists also use commercial tools such as Neurolucida, Neurolucida 360, Aivia, Amira, etc. to trace and analyse neurons. A 2012 study showed that Neurolucida is cited over 7 times more than all other available neuron tracing programs combined,[15] and it is also the most widely used and versatile system for producing neuronal reconstructions.[16] The BigNeuron project (https://alleninstitute.org/bigneuron/about/)[17] is a recent substantial international collaborative effort to integrate the majority of known neuron tracing tools onto a common platform, facilitating open source, easy access to various tools in one place. Powerful new tools such as UltraTracer,[18] which can trace arbitrarily large image volumes, have been produced through this effort. The online tool WEBKNOSSOS has a Flight Mode for high-speed tracing of axons or dendrites, in which trained annotator crowds achieve tracing speeds of 1.5 ± 0.6 mm/h for axons and 2.1 ± 0.9 mm/h for dendrites in 3D electron microscopy data.[19] Reconstructions of single neurons can be stored in various formats, largely depending on the software that has been used to trace them. The SWC format, which consists of a number of topologically connected structural compartments (e.g. a single tube or sphere), is often used to store digitally traced neurons, especially when the morphology lacks, or does not need, detailed 3D shape models for individual compartments. Other, more sophisticated neuron formats have separate geometrical modeling of the neuron cell body and neuron processes, for example in Neurolucida,[20][21][22] among others. There are a few common single-neuron reconstruction databases. A widely used one is http://NeuroMorpho.Org,[23] which contains over 86,000 neuron morphologies from more than 40 species contributed worldwide by a number of research labs. The Allen Institute for Brain Science, HHMI's Janelia Research Campus, and other institutes are also generating large-scale single neuron databases. Many related neuron databases at other scales also exist.
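As a concrete illustration of the SWC format mentioned above, the short Python sketch below reads an SWC file and sums parent-to-child distances to estimate total cable length. The file name is hypothetical; the seven-column layout (id, type, x, y, z, radius, parent) is the commonly used convention, with coordinates typically in micrometers.

```python
# Minimal sketch of reading a digital neuron reconstruction in the SWC format:
# each non-comment line holds "id type x y z radius parent", with parent == -1
# for the root node. The file name below is hypothetical.
import math

def load_swc(path):
    """Return a dict mapping node id -> (type, x, y, z, radius, parent_id)."""
    nodes = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            nid, ntype, x, y, z, radius, parent = line.split()[:7]
            nodes[int(nid)] = (int(ntype), float(x), float(y), float(z),
                               float(radius), int(parent))
    return nodes

def total_cable_length(nodes):
    """Sum of Euclidean distances between each node and its parent."""
    length = 0.0
    for ntype, x, y, z, radius, parent in nodes.values():
        if parent in nodes:                      # skip the root (parent == -1)
            _, px, py, pz, _, _ = nodes[parent]
            length += math.dist((x, y, z), (px, py, pz))
    return length

if __name__ == "__main__":
    neuron = load_swc("example_neuron.swc")      # hypothetical file
    print(f"{len(neuron)} compartments, total cable length {total_cable_length(neuron):.1f} (file units)")
```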
https://en.wikipedia.org/wiki/Neuronal_tracing
Neuropeptides are chemical messengers made up of small chains of amino acids that are synthesized and released by neurons. Neuropeptides typically bind to G protein-coupled receptors (GPCRs) to modulate neural activity as well as the activity of other tissues such as the gut, muscles, and heart. Neuropeptides are synthesized from large precursor proteins, which are cleaved and post-translationally processed and then packaged into large dense core vesicles. Neuropeptides are often co-released with other neuropeptides and neurotransmitters in a single neuron, yielding a multitude of effects. Once released, neuropeptides can diffuse widely to affect a broad range of targets. Neuropeptides are extremely ancient and highly diverse chemical messengers. Placozoans such as Trichoplax, extremely basal animals which do not possess neurons, use peptides for cell-to-cell communication in a way similar to the neuropeptides of higher animals. Peptide signals play a role in information processing that is different from that of conventional neurotransmitters, and many appear to be particularly associated with specific behaviours. For example, in mammals oxytocin[1] and vasopressin[2] have striking and specific effects on social behaviours, including maternal behaviour and pair bonding. In invertebrates, CCAP has several functions including regulating heart rate,[3] allatostatin[4] and proctolin[5] regulate food intake and growth, and bursicon[6] controls tanning of the cuticle. Neuropeptides are synthesized from inactive precursor proteins called prepropeptides.[7] Prepropeptides contain sequences for a family of distinct peptides and often contain duplicated copies of the same peptides, depending on the organism.[8] In addition to the precursor peptide sequences, prepropeptides also contain a signal peptide, spacer peptides, and cleavage sites.[9] The signal peptide sequence guides the protein to the secretory pathway, starting at the endoplasmic reticulum. The signal peptide sequence is removed in the endoplasmic reticulum, yielding a propeptide. The propeptide travels to the Golgi apparatus, where it is proteolytically cleaved and processed into multiple peptides. Peptides are packaged into dense core vesicles, where further cleaving and processing, such as C-terminal amidation, can occur. Dense core vesicles are transported throughout the neuron and can release peptides at the synaptic cleft, cell body, and along the axon.[7][10][11][12] A single animal may use hundreds of different neuropeptides. In C. elegans, for example, 120 genes specify more than 250 neuropeptides.[13] Neuropeptides are released from dense core vesicles after depolarization of the cell. Compared to classical neurotransmitter signaling, neuropeptide signaling is more sensitive: neuropeptide receptor affinity is in the nanomolar to micromolar range, while neurotransmitter affinity is in the micromolar to millimolar range. Additionally, dense core vesicles contain a small amount of neuropeptide (3–10 mM) compared to synaptic vesicles containing neurotransmitters (e.g. 100 mM for acetylcholine).[14] Evidence shows that neuropeptides are released after high-frequency firing or bursts, distinguishing dense core vesicle release from synaptic vesicle release.[10] Neuropeptides utilize volume transmission and are not subject to rapid reuptake, allowing diffusion across broad areas (nm to mm) to reach targets.
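The precursor processing described above (signal-peptide removal followed by cleavage into multiple peptides) can be illustrated with a toy sketch. The Python example below is only illustrative: the precursor sequence and signal-peptide length are invented, and cleavage is assumed to occur at dibasic residues, a common but not universal convention for prohormone convertase sites.

```python
# Toy sketch of prepropeptide processing: strip an assumed signal peptide, then
# cut the propeptide at dibasic sites (KR/RR/KK/RK), a common though not
# universal convention for prohormone convertase cleavage. The sequence and the
# signal-peptide length are invented for illustration only.
import re

SIGNAL_PEPTIDE_LENGTH = 20            # hypothetical; real signal peptides vary in length
DIBASIC_SITE = re.compile(r"(?:KR|RR|KK|RK)")

def process_precursor(prepropeptide: str):
    """Return the candidate peptide fragments released from a precursor."""
    propeptide = prepropeptide[SIGNAL_PEPTIDE_LENGTH:]   # ER removes the signal peptide
    fragments = DIBASIC_SITE.split(propeptide)           # later cleavage at dibasic sites
    return [f for f in fragments if f]                    # drop empties between adjacent sites

if __name__ == "__main__":
    precursor = ("M" + "A" * 19 +                          # invented 20-residue signal peptide
                 "GNSPAMAPRE" + "KR" + "FMRFGRSLDE" + "RR" + "YGGFMTSEKS")
    for peptide in process_precursor(precursor):
        print(peptide)
```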
Almost all neuropeptides bind to G protein-coupled receptors (GPCRs), inducing second messenger cascades to modulate neural activity on long time-scales.[7][10][11] Expression of neuropeptides in the nervous system is diverse. Neuropeptides are often co-released with other neuropeptides and neurotransmitters, yielding a diversity of effects depending on the combination of release.[11][15] For example, vasoactive intestinal peptide is typically co-released with acetylcholine.[16] Neuropeptide release can also be specific. In Drosophila larvae, for example, eclosion hormone is expressed in just two neurons.[12] Neuropeptides are often co-released with other neurotransmitters and neuropeptides to modulate synaptic activity. Synaptic vesicles and dense core vesicles can have differential activation properties for release, resulting in context-dependent co-release combinations.[17][18][19] For example, insect motor neurons are glutamatergic and some contain dense core vesicles with proctolin. At low-frequency activation, only glutamate is released, yielding fast and rapid excitation of the muscle. At high-frequency activation, however, dense core vesicles release proctolin, inducing prolonged contractions.[20] Thus, neuropeptide release can be fine-tuned to modulate synaptic activity in certain contexts. Some regions of the nervous system are specialized to release distinctive sets of peptides. For example, the hypothalamus and the pituitary gland release peptides (e.g. TRH, GnRH, CRH, SST) that act as hormones.[21][22] In one subpopulation of the arcuate nucleus of the hypothalamus, three anorectic peptides are co-expressed: α-melanocyte-stimulating hormone (α-MSH), galanin-like peptide, and cocaine- and amphetamine-regulated transcript (CART), and in another subpopulation two orexigenic peptides are co-expressed, neuropeptide Y and agouti-related peptide (AGRP).[23] These peptides are all released in different combinations to signal hunger and satiation cues.[24] Neuropeptides are co-released with a range of classical transmitters, including norepinephrine (noradrenaline; for example in neurons of the A2 cell group in the nucleus of the solitary tract), GABA, acetylcholine, dopamine, epinephrine (adrenaline), and serotonin (5-HT). Some neurons make several different peptides. For instance, vasopressin co-exists with dynorphin and galanin in magnocellular neurons of the supraoptic nucleus and paraventricular nucleus, and with CRF in parvocellular neurons of the paraventricular nucleus. Oxytocin in the supraoptic nucleus co-exists with enkephalin, dynorphin, cocaine- and amphetamine-regulated transcript (CART) and cholecystokinin. Most neuropeptides act on G-protein coupled receptors (GPCRs). Neuropeptide GPCRs fall into two families: the rhodopsin-like class and the secretin class.[25] Most peptides activate a single GPCR, while some activate multiple GPCRs (e.g. AstA, AstC, DTK).[15] Peptide–GPCR binding relationships are highly conserved across animals. Aside from conserved structural relationships, some peptide–GPCR functions are also conserved across the animal kingdom. For example, neuropeptide F/neuropeptide Y signaling is structurally and functionally conserved between insects and mammals.[15] Although peptides mostly target metabotropic receptors, there is some evidence that neuropeptides bind to other receptor targets.
Peptide-gated ion channels (FMRFamide-gated sodium channels) have been found in snails and Hydra.[26] Other examples of non-GPCR targets include insulin-like peptides acting on tyrosine-kinase receptors in Drosophila, and atrial natriuretic peptide and eclosion hormone acting on membrane-bound guanylyl cyclase receptors in mammals and insects.[27] Due to their modulatory and diffusive nature, neuropeptides can act on multiple temporal and spatial scales. A nearly complete map of these interactions is known for at least one small animal, C. elegans.[28] For many other animals, at least some neuropeptide actions are known, as shown in the examples above. Peptides are ancient signaling systems that are found in almost all animals on Earth.[29][30] Genome sequencing reveals evidence of neuropeptide genes in Cnidaria, Ctenophora, and Placozoa, some of the oldest living animal lineages with nervous systems or neural-like tissues.[31][32][33][8] Recent studies also show genomic evidence of neuropeptide processing machinery in metazoans and choanoflagellates, suggesting that neuropeptide signaling may predate the development of nervous tissues.[34] Additionally, ctenophore and placozoan neural signaling is entirely peptidergic and lacks the major amine neurotransmitters such as acetylcholine, dopamine, and serotonin.[35][29] This also suggests that neuropeptide signaling developed before amine neurotransmitters. Neuropeptides and antagonists that bind to their receptors can be used as insecticides.[36] These include both naturally occurring neuropeptides[37] and synthetic compounds designed to block their receptors.[38] In humans, neuropeptides have been implicated in several diseases,[39] and antagonists of the corresponding receptors may have clinical applications.[40] In the early 1900s, chemical messengers were crudely extracted from whole animal brains and tissues and studied for their physiological effects. In 1931, von Euler and Gaddum used a similar method to try to isolate acetylcholine but instead discovered a peptide substance that induced physiological changes, including muscle contractions and depressed blood pressure. These effects were not abolished by atropine, ruling out the substance as acetylcholine.[41][16] In insects, proctolin was the first neuropeptide to be isolated and sequenced.[42][43] In 1975, Alvin Starratt and Brian Brown extracted the peptide from hindgut muscles of the cockroach and found that its application enhanced muscle contractions. While Starratt and Brown initially thought of proctolin as an excitatory neurotransmitter, it was later confirmed to be a neuromodulatory peptide.[44] David de Wied first used the term "neuropeptide" in the 1970s to delineate peptides derived from the nervous system.[9][14]
https://en.wikipedia.org/wiki/Neuropeptide
Neuropeptidergic means "related to neuropeptides". A neuropeptidergic agent (or drug) is a chemical which directly modulates neuropeptide systems in the body or brain. One example is the class of opioidergic agents.
https://en.wikipedia.org/wiki/Neuropeptidergic
Neurophysins are carrier proteins which transport the hormones oxytocin and vasopressin to the posterior pituitary from the paraventricular and supraoptic nuclei of the hypothalamus, respectively. Inside the neurosecretory granules, the analogous neurophysin I and II form stabilizing complexes with their hormones via covalent interactions.[1] The stabilizing neurophysin-hormone complexes formed within neurosecretory granules aid in intra-axonal transport to the posterior pituitary gland.[2] During intra-axonal transport, the neurophysins are believed to prevent the bound hormone from leaking into the cytoplasmic space and from proteolytic digestion by enzymes.[3] However, due to the low concentration of neurophysin in the blood, it is likely that the protein-hormone complex dissociates, indicating that neurophysin does not aid in transporting the hormone through the circulatory system.[2] Neurophysins are also secreted from the posterior pituitary, each carrying its respective passenger hormone. When the posterior pituitary secretes vasopressin and its neurophysin carrier, it also secretes a glycopeptide. There are two types, neurophysin I and neurophysin II. These proteins are synthesized in the cell bodies of the supraoptic and paraventricular regions of the hypothalamus. The synthesis of the disulfide-rich neurophysin protein is suggested to be analogous to that of insulin, in which a precursor molecule of higher molecular weight is proteolytically cleaved and forms disulfide linkages.[2] Although not enough data have been obtained, it is hypothesized that there is a common precursor molecule between neurophysin and the two hormones it stabilizes.[2] Neurophysins are acidic proteins with a molecular weight of approximately 10,000 Da that are rich in cysteine, glycine, and proline residues. The protein has two domains and a polypeptide chain of 93-95 residues, with 14 cysteine residues forming 7 disulfide bridges. Domain I contains a COOH terminal with a disulfide loop; domain II lacks this COOH-terminal disulfide loop. Based on the resemblance of the disulfide loop present on vasopressin and oxytocin, it is suggested that the hormones form covalent linkages to this disulfide loop present on the COOH terminal of domain I.[4]
https://en.wikipedia.org/wiki/Neurophysins
Neuropixels probes (or "Neuropixels") are electrodes developed in 2017 to record the activity of hundreds of neurons in the brain. The probes are based on CMOS technology and have 1,000 recording sites arranged in two rows on a thin, 1-cm-long shank.[1][2] The probes are used in hundreds of neuroscience laboratories, including the International Brain Laboratory, to record brain activity, mostly in mice and rats. By revealing the activity of vast numbers of neurons, Neuropixels probes are enabling new approaches[3] to the study of brain processes such as sensory processing, decision making,[4] internal state,[5] and emotions,[6] and to the creation of brain-machine interfaces.[7][8] The probes were announced in 2017.[9] They are designed and fabricated by imec, an electronics research center in Belgium. In 2022, Neuropixels probes were inserted in human patients.[10]
https://en.wikipedia.org/wiki/Neuropixels
Neuroprivacy, or "brain privacy," is a concept which refers to the rights people have regarding the imaging, extraction and analysis of neural data from their brains.[1] The concept is closely related to fields such as neuroethics, neurosecurity, and neurolaw, and has become increasingly relevant with the development and advancement of various neuroimaging technologies. Neuroprivacy is an aspect of neuroethics specifically concerning the use of neural information in legal cases, neuromarketing, surveillance and other external purposes, as well as the corresponding social and ethical implications. Neuroethical concepts such as neuroprivacy developed initially in the 2000s, after the initial invention and development of neuroimaging techniques such as positron emission tomography (PET), electroencephalography (EEG), and functional magnetic resonance imaging (fMRI).[2] As neuroimaging became highly studied and popularized in the 1990s, it also started entering the commercial market as entrepreneurs sought to market the practical applications of neuroscience, such as neuromarketing, neuroenhancement and lie detection. Neuroprivacy consists of the privacy issues raised by both neuroscience research and applied uses of neuroimaging techniques. The relevance of the neuroprivacy debate increased significantly after the 9/11 terrorist attacks, which led to a push for increased neuroimaging in the context of information/threat detection and surveillance.[3][4] Brain fingerprinting is a controversial and unproven EEG technique that relies on identifying the P300 event-related potential,[5] which is correlated with recognition of some stimulus.[6] The purpose of this technique is to determine whether a person holds incriminating information or memories. In its current state, brain fingerprinting is only able to determine the existence of information, and is unable to provide any specific details about that information.[7] Its creator, Dr. Lawrence Farwell, claims brain fingerprinting is highly reliable and nearly impossible to fool,[6] but some studies dispute its reliability and its claimed resistance to countermeasures.[8][9] Some possible countermeasures include thinking of something else instead of processing the real stimuli, mentally suppressing recognition, or simply not cooperating with the test.[8] There have been concerns over the potential use of memory-dampening drugs such as propranolol to beat brain fingerprinting.[10] However, some studies have shown that propranolol actually dampens the emotional arousal associated with a memory rather than the memory itself, which could even improve recollection of the memory.[11] A comparable EEG technique is brain electrical oscillation signature profiling (BEOS), which is very similar to brain fingerprinting in that it detects the presence of specific information or memories. Despite a significant lack of scientific studies confirming the validity of BEOS profiling, the technique has been used in India to provide evidence for criminal investigations.[9][12] Current neuroimaging technology has been able to detect neural correlates of human attributes such as memory and morality.[13][14] Neurodata can be used to diagnose and predict behavioral disorders and patterns such as psychopathy and antisocial behavior, both of which are factors in calculating the likelihood of future criminal behavior.[15][16]
This ability to evaluate mental proficiencies, biases and faculties could be relevant to government or corporate entities for the purposes of surveillance or neuromarketing, especially if neurodata can be collected without the subjects' knowledge or consent.[17] Using neurodata to predict future behaviors and actions could help create or inform preventive measures to treat people before problems arise; however, this raises ethical issues as to how society defines "moral" or "acceptable" behavior.[16] It is possible to use neuroimaging as a form of lie detection. Assuming that deception requires an increase in cognitive processing to develop an alternate story, the difference in mental states between telling the truth and lying should be noticeable.[7] However, this relies on assumptions that have yet to be conclusively verified, and as such neurological lie detection is not yet reliable or fully understood. This is in contrast to the standard polygraph, which relies on analyzing biological mechanisms that are well understood but still not necessarily reliable.[18] The legal systems of most countries generally do not accept neuroimaging data as permissible evidence, with some exceptions. India has allowed BEOS tests as legal evidence, and an Italian court of appeals used neuroimaging evidence in a 2009 case, becoming the first European court to do so.[7] Canadian and US courts have been more cautious in permitting neuroimaging data as legal evidence.[18] One reason legal systems have been slow to adopt neuroimaging data as an accepted form of evidence is the possible error and misinterpretation that could result from such a new technology; courts in the US typically follow the Daubert standard for evidence evaluation set by the Daubert v. Merrell Dow Pharmaceuticals, Inc. Supreme Court case, which established that the validity of scientific evidence must be determined by the trial judge.[9] The Daubert standard serves as a safeguard for the reliability of scientific evidence, and requires a significant amount of testing before any neuroimaging technique can be considered as evidence. While brain fingerprinting was technically accepted in the Harrington v. Iowa case, the judge specifically stated that the EEG evidence was not to be presented to a jury, and so the evidence did not set a significant precedent.[7] Neurological surveillance is relevant to governmental, corporate, academic and technological entities, as the improvement of technology increases the amount of information that can be extrapolated from neuroimaging.[19] Surveillance with current neuroimaging technology is considered difficult, given that fMRI data are difficult to collect and interpret even in laboratory settings; fMRI studies generally require subjects to be motionless and cooperative.[17] However, as technology improves it may be possible to overcome these requirements. Hypothetically, there are benefits to using neuroscience in the context of surveillance and security.[4] However, there is debate over whether doing so would violate neuroprivacy to an unacceptable extent.[3][20] Neurodata is valuable to advertising and marketing entities because of its potential to identify how and why people react to different stimuli in order to better influence consumers.[21]
This ability to examine reactions and perceptions from the brain directly creates new ethical debates, such as how to define the acceptable limits of mental manipulation and how to avoid targeting vulnerable or receptive demographics. In a sense, these are not necessarily brand new debates but rather added dimensions to previously existing discussions. The scientific arguments regarding neuroprivacy mainly revolve around the limits of the current understanding of neurodata. Many of the arguments against using neuroimaging in legal, surveillance and other contexts are based on the lack of a solid scientific basis, meaning the potential for error and misinterpretation is too high.[9] Brain fingerprinting, one of the most popularized forms of neuroanalysis, has been promoted by its creator, Lawrence Farwell, despite a lack of scientific agreement on its reliability.[22][23][8] Currently, there is even a lack of scientific understanding as to what can be interpreted from neurodata, which makes limiting and categorizing different types of neurodata difficult and thus complicates neuroprivacy protections.[24] Another complication is that neurodata is highly personal and essentially inseparable from the subject, making it extremely sensitive and difficult to anonymize. A further issue is the conflation of scientific knowledge with beliefs regarding the relations between philosophical, neural and societal constructs.[3] Popularization of and overconfidence in scientific techniques may lead to assumptions or misinterpretations of what neurodata actually describe, when in reality there are limits to what can be interpreted from correlations between neural activity and semantic meaning.[25] There are various legal arguments as to how neuroprivacy is covered under current protections and rights and how future laws should be implemented to define and protect neuroprivacy, as neuroscience has the potential to significantly change the legal status quo.[7] The legal definition of neuroprivacy has yet to be properly established, but there appears to be a general consensus that a legal and ethical foundation for neuroprivacy rights should be established before neuroimaging becomes widely accepted across legal, corporate and security contexts.[19][3][18][9][24][1][13][17][4] As neuroprivacy constitutes an international issue, an international consensus may be required to establish the necessary legal and ethical foundation.[7] Bringing neuroscience into legal contexts has been argued to have certain benefits. Current types of legal testimony, such as eyewitness testimony and polygraph testing, have significant flaws that may currently be overlooked due to historical and traditional precedent.[26][27] Neuroscience could potentially solve some of these issues by directly examining the brain, given sufficient scientific confidence in the neuroimaging techniques.[4] However, this raises questions concerning how to balance legal uses of neuroscience with neuroprivacy protections.[17] In the US, there are certain existing rights that could be interpreted as protecting neuroprivacy. The Fifth Amendment, which protects citizens from self-incrimination, could be interpreted as protecting citizens from being incriminated by their own brain.[17]
However, the current interpretation is that the Fifth Amendment protects citizens from self-incriminating testimony; if neuroimaging constitutes physical evidence rather than testimony, the Fifth Amendment may not protect against neuroimaging evidence.[20] The Ninth and Fourteenth Amendments help protect unspecified rights and fair procedures, which may or may not include neuroprivacy to some extent.[17] One interpretation of neuroimaging evidence is to categorize it as forensic evidence rather than scientific expert testimony; detecting memories and information of a crime could be compared to collecting forensic residue from a crime scene. This distinction would make it categorically different from a polygraph test, and increase its legal permissibility in Canadian and US legal systems.[18] Some general ethical concerns regarding neuroprivacy revolve around personal rights and control over personal information. As technology improves, it is possible that collecting neurodata without consent or knowledge will become easier or more common. One argument is that the collection of neurodata violates both personal property and intellectual property, as it involves scanning the body as well as analyzing thought.[20] One of the main ethical controversies regarding neuroprivacy relates to the issue of free will and the mind-body problem. A possible concern is the unknown extent to which neurodata can predict actions and thoughts; it is not currently known whether the physical activity of the brain is conclusively or solely responsible for thoughts and actions.[28] Examining the brain as a way to prevent crimes or disorders before they manifest raises the question of whether it is possible for people to exercise their agency despite their neurological condition. Even using neurodata to treat certain disorders and diseases preemptively raises questions about identity, agency and how society defines morality.[15]
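Because the P300-based protocols discussed earlier in this article all reduce to comparing averaged, stimulus-locked EEG responses, the short Python sketch below illustrates that event-related-potential averaging step on purely synthetic data. The sampling rate, trial counts, and waveform shape are illustrative assumptions, not parameters of any real brain-fingerprinting system.

```python
# Minimal sketch of the event-related-potential averaging that underlies
# P300-based protocols such as brain fingerprinting. All data here are
# synthetic; sampling rate, trial counts, and amplitudes are illustrative only.
import numpy as np

FS = 250                                  # sampling rate in Hz (illustrative)
EPOCH = np.arange(0, 0.8, 1 / FS)         # 0-800 ms after stimulus onset

def synthetic_trial(rng, p300=False):
    """One epoch of noisy EEG; 'recognized' probe trials get a late positive bump."""
    signal = rng.normal(0.0, 5.0, EPOCH.size)            # background noise (microvolts)
    if p300:
        signal += 8.0 * np.exp(-((EPOCH - 0.35) ** 2) / (2 * 0.05 ** 2))  # bump near 350 ms
    return signal

def mean_window_amplitude(trials, t_lo=0.3, t_hi=0.6):
    """Average the trials, then average the resulting ERP inside the P300 window."""
    erp = np.mean(trials, axis=0)
    window = (EPOCH >= t_lo) & (EPOCH <= t_hi)
    return erp[window].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probe      = [synthetic_trial(rng, p300=True)  for _ in range(40)]
    irrelevant = [synthetic_trial(rng, p300=False) for _ in range(40)]
    print("probe window amplitude:     ", round(mean_window_amplitude(probe), 2))
    print("irrelevant window amplitude:", round(mean_window_amplitude(irrelevant), 2))
```

Averaging suppresses the background noise while preserving any stimulus-locked component, which is why a larger probe-versus-irrelevant difference is taken as evidence of recognition in such protocols.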
https://en.wikipedia.org/wiki/Neuroprivacy
The neuroprostanes are prostaglandin-like compounds formed in vivo from the free radical-catalyzed peroxidation of essential fatty acids (primarily docosahexaenoic acid) without the direct action of cyclooxygenase (COX) enzymes. The result is the formation of the isoprostane-like compounds F4-, D4-, E4-, A4-, and J4-neuroprostanes, which have been shown to be produced in vivo.[1] These oxygenated essential fatty acids possess potent biological activity as anti-inflammatory mediators, inhibiting the response of the human macrophages that augment the perception of pain.[2]
https://en.wikipedia.org/wiki/Neuroprostanes
Neuroproteomics is the study of the protein complexes and species that make up the nervous system. These proteins interact to connect neurons in ways that create the intricacies the nervous system is known for. Neuroproteomics is a complex field that has a long way to go in terms of profiling the entire neuronal proteome. It is a relatively recent field that has many applications in therapy and science. So far, only small subsets of the neuronal proteome have been mapped, and then only as applied to the proteins involved in the synapse. The word proteomics was first used in 1994 by Marc Wilkins for the study of "the protein equivalent of a genome".[1] A proteome is defined as all of the proteins expressed in a biological system under specific physiologic conditions at a certain point in time. It can change with any biochemical alteration, and so it can only be defined under certain conditions. Neuroproteomics is a subset of this field dealing with the complexities and multi-system origin of neurological disease. Neurological function is based on the interactions of many proteins of different origin, and so requires a systematic study of subsystems within its proteomic structure. Neuroproteomics has the difficult task of defining on a molecular level the pathways of consciousness, senses, and self. Neurological disorders are unique[citation needed] in that they do not always exhibit outward symptoms. Defining the disorders becomes difficult, and so neuroproteomics is a step toward identifying biomarkers that can be used to detect diseases. Not only does the field have to map out the different proteins possible from the genome, but there are also many modifications made after transcription and translation that affect function. Because neurons are such dynamic structures, changing with every action potential that travels through them, neuroproteomics offers the most potential for mapping out the molecular template of their function. Genomics offers a static roadmap of the cell, while proteomics can offer a glimpse into structures smaller than the cell because it is specific to each moment in time. In order for neuroproteomics to function correctly, proteins must be separated in terms of the proteome from which they came. For example, one set might come from normal conditions, while another might come from diseased conditions. Proteins are commonly separated using two-dimensional polyacrylamide gel electrophoresis (2D PAGE). For this technique, proteins are run across a gel with an immobilized pH gradient until they stop at the point where their net charge is neutral. After separating by charge in one direction, sodium dodecyl sulfate is run in the other direction to separate the proteins by size. A two-dimensional map is created using this technique that can be used to match additional proteins later. In simple proteomics, one can usually infer the function of a protein identified on a 2D PAGE map because many intracellular pathways are known. In neuroproteomics, however, many proteins combine to give an end result that may be neurological disease or breakdown. It is then necessary to study each protein individually and to find correlations between the different proteins to determine the cause of a neurological disease. New techniques are being developed that can identify proteins once they are separated out using 2D PAGE. Protein separation techniques such as 2D PAGE are limited, however, in that they cannot handle very high or low molecular weight protein species.
Alternative methods have been developed to deal with such cases. These include liquid chromatography mass spectrometry combined with sodium dodecyl sulfate polyacrylamide gel electrophoresis, or liquid chromatography mass spectrometry run in multiple dimensions. Compared to simple 2D PAGE, liquid chromatography mass spectrometry can handle a larger range of protein sizes, but it is limited in the amount of protein sample it can handle at once. Liquid chromatography mass spectrometry is also limited by its lack of a reference map to work from. Complex algorithms are usually used to analyze the fringe results that occur after a procedure is run. The unknown portions of the protein species are usually not analyzed, however, in favor of familiar proteomes. This fact reveals a fault with current technology; new techniques are needed to increase both the specificity and the scope of proteome mapping. It is commonly known that drug addiction involves permanent synaptic plasticity in various neuronal circuits. Neuroproteomics is being applied to study the effect of drug addiction across the synapse. Research is being conducted by isolating distinct regions of the brain in which synaptic transmission takes place and defining the proteome for each region. Different stages of drug abuse must be studied, however, in order to map out the progression of protein changes over the course of the addiction. These stages include enticement, ingestion, withdrawal, addiction, and removal. The work begins with the changes in gene transcription that occur due to the abuse of drugs. It continues by identifying the proteins most likely to be affected by the drugs and focusing on that area. For drug addiction, the synapse is the most likely target as it involves communication between neurons. Lack of sensory communication in neurons is often an outward sign of drug abuse, and so neuroproteomics is being applied to find out which proteins are affected in ways that prevent the transport of neurotransmitters. In particular, the vesicle-release process is being studied to identify the proteins involved in the synapse during drug abuse. Proteins such as synaptotagmin and synaptobrevin interact to fuse the vesicle with the membrane. Phosphorylation also involves its own set of proteins that work together to allow the synapse to function properly. Drugs such as morphine change properties such as cell adhesion, neurotransmitter volume, and synaptic traffic. After significant morphine application, tyrosine kinases receive less phosphorylation and thus send fewer signals inside the cell. These receptor proteins are then unable to initiate the intracellular signaling processes that enable the neuron to live, and necrosis or apoptosis may be the result. With more and more neurons affected along this chain of cell death, permanent loss of sensory or motor function may result. By identifying the proteins that are changed with drug abuse, neuroproteomics may give clinicians even earlier biomarkers to test for in order to prevent permanent neurological damage. Recently, a novel term, psychoproteomics, was coined by University of Florida researchers from Dr. Mark S. Gold's lab. Kobeissy et al. defined psychoproteomics as an integral proteomics approach dedicated to studying proteomic changes in the field of psychiatric disorders, particularly substance- and drug-abuse neurotoxicity.
Traumatic brain injury is defined as a "direct physical impact or trauma to the head followed by a dynamic series of injury and repair events".[2] Recently, neuroproteomics has been applied to studying this disability, with which over 5.4 million Americans live. In addition to physically injuring brain tissue, traumatic brain injury induces the release of glutamate that interacts with ionotropic glutamate receptors (iGluRs). These glutamate receptors acidify the surrounding intracranial fluid, causing further injury on the molecular level to nearby neurons. The death of the surrounding neurons is induced through normal apoptosis mechanisms, and it is this cycle that is being studied with neuroproteomics. Three different cysteine protease derivatives are involved in the apoptotic pathway induced by the acidic environment triggered by glutamate: calpain, caspase, and cathepsin. These three proteins are examples of detectable signs of traumatic brain injury that are much more specific than temperature, oxygen level, or intracranial pressure. Proteomics thus also offers a tracking mechanism by which researchers can monitor the progression of traumatic brain injury, or of a chronic disease such as Alzheimer's or Parkinson's. Especially in Parkinson's, in which neurotransmitters play a large role, recent proteomic research has involved the study of synaptotagmin. Synaptotagmin is involved in the calcium-induced budding of vesicles containing neurotransmitters from the presynaptic membrane. By studying the intracellular mechanisms involved in neural apoptosis after traumatic brain injury, researchers can create a map that genetic changes can follow later on. One group of researchers applied neuroproteomics to examine how different proteins affect the initial growth of neurites.[3] The experiment compared the protein activity of control neurons with the activity of neurons treated with nerve growth factor (NGF) and JNJ460, an "immunophilin ligand." JNJ460 is a derivative of another drug that is used to prevent immune attack when organs are transplanted. It is not an immunosuppressant, however, but rather acts as a shield against microglia. NGF promotes neuron viability and differentiation by binding to TrkA, a tyrosine receptor kinase. This receptor is important in initiating intracellular metabolic pathways, including Ras, Rak, and MAP kinase. Protein differentiation was measured in each cell sample with and without treatment by NGF and JNJ460. A peptide mixture was made by washing off unbound portions of the amino acid sequence in a reverse-phase column; the resulting mixture was then suspended in a bath of cation exchange fluid. The proteins were identified by digesting them with trypsin and searching the results of passing the products through a mass spectrometer. This applies a form of liquid chromatography mass spectrometry to identify the proteins in the mixture. JNJ460 treatment resulted in an increase in "signal transduction" proteins, while NGF resulted in an increase in proteins associated with the ribosome and the synthesis of other proteins. JNJ460 also resulted in more structural proteins associated with intercellular growth, such as actin, myosin, and troponin. With NGF treatment, cells increased protein synthesis and the creation of ribosomes. This method allows the analysis of overall protein patterns, rather than a single change in an amino acid.
Western blots confirmed the results, according to the researchers, though the changes in proteins were not as obvious in that protocol. The main significance of these findings is that JNJ460 and NGF act through distinct processes that both control the protein output of the cell. JNJ460 resulted in increased neuronal size and stability, while NGF resulted in increased membrane proteins. When combined, they significantly increase a neuron's chance of growth. While JNJ460 may "prime" some parts of the cell for NGF treatment, the two do not work together. JNJ460 is thought to interact with Schwann cells in regenerating actin and myosin, which are key players in axonal growth. NGF helps the neuron grow as a whole. These two factors do not play a part in communication with other neurons, however; they merely increase the size of the membrane down which a signal can be sent. Other neurotrophic factor proteomes are needed to guide neurons to each other to create synapses. The broad scope of the available raw neuronal proteins to map requires that initial studies be focused on small areas of the neurons. When taking samples, there are a few places that interest neurologists most. The most important place to start is the plasma membrane, where most of the communication between neurons takes place. The proteins being mapped here include ion channels, neurotransmitter receptors, and molecule transporters. Along the plasma membrane, the proteins involved in creating cholesterol-rich lipid rafts are being studied because they have been shown to be crucial for glutamate uptake during the initial stages of neuron formation. As mentioned before, vesicle proteins are also being studied closely because they are involved in disease. Collecting samples to study, however, requires special consideration to ensure that the reproducibility of the samples is not compromised. When taking a global sample of one area of the brain, for example, proteins that are ubiquitous and relatively unimportant show up very clearly in the SDS PAGE, while other unexplored, more specific proteins barely show up and are therefore ignored. It is usually necessary to divide the plasma membrane proteome, for example, into subproteomes characterized by specific functions. This allows these more specific classes of peptides to show up more clearly. In a way, dividing into subproteomes is simply applying a magnifying lens to a specific section of a global proteome's SDS PAGE map. This method seems to be most effective when applied to each cellular organelle separately. Mitochondrial proteins, for example, which are especially effective at transporting electrons across their membranes, can be specifically targeted in order to match their electron-transporting ability to their amino acid sequences.
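To make the trypsin-digestion and mass-spectrometry identification step described above more concrete, the Python sketch below performs a purely in-silico digest and computes approximate peptide masses of the kind a search engine would compare against observed spectra. The protein sequence is invented and the residue masses are rounded, so this is an illustrative sketch rather than the algorithm of any specific tool.

```python
# Sketch of the in-silico step behind trypsin/LC-MS protein identification:
# cut a protein after K or R (except before P) and compute approximate peptide
# masses that could be matched against a mass spectrum. The protein sequence is
# invented; residue masses are approximate monoisotopic values in daltons.
import re

RESIDUE_MASS = {
    "G": 57.02, "A": 71.04, "S": 87.03, "P": 97.05, "V": 99.07,
    "T": 101.05, "C": 103.01, "L": 113.08, "I": 113.08, "N": 114.04,
    "D": 115.03, "Q": 128.06, "K": 128.09, "E": 129.04, "M": 131.04,
    "H": 137.06, "F": 147.07, "R": 156.10, "Y": 163.06, "W": 186.08,
}
WATER = 18.01  # added once per peptide for the terminal H and OH

def tryptic_digest(sequence: str):
    """Split after K or R unless the next residue is P (the usual trypsin rule)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", sequence) if p]

def peptide_mass(peptide: str) -> float:
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

if __name__ == "__main__":
    protein = "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFK"   # invented example sequence
    for pep in tryptic_digest(protein):
        print(f"{pep:>20s}  {peptide_mass(pep):8.2f} Da")
```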
https://en.wikipedia.org/wiki/Neuroproteomics
Neuroscience is the scientific study of the nervous system (the brain , spinal cord , and peripheral nervous system ), its functions, and its disorders. [ 1 ] [ 2 ] [ 3 ] It is a multidisciplinary science that combines physiology , anatomy , molecular biology , developmental biology , cytology , psychology , physics , computer science , chemistry , medicine , statistics , and mathematical modeling to understand the fundamental and emergent properties of neurons , glia and neural circuits . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] The understanding of the biological basis of learning , memory , behavior , perception , and consciousness has been described by Eric Kandel as the "epic challenge" of the biological sciences . [ 9 ] The scope of neuroscience has broadened over time to include different approaches used to study the nervous system at different scales. The techniques used by neuroscientists have expanded enormously, from molecular and cellular studies of individual neurons to imaging of sensory , motor and cognitive tasks in the brain. The earliest study of the nervous system dates to ancient Egypt . Trepanation , the surgical practice of either drilling or scraping a hole into the skull for the purpose of curing head injuries or mental disorders , or relieving cranial pressure, was first recorded during the Neolithic period. Manuscripts dating to 1700 BC indicate that the Egyptians had some knowledge about symptoms of brain damage . [ 10 ] Early views on the function of the brain regarded it to be a "cranial stuffing" of sorts. In Egypt , from the late Middle Kingdom onwards, the brain was regularly removed in preparation for mummification . It was believed at the time that the heart was the seat of intelligence. According to Herodotus , the first step of mummification was to "take a crooked piece of iron, and with it draw out the brain through the nostrils, thus getting rid of a portion, while the skull is cleared of the rest by rinsing with drugs." [ 11 ] The view that the heart was the source of consciousness was not challenged until the time of the Greek physician Hippocrates . He believed that the brain was not only involved with sensation—since most specialized organs (e.g., eyes, ears, tongue) are located in the head near the brain—but was also the seat of intelligence. [ 12 ] Plato also speculated that the brain was the seat of the rational part of the soul. [ 13 ] Aristotle , however, believed the heart was the center of intelligence and that the brain regulated the amount of heat from the heart. [ 14 ] This view was generally accepted until the Roman physician Galen , a follower of Hippocrates and physician to Roman gladiators , observed that his patients lost their mental faculties when they had sustained damage to their brains. [ 15 ] Abulcasis , Averroes , Avicenna , Avenzoar , and Maimonides , active in the Medieval Muslim world, described a number of medical problems related to the brain. In Renaissance Europe , Vesalius (1514–1564), René Descartes (1596–1650), Thomas Willis (1621–1675) and Jan Swammerdam (1637–1680) also made several contributions to neuroscience. Luigi Galvani 's pioneering work in the late 1700s set the stage for studying the electrical excitability of muscles and neurons. In 1843 Emil du Bois-Reymond demonstrated the electrical nature of the nerve signal, [ 16 ] whose speed Hermann von Helmholtz proceeded to measure, [ 17 ] and in 1875 Richard Caton found electrical phenomena in the cerebral hemispheres of rabbits and monkeys. 
[18] Adolf Beck published in 1890 similar observations of spontaneous electrical activity of the brain of rabbits and dogs.[19] Studies of the brain became more sophisticated after the invention of the microscope and the development of a staining procedure by Camillo Golgi during the late 1890s. The procedure used a silver chromate salt to reveal the intricate structures of individual neurons. His technique was used by Santiago Ramón y Cajal and led to the formation of the neuron doctrine, the hypothesis that the functional unit of the brain is the neuron.[20] Golgi and Ramón y Cajal shared the Nobel Prize in Physiology or Medicine in 1906 for their extensive observations, descriptions, and categorizations of neurons throughout the brain. In parallel with this research, in 1815 Jean Pierre Flourens induced localized lesions of the brain in living animals to observe their effects on motricity, sensibility and behavior. Work with brain-damaged patients by Marc Dax in 1836 and Paul Broca in 1865 suggested that certain regions of the brain were responsible for certain functions. At the time, these findings were seen as a confirmation of Franz Joseph Gall's theory that language was localized and that certain psychological functions were localized in specific areas of the cerebral cortex.[21][22] The localization of function hypothesis was supported by observations of epileptic patients conducted by John Hughlings Jackson, who correctly inferred the organization of the motor cortex by watching the progression of seizures through the body. Carl Wernicke further developed the theory of the specialization of specific brain structures in language comprehension and production. Modern research through neuroimaging techniques still uses the Brodmann cerebral cytoarchitectonic map (cytoarchitecture referring to the study of cell structure), a set of anatomical definitions from this era, and continues to show that distinct areas of the cortex are activated in the execution of specific tasks.[23] During the 20th century, neuroscience began to be recognized as a distinct academic discipline in its own right, rather than as studies of the nervous system within other disciplines. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field.[24] Rioch originated the integration of basic anatomical and physiological research with clinical psychiatry at the Walter Reed Army Institute of Research, starting in the 1950s. During the same period, Schmitt established a neuroscience research program within the Biology Department at the Massachusetts Institute of Technology, bringing together biology, chemistry, physics, and mathematics. The first freestanding neuroscience department (then called Psychobiology) was founded in 1964 at the University of California, Irvine by James L. McGaugh.[25] This was followed by the Department of Neurobiology at Harvard Medical School, which was founded in 1966 by Stephen Kuffler.[26] In the process of treating epilepsy, Wilder Penfield produced maps of the location of various functions (motor, sensory, memory, vision) in the brain.[27][28] He summarized his findings in a 1950 book called The Cerebral Cortex of Man.[29] Wilder Penfield and his co-investigators Edwin Boldrey and Theodore Rasmussen are considered to be the originators of the cortical homunculus.[30]
The understanding of neurons and of nervous system function became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model for the transmission of electrical signals in neurons of the giant axon of a squid, which they called "action potentials", and for how they are initiated and propagated, known as the Hodgkin–Huxley model. In 1961–1962, Richard FitzHugh and J. Nagumo simplified Hodgkin–Huxley, in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981 Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation. As a result of the increasing interest in the nervous system, several prominent neuroscience organizations were formed during the 20th century to provide a forum for all neuroscientists. For example, the International Brain Research Organization was founded in 1961,[31] the International Society for Neurochemistry in 1963,[32] the European Brain and Behaviour Society in 1968,[33] and the Society for Neuroscience in 1969.[34] Recently, the application of neuroscience research results has also given rise to applied disciplines such as neuroeconomics,[35] neuroeducation,[36] neuroethics,[37] and neurolaw.[38] Over time, brain research has gone through philosophical, experimental, and theoretical phases, with work on neural implants and brain simulation predicted to be important in the future.[39] The scientific study of the nervous system increased significantly during the second half of the twentieth century, principally due to advances in molecular biology, electrophysiology, and computational neuroscience. This has allowed neuroscientists to study the nervous system in all its aspects: how it is structured, how it works, how it develops, how it malfunctions, and how it can be changed. For example, it has become possible to understand, in much detail, the complex processes occurring within a single neuron. Neurons are cells specialized for communication. They are able to communicate with neurons and other cell types through specialized junctions called synapses, at which electrical or electrochemical signals can be transmitted from one cell to another. Many neurons extrude a long thin filament of axoplasm called an axon, which may extend to distant parts of the body and is capable of rapidly carrying electrical signals, influencing the activity of other neurons, muscles, or glands at its termination points. A nervous system emerges from the assemblage of neurons that are connected to each other in neural circuits and networks. The vertebrate nervous system can be split into two parts: the central nervous system (defined as the brain and spinal cord), and the peripheral nervous system. In many species—including all vertebrates—the nervous system is the most complex organ system in the body, with most of the complexity residing in the brain.
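The simplified biological neuron models mentioned above can be explored numerically in a few lines of code. The Python sketch below integrates the two FitzHugh–Nagumo equations with a forward Euler scheme; the parameter values are a common textbook choice and the spike-counting threshold is an illustrative assumption rather than part of the model.

```python
# Minimal numerical sketch of the FitzHugh-Nagumo simplification of the
# Hodgkin-Huxley model, integrated with the forward Euler method.
# Parameter values are a common textbook choice, used here for illustration only.
import numpy as np

def fitzhugh_nagumo(i_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, t_max=200.0):
    """Integrate dv/dt = v - v^3/3 - w + I,  dw/dt = (v + a - b*w)/tau."""
    steps = int(t_max / dt)
    v = np.empty(steps)
    w = np.empty(steps)
    v[0], w[0] = -1.0, 1.0                      # arbitrary initial state
    for k in range(steps - 1):
        dv = v[k] - v[k] ** 3 / 3.0 - w[k] + i_ext
        dw = (v[k] + a - b * w[k]) / tau
        v[k + 1] = v[k] + dt * dv
        w[k + 1] = w[k] + dt * dw
    return np.arange(steps) * dt, v, w

if __name__ == "__main__":
    t, v, w = fitzhugh_nagumo()
    # With this constant drive the model fires repetitively; count upward
    # threshold crossings of the fast variable as "spikes".
    spikes = np.sum((v[1:] > 1.0) & (v[:-1] <= 1.0))
    print(f"{spikes} spike-like excursions in {t[-1]:.0f} time units")
```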
The human brain alone contains around one hundred billion neurons and one hundred trillion synapses; it consists of thousands of distinguishable substructures, connected to each other in synaptic networks whose intricacies have only begun to be unraveled. At least one out of three of the approximately 20,000 genes belonging to the human genome is expressed mainly in the brain. [ 40 ] Due to the high degree of plasticity of the human brain, the structure of its synapses and their resulting functions change throughout life. [ 41 ] Making sense of the nervous system's dynamic complexity is a formidable research challenge. Ultimately, neuroscientists would like to understand every aspect of the nervous system, including how it works, how it develops, how it malfunctions, and how it can be altered or repaired. Analysis of the nervous system is therefore performed at multiple levels, ranging from the molecular and cellular levels to the systems and cognitive levels. The specific topics that form the main focus of research change over time, driven by an ever-expanding base of knowledge and the availability of increasingly sophisticated technical methods. Improvements in technology have been the primary drivers of progress: developments in electron microscopy, computer science, electronics, functional neuroimaging, and genetics and genomics have all played major roles. Advances in the classification of brain cells have been enabled by electrophysiological recording, single-cell genetic sequencing, and high-quality microscopy, which have been combined into a single method pipeline called patch-sequencing, in which all three methods are simultaneously applied using miniature tools. [ 42 ] The efficiency of this method and the large amounts of data it generates have allowed researchers to draw some general conclusions about cell types; for example, that the human and mouse brains have different versions of fundamentally the same cell types. [ 43 ] Basic questions addressed in molecular neuroscience include the mechanisms by which neurons express and respond to molecular signals and how axons form complex connectivity patterns. At this level, tools from molecular biology and genetics are used to understand how neurons develop and how genetic changes affect biological functions. [ 44 ] The morphology, molecular identity, and physiological characteristics of neurons and how they relate to different types of behavior are also of considerable interest. [ 45 ] Questions addressed in cellular neuroscience include the mechanisms of how neurons process signals physiologically and electrochemically. These questions include how signals are processed by neurites and somas and how neurotransmitters and electrical signals are used to process information in a neuron. Neurites are thin extensions from a neuronal cell body, consisting of dendrites (specialized to receive synaptic inputs from other neurons) and axons (specialized to conduct nerve impulses called action potentials). Somas are the cell bodies of the neurons and contain the nucleus. [ 46 ] Another major area of cellular neuroscience is the investigation of the development of the nervous system. [ 47 ] Questions include the patterning and regionalization of the nervous system, axonal and dendritic development, trophic interactions, synapse formation and the implication of fractones in neural stem cells, [ 48 ] [ 49 ] differentiation of neurons and glia (neurogenesis and gliogenesis), and neuronal migration. [ 50 ]
Computational neurogenetic modeling (CNGM) is concerned with the development of dynamic neuronal models for modeling brain functions at the cellular level with respect to genes and the dynamic interactions between genes; such models can also be used to model larger neural systems. [ 51 ] Systems neuroscience research centers on the structural and functional architecture of the developing human brain, and the functions of large-scale brain networks, or functionally connected systems within the brain. Alongside brain development, systems neuroscience also focuses on how the structure and function of the brain enables or restricts the processing of sensory information, using learned mental models of the world, to motivate behavior. Questions in systems neuroscience include how neural circuits are formed and used anatomically and physiologically to produce functions such as reflexes, multisensory integration, motor coordination, circadian rhythms, emotional responses, learning, and memory. [ 52 ] In other words, this area of research studies how connections are made and modified in the brain, and the effect this has on human sensation, movement, attention, inhibitory control, decision-making, reasoning, memory formation, reward, and emotion regulation. [ 53 ] Specific areas of interest for the field include observations of how the structure of neural circuits affects skill acquisition, how specialized regions of the brain develop and change (neuroplasticity), and the development of brain atlases, or wiring diagrams of individual developing brains. [ 54 ] The related fields of neuroethology and neuropsychology address the question of how neural substrates underlie specific animal and human behaviors. [ 55 ] Neuroendocrinology and psychoneuroimmunology examine interactions between the nervous system and the endocrine and immune systems, respectively. [ 56 ] Despite many advancements, the way that networks of neurons perform complex cognitive processes and behaviors is still poorly understood. [ 57 ] Cognitive neuroscience addresses the questions of how psychological functions are produced by neural circuitry. The emergence of powerful new measurement techniques such as neuroimaging (e.g., fMRI, PET, SPECT), EEG, MEG, electrophysiology, optogenetics and human genetic analysis, combined with sophisticated experimental techniques from cognitive psychology, allows neuroscientists and psychologists to address abstract questions such as how cognition and emotion are mapped to specific neural substrates. Although many studies hold a reductionist stance looking for the neurobiological basis of cognitive phenomena, recent research shows that there is an interplay between neuroscientific findings and conceptual research, soliciting and integrating both perspectives. For example, neuroscience research on empathy solicited an interdisciplinary debate involving philosophy, psychology and psychopathology. [ 58 ] Moreover, the neuroscientific identification of multiple memory systems related to different brain areas has challenged the idea of memory as a literal reproduction of the past, supporting a view of memory as a generative, constructive and dynamic process. [ 59 ] Neuroscience is also allied with the social and behavioral sciences, as well as with nascent interdisciplinary fields. Examples of such alliances include neuroeconomics, decision theory, social neuroscience, and neuromarketing, which address complex questions about the interactions of the brain with its environment.
A study into consumer responses, for example, uses EEG to investigate neural correlates associated with narrative transportation into stories about energy efficiency. [ 60 ] Questions in computational neuroscience can span a wide range of levels of traditional analysis, such as the development, structure, and cognitive functions of the brain. Research in this field utilizes mathematical models, theoretical analysis, and computer simulation to describe and verify biologically plausible neurons and nervous systems. For example, biological neuron models are mathematical descriptions of spiking neurons which can be used to describe both the behavior of single neurons and the dynamics of neural networks. Computational neuroscience is often referred to as theoretical neuroscience. Neurology, psychiatry, neurosurgery, psychosurgery, anesthesiology and pain medicine, neuropathology, neuroradiology, ophthalmology, otolaryngology, clinical neurophysiology, addiction medicine, and sleep medicine are some medical specialties that specifically address the diseases of the nervous system. These terms also refer to clinical disciplines involving diagnosis and treatment of these diseases. [ 61 ] Neurology works with diseases of the central and peripheral nervous systems, such as amyotrophic lateral sclerosis (ALS) and stroke, and their medical treatment. Psychiatry focuses on affective, behavioral, cognitive, and perceptual disorders. Anesthesiology focuses on perception of pain, and pharmacologic alteration of consciousness. Neuropathology focuses upon the classification and underlying pathogenic mechanisms of central and peripheral nervous system and muscle diseases, with an emphasis on morphologic, microscopic, and chemically observable alterations. Neurosurgery and psychosurgery work primarily with surgical treatment of diseases of the central and peripheral nervous systems. [ 62 ] Neuroscience underlies the development of various neurotherapy methods to treat diseases of the nervous system. [ 63 ] [ 64 ] [ 65 ] Recently, the boundaries between various specialties have blurred, as they are all influenced by basic research in neuroscience. For example, brain imaging enables objective biological insight into mental illnesses, which can lead to faster diagnosis, more accurate prognosis, and improved monitoring of patient progress over time. [ 66 ] Integrative neuroscience describes the effort to combine models and information from multiple levels of research to develop a coherent model of the nervous system. For example, brain imaging coupled with physiological numerical models and theories of fundamental mechanisms may shed light on psychiatric disorders. [ 67 ] Another important area of translational research is brain–computer interfaces (BCIs), or machines that are able to communicate and influence the brain. They are currently being researched for their potential to repair neural systems and restore certain cognitive functions. [ 68 ] However, some ethical considerations have to be dealt with before they are accepted. [ 69 ] [ 70 ] Modern neuroscience education and research activities can be very roughly categorized into a number of major branches, based on the subject and scale of the system under examination as well as distinct experimental or curricular approaches. [ 97 ] Individual neuroscientists, however, often work on questions that span several distinct subfields.
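As a concrete illustration of the biological neuron models described above, the following is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models used in computational neuroscience. All parameter values are illustrative assumptions rather than values taken from any particular study.

```python
# Minimal sketch of a leaky integrate-and-fire neuron, one of the simplest
# biological neuron models of the kind described above. All parameters are
# illustrative assumptions, not values from any particular study.
tau_m = 20.0       # membrane time constant (ms)
v_rest = -65.0     # resting potential (mV)
v_thresh = -50.0   # spike threshold (mV)
v_reset = -65.0    # reset potential after a spike (mV)
drive = 18.0       # steady depolarising input, expressed in mV
dt, t_max = 0.1, 200.0

v, spike_times = v_rest, []
for step in range(int(t_max / dt)):
    # Membrane potential decays toward rest and is pushed up by the input
    v += dt * (-(v - v_rest) + drive) / tau_m
    if v >= v_thresh:                  # threshold crossing: emit spike, reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {t_max:.0f} ms")
```

Models of this kind can be coupled together through simulated synapses to explore the dynamics of larger networks.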
The largest professional neuroscience organization is the Society for Neuroscience (SFN), which is based in the United States but includes many members from other countries. Since its founding in 1969 the SFN has grown steadily: as of 2010 it recorded 40,290 members from 83 countries. [ 98 ] Annual meetings, held each year in a different American city, draw attendance from researchers, postdoctoral fellows, graduate students, and undergraduates, as well as educational institutions, funding agencies, publishers, and hundreds of businesses that supply products used in research. Other major organizations devoted to neuroscience include the International Brain Research Organization (IBRO), which holds its meetings in a country from a different part of the world each year, and the Federation of European Neuroscience Societies (FENS), which holds a meeting in a different European city every two years. FENS comprises a set of 32 national-level organizations, including the British Neuroscience Association, the German Neuroscience Society (Neurowissenschaftliche Gesellschaft), and the French Société des Neurosciences. [ 99 ] The first National Honor Society in Neuroscience, Nu Rho Psi, was founded in 2006. Numerous youth neuroscience societies that support undergraduates, graduate students and early-career researchers also exist, such as Simply Neuroscience [ 100 ] and Project Encephalon. [ 101 ] In 2013, the BRAIN Initiative was announced in the US. The International Brain Initiative [ 102 ] was created in 2017, [ 103 ] currently comprising more than seven national-level brain research initiatives (US, Europe, Allen Institute, Japan, China, Australia, [ 104 ] Canada, [ 105 ] Korea, [ 106 ] and Israel [ 107 ]) [ 108 ] spanning four continents. In addition to conducting traditional research in laboratory settings, neuroscientists have also been involved in the promotion of awareness and knowledge about the nervous system among the general public and government officials. Such promotions have been done by both individual neuroscientists and large organizations. For example, individual neuroscientists have promoted neuroscience education among young students by organizing the International Brain Bee, which is an academic competition for high school or secondary school students worldwide. [ 109 ] In the United States, large organizations such as the Society for Neuroscience have promoted neuroscience education by developing a primer called Brain Facts, [ 110 ] collaborating with public school teachers to develop Neuroscience Core Concepts for K-12 teachers and students, [ 111 ] and cosponsoring a campaign with the Dana Foundation called Brain Awareness Week to increase public awareness about the progress and benefits of brain research. [ 112 ] In Canada, the Canadian Institutes of Health Research's (CIHR) Canadian National Brain Bee is held annually at McMaster University. [ 113 ] Neuroscience educators formed a Faculty for Undergraduate Neuroscience (FUN) in 1992 to share best practices and provide travel awards for undergraduates presenting at Society for Neuroscience meetings. [ 114 ] Neuroscientists have also collaborated with other education experts to study and refine educational techniques to optimize learning among students, an emerging field called educational neuroscience. [ 115 ]
Federal agencies in the United States, such as the National Institutes of Health (NIH) [ 116 ] and the National Science Foundation (NSF), [ 117 ] have also funded research that pertains to best practices in teaching and learning of neuroscience concepts. Neuromorphic engineering is a branch of neuroscience that deals with creating functional physical models of neurons for the purposes of useful computation. The emergent computational properties of neuromorphic computers are fundamentally different from those of conventional computers in the sense that they are complex systems, and that the computational components are interrelated with no central processor. [ 118 ] One example of such a computer is the SpiNNaker supercomputer. [ 119 ] Sensors can also be made smart with neuromorphic technology; an example of this is the event camera. BrainScaleS (brain-inspired multiscale computation in neuromorphic hybrid systems) is a hybrid analog neuromorphic supercomputer located at Heidelberg University in Germany. It was developed as part of the Human Brain Project's neuromorphic computing platform and is the complement to the SpiNNaker supercomputer, which is based on digital technology. The architecture used in BrainScaleS mimics biological neurons and their connections on a physical level; additionally, since the components are made of silicon, these model neurons operate on average 864 times faster than their biological counterparts (24 hours of real time corresponds to 100 seconds in the machine simulation). [ 120 ] Recent advances in neuromorphic microchip technology have led a group of scientists to create an artificial neuron that could replace real neurons in disease. [ 121 ] [ 122 ]
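The quoted speed-up factor for BrainScaleS can be checked directly from the figures given (24 hours of biological real time corresponding to 100 seconds of machine time):

```python
# Quick check of the acceleration factor quoted above for BrainScaleS:
# 24 hours of biological real time correspond to 100 seconds of machine time.
real_time_s = 24 * 60 * 60           # 86,400 seconds
machine_time_s = 100
print(real_time_s / machine_time_s)  # 864.0
```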
https://en.wikipedia.org/wiki/Neuroscience
The neuroscience of aging is the study of the changes in the nervous system that occur with aging. Aging is associated with many changes in the central nervous system, such as mild atrophy of the cortex, which is considered non-pathological. Aging is also associated with many neurological and neurodegenerative diseases, such as amyotrophic lateral sclerosis, dementia, mild cognitive impairment, Parkinson's disease, and Creutzfeldt–Jakob disease. [ 1 ] Neurogenesis occurs very little in adults, taking place only to a small extent in the hypothalamus and striatum, in a process called adult neurogenesis. Environmental enrichment, physical activity and stress (which can stimulate or hinder this process) are key environmental and physiological factors affecting adult neurogenesis. [ 2 ] An enriched environment is characterized by sensory stimulation, social interaction, and cognitive challenge. [ 3 ] Exercise has frequently been found to increase the proliferation of neuronal precursor cells and to counteract age-related declines in neurogenesis. Brain volume decreases by roughly 5% per decade after the age of forty. It is currently unclear why brain volume decreases with age; however, possible causes include cell death, decreased cell volume, and changes in synaptic structure. The changes in brain volume are heterogeneous across regions, with the prefrontal cortex showing the most significant reduction in volume, followed in order by the striatum, the temporal lobe, the cerebellar vermis, the cerebellar hemispheres, and the hippocampus. [ 4 ] However, one review found that the amygdala and ventromedial prefrontal cortex remained relatively free of atrophy, consistent with the finding of emotional stability occurring with non-pathological aging. [ 5 ] Enlargement of the ventricles, sulci and fissures is common in non-pathological aging. [ 6 ] Changes may also be associated with neuroplasticity, synaptic functionality and voltage-gated calcium channels. [ 7 ] Increased hyperpolarization, possibly due to dysfunctional calcium regulation, decreases neuron firing rate and plasticity. This effect is particularly pronounced in the hippocampus of aged animals and may be an important contributor to age-associated memory deficits. The hyperpolarization of a neuron can be divided into three stages: fast, medium, and slow hyperpolarization. In aged neurons, the medium and slow hyperpolarization phases involve the prolonged opening of calcium-dependent potassium channels. The prolongation of these phases has been hypothesized to result from deregulated calcium and hypoactivity of cholinergic, dopaminergic, serotonergic and glutamatergic pathways. [ 8 ] Episodic memory (remembering specific events) declines gradually from middle age, while semantic memory (general knowledge and facts) increases into early old age and then declines thereafter. [ 4 ] Older adults can exhibit reduced activity in specific brain regions during cognitive tasks, particularly in medial temporal areas related to memory processing. On the other hand, other brain areas, mainly in the prefrontal cortex, can be over-recruited during memory-related tasks. [ 9 ] Older adults also tend to engage their prefrontal cortex more often during working memory tasks, possibly to compensate for declining executive function. Further impairments of cognitive function associated with aging include decreased processing speed and a reduced ability to focus.
A model proposed to account for altered activation posits that decreased neural efficiency, driven by amyloid plaques and decreased dopamine functionality, leads to compensatory activation. [ 10 ] Decreased processing of negative stimuli, as opposed to positive stimuli, appears in aging and becomes significant enough to detect even with autonomic nervous responses to emotionally charged stimuli. [ 11 ] Aging is also associated with decreased plantar reflex and Achilles reflex response. Nerve conduction also slows during normal aging. [ 12 ] DNA damage is a major risk factor in neurodegenerative diseases and in the decline of neuronal function with age. [ 13 ] Certain genes of the human frontal cortex display reduced transcriptional expression after age 40, especially after age 70. [ 14 ] In particular, genes with central roles in synaptic plasticity display reduced expression with age. The promoters of genes with reduced expression in the cortex of older individuals have a marked increase in DNA damage, likely oxidative DNA damage. [ 14 ] Roughly 20% of persons over 60 years of age have a neurological disorder, with episodic disorders being the most common, followed by extrapyramidal movement disorders and nerve disorders. [ 15 ] Several neurodegenerative diseases are commonly associated with old age. The misfolding of proteins is a common component of the proposed pathophysiology of many aging-related diseases; however, there is insufficient evidence to prove this. For example, the tau hypothesis for Alzheimer's proposes that tau protein accumulation results in the breakdown of neuron cytoskeletons, leading to Alzheimer's. [ 25 ] Another proposed mechanism for Alzheimer's is related to the accumulation of amyloid beta, [ 26 ] in a mechanism similar to the prion propagation of Creutzfeldt–Jakob disease. Until a recent study, tau proteins were believed to be the precursors of Alzheimer's, acting in combination with amyloid beta. [ 27 ] Similarly, the protein alpha-synuclein is hypothesized to accumulate in Parkinson's and related diseases. [ 28 ] Treatments with anticancer chemotherapeutic agents are often toxic to the cells of the brain, leading to memory loss and cognitive dysfunction that can persist long after the period of exposure. This condition, termed chemo brain, appears to be due to DNA damage that causes epigenetic changes in the brain, accelerating the brain aging process. [ 29 ] Treatment of an age-related neurological disease varies from disease to disease. Modifiable risk factors for dementia include diabetes, hypertension, smoking, hyperhomocysteinemia, hypercholesterolemia, and obesity (which are usually associated with many other risk factors for dementia). Paradoxically, drinking and smoking confer protection against Parkinson's disease. [ 30 ] [ 31 ] Consumption of coffee or caffeine also confers protective benefits against age-related neurological disease. [ 32 ] [ 33 ] [ 34 ] Consumption of fruits, fish and vegetables confers protection against dementia, as does a Mediterranean diet. [ 35 ] In animal experiments, long-term calorie restriction was found to help reduce oxidative DNA damage. [ 36 ] Physical exercise significantly lowers the risk of cognitive decline in old age [ 37 ] and is an effective treatment for those with dementia [ 38 ] [ 39 ] and Parkinson's disease. [ 40 ] [ 41 ] [ 42 ] [ 43 ]
https://en.wikipedia.org/wiki/Neuroscience_of_aging
Neurosexism is an alleged bias in the neuroscience of sex differences towards reinforcing harmful gender stereotypes. The term was coined by feminist scholar Cordelia Fine in a 2008 article [ 1 ] and popularised by her 2010 book Delusions of Gender. [ 2 ] [ 3 ] [ 4 ] The concept is now widely used by critics of the neuroscience of sex differences in neuroscience, neuroethics and philosophy. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Neuroscientist Gina Rippon defines neurosexism as follows: "'Neurosexism' is the practice of claiming that there are fixed differences between female and male brains, which can explain women's inferiority or unsuitability for certain roles." [ 5 ] For example, "this includes things such as men being more logical and women being better at languages or nurturing." [ 5 ] Fine and Rippon, along with Daphna Joel, state that "the point of critical enquiry is not to deny differences between the sexes, but to ensure a full understanding of the findings and meaning of any particular report." [ 9 ] Many of the issues they discuss to support their position are "serious issues for all areas of behavioral research", but they argue that "in sex/gender differences research... they are often particularly acute." [ 9 ] Nonetheless, the common factor influencing logical maturity in both males and females is the maturation of the frontal cortex, which is not complete until the age of 25 at the earliest. [ 10 ] The topic of neurosexism is thus closely tied to wider debates about scientific methodology, especially in the behavioral sciences. [ citation needed ] The history of science contains many examples of scientists and philosophers drawing conclusions about the mental inferiority of women, or their lack of aptitude for certain tasks, on the basis of alleged anatomical differences between male and female brains. [ 2 ] In the late 19th century, George J. Romanes used the difference in average brain weight between men and women to explain the "marked inferiority of intellectual power" of the latter. [ 11 ] Absent a sexist background assumption about male superiority, there would be nothing to explain here. Despite these historical pseudo-scientific studies, Becker et al. [ 12 ] argue that "for decades" the scientific community has abstained from studying sex differences. Larry Cahill [ 13 ] asserts that today there is a widely held belief in the scientific community that sex differences do not matter to large parts of biology and neuroscience, apart from explaining reproduction and the workings of reproductive hormones. Although overtly sexist statements may no longer have a place within the scientific community, Cordelia Fine, Gina Rippon and Daphna Joel contend that similar patterns of reasoning still exist. They claim that many researchers who make claims about gendered brain differences fail to provide sufficient warrant for their position. Philosophers of science who believe in a value-free normative standard for science find the practice of neurosexism particularly problematic. They hold that science should be free from values and biases, and argue that only epistemic values have a legitimate role to play in scientific inquiry. However, contrary to the value-free ideal view, Heather Douglas argues that "value-free science is inadequate science". [ 14 ] Contemporary research continues in a more subtle vein through Prenatal Hormone Theory.
According to the Prenatal Hormone Theory, "male and female foetuses differ in testosterone concentrations beginning as early as week 8 of gestation [and] the early hormone difference exerts permanent influences on brain development and behaviour." [ 15 ] Charges of neurosexism may then be moved against the PHT if these alleged hormonal differences are interpreted as causing the male/female brain distinction and in turn are used to reinforce stereotypical behaviours and gender roles. [ 15 ] The notion that there are hard-wired differences between male and female brains is particularly explicit in Simon Baron-Cohen's empathising-systematising (E-S) theory. Empathy is defined as the drive to identify and respond appropriately to emotions and thoughts in others, and systematising is defined as the drive to analyse and explore a system, isolate the underlying rules that govern the behaviour of that system, and build new systems. [ 16 ] These two characteristics can be seen amongst young girls and boys. Girls have a tendency to play with baby dolls when they are young, enacting their social and emotional skills. Boys tend to play with plastic cars, illustrating a more mechanical, system-driven mind. This may, of course, be due simply to the environment and to social norms. However, the empathising-systematising theory posits three broad brain types, or organisation structures: type E, the empathiser; type S, the systematiser; and type B, the 'balanced brain'. Given that females are twice as likely to display brain type E, and males are twice as likely to display brain type S, [ 17 ] Baron-Cohen labels these brain 'types' the 'female brain' and the 'male brain', respectively. This type of analysis therefore suggests that most (or at least some) differences in skills and occupation between males and females can be explained by their having different brain structures. Baron-Cohen's theory has been criticised because it presents a clear-cut dichotomy between male and female brains, whilst this is not necessarily the case: there are females with 'male brains', and males with 'female brains'. Using the gendered labels makes it significantly more likely that evidence of gendered brain differences will be over-stated in the media, in a way that might actively shape gender norms within society. [ 18 ] In Delusions of Gender, Cordelia Fine criticises work by Ruben and Raquel Gur and collaborators. [ 19 ] In the context of explaining the under-representation of women in science and mathematics, [ 20 ] she quotes them as claiming that "the greater facility of women with interhemispheric communications may attract them to disciplines that require integration rather than detailed scrutiny of narrowly characterized processes." [ 21 ] This contention is, however, corroborated by a 2014 study [ 22 ] of the structural connectome. The study of 949 youths reported novel sex differences, the key finding being that male brains are optimised for intrahemispheric communication, while female brains are optimised for interhemispheric communication. Furthermore, the developmental timeframes of male and female brains are vastly different. However, this study used youths from age 8 to 22, a period during which the brain is still developing, so the results may not be conclusive. In a 1999 study, Gur et al. found a link between the amount of white matter in a person's brain and their performance on spatial tasks. [ 23 ]
Fine points out that ten people is a small sample, and that the researchers tested for thirty-six different relationships in this sample. [ 23 ] Fine argues that results like these should be treated with caution, because, given the sample size and the number of relationships tested, the correlation found between white matter volume and performance in the tasks could be a false positive. Fine accuses the researchers of downplaying the risk of a false positive after conducting many statistical analyses of past research projects; she argues that using their results as the basis of an explanation for why women are under-represented in scientific fields is inadequate. [ 19 ] Fine also discusses a 2004 neuroimaging study by neuroscientist Sandra Witelson and collaborators. [ 24 ] This study was taken to support sex differences in emotional processing by Allan and Barbara Pease in their book Why Men Don't Listen and Women Can't Read Maps [ 25 ] and by Susan Pinker in her book The Sexual Paradox. [ 26 ] Fine argues that, with a sample size of just 16, the results could easily have been false positives. She compares the study to a famous 2009 study in which, to illustrate the risk of false positives in neuroimaging research, researchers showed increased brain activity in a dead salmon during a perspective-taking task. [ 27 ] [ 19 ] A notable dispute in 2010 between Fine and neuroscientist Simon Baron-Cohen in The Psychologist magazine centred on a study into sex differences in the responses of newborn babies to human faces and mechanical mobiles. [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 2 ] The research took babies under 24 hours old and showed them a human face or a mechanical mobile. If they were shown the human face first, they were then shown the mechanical mobile, and vice versa. The babies' responses were recorded, and judges coded the eye movements of the babies to discern which, if either, of the stimuli the babies looked at for longer. [ 31 ] The study concluded that female babies looked at human faces for longer, and male babies looked at mechanical mobiles for longer. [ 31 ] From this it was concluded that female brains are programmed towards empathy whereas male brains are more inclined towards practicality and building systems, in line with the empathising-systematising theory described above, in which an individual is characterised as having brain type E (empathising), brain type S (systemising), or, if equally strong at both, a 'balanced brain' of type B. Fine criticised the study, arguing that because the babies were shown one stimulus first and then the other, they may have become fatigued, affecting the results of the study. [ 2 ] Furthermore, Fine also argued that the panel of judges watching the babies' eye movements may have been able to guess the sex of the baby, for example if the baby was dressed in certain clothing or had particular congratulatory cards present, giving rise to confirmation-type biases. Baron-Cohen has countered these criticisms. [ 29 ] He replied to the fatigue argument by explaining that the stimuli were shown in randomised order, so as to avoid the problem of stimulus-specific fatigue in either sex.
In response to the claim about bias, he argued that the judges were only able to assess the babies' eye movements by watching a video of the eye area of the baby, through which it would have been almost impossible to derive the sex of the baby. Notwithstanding this, Fine argued that the effort to conceal the babies' sex from the experimenters in the room with the babies was "minimal", allowing room for implicit bias and rendering the results unreliable. [ 30 ] [ 33 ] Rebecca Jordan-Young provides a case study of neurosexism in studies of those with congenital adrenal hyperplasia (CAH). Because Prenatal Hormone Theory posits early steroid hormones during fetal development as conducive to sex-typical behaviours, studies of genetic females with CAH are important to test the feasibility of this hypothesis. Jordan-Young conducts a comprehensive review of these studies, finding them to neglect four broad categories of variables that plausibly affect psychosexual development: "(1) physiological effects of CAH, including complex disruption of steroid hormones from early development onwards; (2) intensive medical intervention and surveillance, which many women with CAH describe as traumatic; (3) direct effects of genital morphology on sexuality; and (4) expectations of masculinisation that likely affect both the development and evaluation of gender and sexuality in CAH." [ 34 ] Complex and continuous interactions between biological factors, medical intervention, and social pressures suggest a more holistic explanation for atypicalities in the psychological make-up and behaviour of those with CAH than the conventional explanation that prenatal hormones "masculinise" the brain. Neglecting these four categories in the methodology of studies into those with CAH favours the sex-difference hypothesis, providing a clear example of neurosexism in scientific research; such studies also fail to account for unusual childhood experiences, parental expectations or reporting bias. The media reporting of the neuroscience of sex differences has also attracted criticism. A high-profile example was the reporting of a 2014 neuroimaging study on sex differences in the structural connectome of the human brain. [ 35 ] The study used diffusion tensor imaging to investigate white matter connections in the brains of 949 participants ranging from 8 to 22 years old. The authors claimed to have discovered "fundamental sex differences in the structural architecture of the human brain". [ 35 ] The study was widely reported by media organizations around the world. A content analysis of the media coverage investigated the claims made in the original scientific article and in several different types of media reporting. [ 36 ] The analysis showed that information from the scientific article was given "increasingly diversified, personalized, and politicized meaning" in media outlets and was widely seen to have vindicated traditional gender stereotypes, even though the neuroimaging technique used could only detect structural differences, not functional differences, between the sexes. [ 36 ] The way scientific information is passed from the scientific community into the public consciousness has changed with the development of technology, social media, and news platforms; the traditional route from study, to media, to public consciousness no longer holds.
The advent of the "blogosphere" and other forms of social media means that audiences now actively produce and critique scientific content alongside scientists; whether this is a benefit or a hindrance to the scientific community is yet to be seen, given the infancy of these channels. We should, however, remain alert to the problems arising from greater public involvement in scientific communication, particularly for the understanding of findings. Cliodhna O'Connor and Helene Joffe examine how traditional media, blogs, and their comment sections autonomously project prevailing understandings of sex differences (emotion-rationality dualism and traditional role divisions) onto mute findings, construing men as purely rational and women as highly emotional, and noting how both social representation theory and system justification theory may introduce bias into the interpretation of these findings. [ 36 ] The findings of their study showed significant scope for parties to apply their own personal and cultural agendas to the findings and to share these through blogs and comments. The projection of prevailing stereotypes onto mute findings is a prime example of how neurosexism can arise at stages outside the domain of science. This raises further concerns for the feminist camp: whilst the necessary checks and balances can be applied within the scientific method, once the information is in the public consciousness, audiences can construe research however they see fit. The interest and coverage generated by neurological studies on sex differences is an instance of a wider phenomenon. It is possible to see the "neuro-" prefix being widely used: "neuromarketing", "neuroeconomics", "neurodrinks". One study documented in the Journal of Cognitive Neuroscience tested the hypothesis that irrelevant neuroscience explanations accompanying descriptions of psychological phenomena cause people to rate the descriptions as higher quality. Results showed that irrelevant neuroscience information does indeed lead people to rate explanations as more satisfying than explanations without it, even in cases where the neuroscience was not useful for explaining the phenomenon. [ 37 ] According to Cordelia Fine and Gina Rippon, there are systematic methodological issues in the neuroscience of sex differences that increase the chances of neurosexism. [ 7 ] [ 19 ] [ 4 ] In other words, questions of neurosexism are not entirely independent of questions about scientific methodology. A reverse inference infers the presence of a particular mental process from activation in a particular brain region. Fine argues that such inferences are routine in the neuroscience of sex differences, yet "the absence of neat one-to-one mapping between brain regions and mental processes renders reverse inferences logically invalid". [ 4 ] She emphasises that mental processes arise from complex interactions between a multiplicity of brain regions; the inference from correlation to causation is invalid, because the interactions between brain regions and mental processes are vastly complex. The invalidity stems from brain region activation being multiply realisable. [ citation needed ] For example, the mental processes of experiencing visual art and experiencing the taste of food both activate the nucleus accumbens; activation of the nucleus accumbens therefore does not necessarily indicate the mental process of tasting food, since the activation could instead reflect another mental process (e.g. experiencing visual art). [ citation needed ]
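A toy calculation can make the reverse-inference problem concrete. The probabilities below are invented purely for illustration and are not taken from any study; they show that even a region that reliably activates during a given mental process provides only weak evidence for that process if it is also activated by many other processes.

```python
# Toy illustration of why reverse inference is weak when a brain region is
# "multiply realisable". All probabilities are invented for illustration;
# they are not taken from any study.
p_activation_given_taste = 0.8    # region activates when tasting food
p_activation_given_other = 0.4    # ...but also during many other processes
p_taste = 0.1                     # base rate of tasting food in the experiment
p_other = 1.0 - p_taste

# Total probability of observing the region activate
p_activation = (p_activation_given_taste * p_taste
                + p_activation_given_other * p_other)

# Bayes' rule: probability the person was tasting food, given the activation
p_taste_given_activation = p_activation_given_taste * p_taste / p_activation
print(f"P(tasting food | activation) = {p_taste_given_activation:.2f}")  # ~0.18
```

Under these assumed numbers, even though the region activates 80% of the time when food is being tasted, observing the activation raises the probability of that specific mental process to only about 18%.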
Plasticity refers to the brain's ability to change as a result of experiences in one's life. Because of the brain's plasticity, it is possible in principle for social phenomena related to gender to influence the organization of a person's brain. Fine has argued that the neuroscience of sex differences does not do enough to take plasticity into account. In Fine's view, neuroscientists tend to take a snapshot comparison (looking at current neural differences) and describe the results as "hard-wired", without considering that the observed patterns could change over time. [ 2 ] [ 3 ] To examine one possible example of this, consider the 2014 Ingalhalikar et al. study, which used diffusion tensor imaging to find relatively greater within-hemisphere neural connectivity in males' brains, and relatively greater across-hemisphere connectivity in females' brains. [ 35 ] This was then employed to naturalise sex-specific cognitive differences, and in turn the sexes' supposed suitability for divergent skill-sets. However, given the aforementioned concept of brain plasticity, the notion that these connectivity differences are exclusively a result of natural biology can be challenged. [ citation needed ] This is because plasticity introduces the alternative possibility that individuals' sex-specific learned behaviours could have also affected their brain connectome. Thus, the concept of brain plasticity raises the question of whether the observed brain differences from the study are caused by nature or nurture. [ citation needed ] Fine [ 38 ] has criticised the small sample sizes that are typical of functional neuroimaging (fNI) studies reporting sex differences in the brain. She supports this claim with a meta-analysis: she takes a sample of thirty-nine studies from the Medline, Web of Science, and PsycINFO databases, published between 2009 and 2010, in which sex differences were referred to in the article title. Fine reports that over the entire sample, the mean number of males was 19, and the mean number of females was 18.5. Disregarding the studies making sex-by-age and sex-by-group comparisons (which require larger sample sizes), the average sample sizes were even smaller, with a mean of 13.5 males and a mean of 13.8 females. She also points out that the second largest study in the group reported a null finding. Small sample sizes are problematic because they increase the risk of false positives. Not only do false positives misinform, but they also "tend to persist because failures to replicate are inconclusive and unappealing both to attempt by researchers and to publish by journals". [ 39 ] Simon Baron-Cohen has defended the neuroscience of sex differences against the charge of neurosexism. In a review of Delusions of Gender, he said that "Ultimately, for me, the biggest weakness of Fine's neurosexism allegation is the mistaken blurring of science with politics", saying that "You can be a scientist interested in the nature of sex differences while being a clear supporter of equal opportunities and a firm opponent of all forms of discrimination in society." [ 40 ]
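The statistical points raised in this passage (and in the earlier discussion of the Gur et al. study) can be illustrated numerically. The sketch below assumes independent tests at the conventional 0.05 threshold and a moderate true effect size of half a standard deviation; both are simplifying assumptions for illustration, not parameters taken from the studies discussed.

```python
# Illustration of two statistical points raised above, under simplifying assumptions.
import numpy as np
from scipy import stats

alpha = 0.05

# 1) Multiple comparisons: if 36 relationships are each tested at alpha = 0.05
#    and no real effects exist, a "significant" result somewhere is likely.
n_tests = 36
print(f"P(at least one false positive) = {1 - (1 - alpha) ** n_tests:.2f}")  # ~0.84

# 2) Low power: with ~14 participants per group (the average Fine reports)
#    and a moderate true effect (0.5 SD), most studies will miss the effect.
rng = np.random.default_rng(0)
n_per_group, effect_size, n_sims, hits = 14, 0.5, 10_000, 0
for _ in range(n_sims):
    group_a = rng.normal(0.0, 1.0, n_per_group)
    group_b = rng.normal(effect_size, 1.0, n_per_group)
    hits += stats.ttest_ind(group_a, group_b).pvalue < alpha
print(f"Estimated power: {hits / n_sims:.2f}")  # roughly 0.2-0.3
```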
https://en.wikipedia.org/wiki/Neurosexism
Neurotoxicity is a form of toxicity in which a biological, chemical, or physical agent produces an adverse effect on the structure or function of the central and/or peripheral nervous system. [ 1 ] It occurs when exposure to a substance – specifically, a neurotoxin or neurotoxicant – alters the normal activity of the nervous system in such a way as to cause permanent or reversible damage to nervous tissue. [ 1 ] This can eventually disrupt or even kill neurons, which are cells that transmit and process signals in the brain and other parts of the nervous system. Neurotoxicity can result from organ transplants, radiation treatment, certain drug therapies, recreational drug use, exposure to heavy metals, bites from certain species of venomous snakes, pesticides, [ 2 ] [ 3 ] certain industrial cleaning solvents, [ 4 ] fuels [ 5 ] and certain naturally occurring substances. Symptoms may appear immediately after exposure or be delayed. They may include limb weakness or numbness; loss of memory, vision, and/or intellect; uncontrollable obsessive and/or compulsive behaviors; delusions; headache; cognitive and behavioral problems; and sexual dysfunction. Chronic mold exposure in homes can lead to neurotoxicity which may not appear until months to years after exposure. [ 6 ] All symptoms listed above are consistent with mold mycotoxin accumulation. [ 7 ] The term neurotoxicity implies the involvement of a neurotoxin; however, the term neurotoxic may be used more loosely to describe states that are known to cause physical brain damage, but where no specific neurotoxin has been identified. [ citation needed ] The presence of neurocognitive deficits alone is not usually considered sufficient evidence of neurotoxicity, as many substances may impair neurocognitive performance without resulting in the death of neurons. This may be due to the direct action of the substance, with the impairment and neurocognitive deficits being temporary and resolving once the substance is eliminated from the body. In some cases the level or exposure time may be critical, with some substances only becoming neurotoxic at certain doses or over certain time periods. Some of the most common naturally occurring brain toxins that lead to neurotoxicity as a result of long-term drug use are amyloid beta (Aβ), glutamate, dopamine, and oxygen radicals. When present in high concentrations, they can lead to neurotoxicity and death (apoptosis). Some of the symptoms that result from cell death include loss of motor control, cognitive deterioration and autonomic nervous system dysfunction. Additionally, neurotoxicity has been found to be a major cause of neurodegenerative diseases such as Alzheimer's disease (AD). [ citation needed ] [ 8 ] Amyloid beta (Aβ) was found to cause neurotoxicity and cell death in the brain when present in high concentrations. Aβ results from a mutation that occurs when protein chains are cut at the wrong locations, resulting in chains of different lengths that are unusable. Thus they are left in the brain until they are broken down, but if enough accumulate, they form plaques which are toxic to neurons. Aβ uses several routes in the central nervous system to cause cell death. An example is through nicotinic acetylcholine receptors (nAChRs), which are commonly found along the surfaces of the cells that respond to nicotine stimulation, turning them on or off. Aβ was found to manipulate the level of nicotine in the brain, along with MAP kinase, another signaling molecule, to cause cell death.
Another chemical in the brain that Aβ regulates is JNK; this chemical halts the extracellular signal-regulated kinase (ERK) pathway, which normally functions in memory control in the brain. As a result, this memory-favoring pathway is stopped, and the brain loses essential memory function. The loss of memory is a symptom of neurodegenerative disease, including AD. Another way Aβ causes cell death is through the phosphorylation of AKT; this occurs as the phosphate group is bound to several sites on the protein. This phosphorylation allows AKT to interact with BAD, a protein known to cause cell death. Thus an increase in Aβ results in an increase of the AKT/BAD complex, in turn stopping the action of the anti-apoptotic protein Bcl-2, which normally functions to stop cell death, causing accelerated neuron breakdown and the progression of AD. [ citation needed ] Glutamate is a chemical found in the brain that poses a toxic threat to neurons when present in high concentrations. This concentration equilibrium is extremely delicate, and glutamate is usually found in millimolar amounts extracellularly. When disturbed, an accumulation of glutamate occurs as a result of a mutation in the glutamate transporters, which act like pumps to clear glutamate from the synapse. This causes glutamate concentration to be several times higher in the blood than in the brain; in turn, the body must act to maintain equilibrium between the two concentrations by pumping the glutamate out of the bloodstream and into the neurons of the brain. In the event of a mutation, the glutamate transporters are unable to pump the glutamate back into the cells; thus a higher concentration accumulates at the glutamate receptors. This opens the ion channels, allowing calcium to enter the cell and causing excitotoxicity. Glutamate results in cell death by turning on N-methyl-D-aspartic acid (NMDA) receptors; these receptors cause an increased release of calcium ions (Ca 2+ ) into the cells. As a result, the increased concentration of Ca 2+ directly increases the stress on mitochondria, resulting in excessive oxidative phosphorylation and production of reactive oxygen species (ROS) via the activation of nitric oxide synthase, ultimately leading to cell death. Aβ was also found to aid this route to neurotoxicity by enhancing neuron vulnerability to glutamate. [ citation needed ] The formation of oxygen radicals in the brain is achieved through the nitric oxide synthase (NOS) pathway. This reaction occurs as a response to an increase in the Ca 2+ concentration inside a brain cell. This interaction between Ca 2+ and NOS results in the formation of the cofactor tetrahydrobiopterin (BH4), which then moves from the plasma membrane into the cytoplasm. As a final step, NOS is dephosphorylated, yielding nitric oxide (NO), which accumulates in the brain, increasing its oxidative stress. There are several ROS, including superoxide, hydrogen peroxide and hydroxyl, all of which lead to neurotoxicity. Naturally, the body utilizes a defensive mechanism to diminish the fatal effects of the reactive species by employing certain enzymes to break down the ROS into small, benign molecules of simple oxygen and water. However, this breakdown of the ROS is not completely efficient; some reactive residues are left in the brain to accumulate, contributing to neurotoxicity and cell death. The brain is more vulnerable to oxidative stress than other organs, due to its low oxidative capacity.
Because neurons are characterized as postmitotic cells, meaning that they live with accumulated damage over the years, accumulation of ROS is fatal. Thus, increased levels of ROS age neurons, which leads to accelerated neurodegenerative processes and ultimately the advancement of AD. The endogenously produced autotoxic metabolite of dopamine, 3,4-dihydroxyphenylacetaldehyde (DOPAL), is a potent inducer of programmed cell death (apoptosis) in dopaminergic neurons. [ 9 ] DOPAL may play an important role in the pathology of Parkinson's disease. [ 10 ] [ 11 ] Certain drugs, most famously the pesticide and metabolite MPP+ (1-methyl-4-phenylpyridin-1-ium), can induce Parkinson's disease by destroying dopaminergic neurons in the substantia nigra. [ 12 ] MPP+ interacts with the electron transport chain in the mitochondria to generate reactive oxygen species which cause generalized oxidative damage and ultimately cell death. [ 13 ] [ 14 ] MPP+ is produced by monoamine oxidase B as a metabolite of MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine), and its toxicity is particularly significant to dopaminergic neurons because of an active transporter on those cells that brings it into the cytoplasm. [ 14 ] The neurotoxicity of MPP+ was first investigated after MPTP was produced as a contaminant in the pethidine synthesized by a chemistry graduate student, who injected the contaminated drug and developed overt Parkinson's within weeks. [ 13 ] [ 12 ] Discovery of the mechanism of toxicity was an important advance in the study of Parkinson's disease, and the compound is now used to induce the disease in research animals. [ 12 ] [ 15 ] The prognosis depends upon the length and degree of exposure and the severity of neurological injury. In some instances, exposure to neurotoxins or neurotoxicants can be fatal; in others, patients may survive but not fully recover; in still other situations, many individuals recover completely after treatment. [ 16 ] The word neurotoxicity ( /ˌnʊəroʊtɒkˈsɪsɪti/ ) uses combining forms of neuro- + tox- + -icity, yielding "nervous tissue poisoning".
https://en.wikipedia.org/wiki/Neurotoxicity
A neurotransmitter is a signaling molecule secreted by a neuron to affect another cell across a synapse. The cell receiving the signal, or target cell, may be another neuron, but could also be a gland or muscle cell. [ 1 ] Neurotransmitters are released from synaptic vesicles into the synaptic cleft where they are able to interact with neurotransmitter receptors on the target cell. Some neurotransmitters are also stored in large dense core vesicles. [ 2 ] The neurotransmitter's effect on the target cell is determined by the receptor it binds to. Many neurotransmitters are synthesized from simple and plentiful precursors such as amino acids, which are readily available and often require a small number of biosynthetic steps for conversion. [ citation needed ] Neurotransmitters are essential to the function of complex neural systems. The exact number of unique neurotransmitters in humans is unknown, but more than 100 have been identified. [ 3 ] Common neurotransmitters include glutamate, GABA, acetylcholine, glycine, dopamine and norepinephrine. Neurotransmitters are generally synthesized in neurons and are made up of, or derived from, precursor molecules that are found abundantly in the cell. Classes of neurotransmitters include amino acids, monoamines, and peptides. Monoamines are synthesized by altering a single amino acid; for example, the precursor of serotonin is the amino acid tryptophan. Peptide neurotransmitters, or neuropeptides, are protein transmitters which are larger than the classical small-molecule neurotransmitters and are often released together to elicit a modulatory effect. [ 4 ] Purine neurotransmitters, like ATP, are derived from nucleic acids. Metabolic products such as nitric oxide and carbon monoxide have also been reported to act like neurotransmitters. [ 5 ] Neurotransmitters are generally stored in synaptic vesicles, clustered close to the cell membrane at the axon terminal of the presynaptic neuron. However, some neurotransmitters, like the metabolic gases carbon monoxide and nitric oxide, are synthesized and released immediately following an action potential without ever being stored in vesicles. [ 6 ] Generally, a neurotransmitter is released via exocytosis at the presynaptic terminal in response to an electrical signal called an action potential in the presynaptic neuron. However, low-level "baseline" release also occurs without electrical stimulation. Neurotransmitters are released into and diffuse across the synaptic cleft, where they bind to specific receptors on the membrane of the postsynaptic neuron. [ 7 ] After being released into the synaptic cleft, neurotransmitters diffuse across the synapse where they are able to interact with receptors on the target cell. The effect of the neurotransmitter is dependent on the identity of the target cell's receptors present at the synapse. Depending on the receptor, binding of neurotransmitters may cause excitation, inhibition, or modulation of the postsynaptic neuron. [ 8 ] In order to avoid continuous activation of receptors on the post-synaptic or target cell, neurotransmitters must be removed from the synaptic cleft. [ 9 ] Neurotransmitters are removed through one of three mechanisms: diffusion away from the synapse, enzymatic degradation, or reuptake by transporters. For example, acetylcholine is eliminated by having its acetyl group cleaved by the enzyme acetylcholinesterase; the remaining choline is then taken up and recycled by the pre-synaptic neuron to synthesize more acetylcholine. [ 12 ]
Other neurotransmitters are able to diffuse away from their targeted synaptic junctions and are eliminated from the body via the kidneys, or destroyed in the liver. Each neurotransmitter has very specific degradation pathways at regulatory points, which may be targeted by the body's regulatory system or by medication. Cocaine blocks a dopamine transporter responsible for the reuptake of dopamine. Without the transporter, dopamine diffuses much more slowly from the synaptic cleft and continues to activate the dopamine receptors on the target cell. [ 13 ] Until the early 20th century, scientists assumed that the majority of synaptic communication in the brain was electrical. However, through histological examinations by Ramón y Cajal, a 20 to 40 nm gap between neurons, known today as the synaptic cleft, was discovered. The presence of such a gap suggested communication via chemical messengers traversing the synaptic cleft, and in 1921 German pharmacologist Otto Loewi confirmed that neurons can communicate by releasing chemicals. Through a series of experiments involving the vagus nerves of frogs, Loewi was able to manually slow the heart rate of frogs by controlling the amount of saline solution present around the vagus nerve. Upon completion of this experiment, Loewi asserted that sympathetic regulation of cardiac function can be mediated through changes in chemical concentrations. Furthermore, Otto Loewi is credited with discovering acetylcholine (ACh), the first known neurotransmitter. [ 14 ] Several criteria are typically considered in identifying a substance as a neurotransmitter; however, given advances in pharmacology, genetics, and chemical neuroanatomy, the term "neurotransmitter" can now be applied somewhat more broadly. The anatomical localization of neurotransmitters is typically determined using immunocytochemical techniques, which identify the location of either the transmitter substances themselves or of the enzymes that are involved in their synthesis. Immunocytochemical techniques have also revealed that many transmitters, particularly the neuropeptides, are co-localized, that is, a neuron may release more than one transmitter from its synaptic terminal. [ 16 ] Various techniques and experiments such as staining, stimulating, and collecting can be used to identify neurotransmitters throughout the central nervous system. [ 17 ] Neurons communicate with each other through synapses, specialized contact points where neurotransmitters transmit signals. When an action potential reaches the presynaptic terminal, it can trigger the release of neurotransmitters into the synaptic cleft. These neurotransmitters then bind to receptors on the postsynaptic membrane, influencing the receiving neuron in either an inhibitory or excitatory manner. If the overall excitatory influences outweigh the inhibitory influences, the receiving neuron may generate its own action potential, continuing the transmission of information to the next neuron in the network. This process allows for the flow of information and the formation of complex neural networks. [ 18 ] A neurotransmitter may have an excitatory, inhibitory or modulatory effect on the target cell. The effect is determined by the receptors the neurotransmitter interacts with at the post-synaptic membrane. A neurotransmitter influences trans-membrane ion flow either to increase (excitatory) or to decrease (inhibitory) the probability that the cell with which it comes in contact will produce an action potential.
Synapses containing receptors with excitatory effects are called Type I synapses, while Type II synapses contain receptors with inhibitory effects. [ 19 ] Thus, despite the wide variety of synapses, they all convey messages of only these two types. The two types differ in appearance and are primarily located on different parts of the neuron they influence. [ 20 ] Receptors with modulatory effects are spread throughout all synaptic membranes, and binding of neurotransmitters to them sets in motion signaling cascades that help the cell regulate its function. [ 8 ] Binding of neurotransmitters to receptors with modulatory effects can have many results. For example, it may result in an increase or decrease in sensitivity to future stimuli by recruiting more or fewer receptors to the synaptic membrane. [ citation needed ] Type I (excitatory) synapses are typically located on the shafts or the spines of dendrites, whereas Type II (inhibitory) synapses are typically located on a cell body. In addition, Type I synapses have round synaptic vesicles, whereas the vesicles of Type II synapses are flattened. The material on the presynaptic and post-synaptic membranes is denser in a Type I synapse than it is in a Type II, and the Type I synaptic cleft is wider. Finally, the active zone on a Type I synapse is larger than that on a Type II synapse. [ citation needed ] The different locations of Type I and Type II synapses divide a neuron into two zones: an excitatory dendritic tree and an inhibitory cell body. From an inhibitory perspective, excitation comes in over the dendrites and spreads to the axon hillock to trigger an action potential. If the message is to be stopped, it is best stopped by applying inhibition on the cell body, close to the axon hillock where the action potential originates. Another way to conceptualize excitatory–inhibitory interaction is to picture excitation overcoming inhibition. If the cell body is normally in an inhibited state, the only way to generate an action potential at the axon hillock is to reduce the cell body's inhibition. In this "open the gates" strategy, the excitatory message is like a racehorse ready to run down the track, but first, the inhibitory starting gate must be removed. [ 21 ] As explained above, the only direct action of a neurotransmitter is to activate a receptor. Therefore, the effects of a neurotransmitter system depend on the connections of the neurons that use the transmitter, and on the chemical properties of the receptors. There are many different ways to classify neurotransmitters. They are commonly classified into amino acids, monoamines and peptides. [ 35 ] Some of the major neurotransmitters are the amino acids glutamate, GABA and glycine; acetylcholine; and the monoamines dopamine, norepinephrine, serotonin and histamine. In addition, over 100 neuroactive peptides have been found, and new ones are discovered regularly. [ 38 ] [ 39 ] Many of these are co-released along with a small-molecule transmitter. Nevertheless, in some cases, a peptide is the primary transmitter at a synapse. Beta-endorphin is a relatively well-known example of a peptide neurotransmitter because it engages in highly specific interactions with opioid receptors in the central nervous system. [ citation needed ] Single ions (such as synaptically released zinc) are also considered neurotransmitters by some, [ 40 ] as well as some gaseous molecules such as nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H 2 S).
[ 41 ] The gases are produced in the neural cytoplasm and immediately diffuse through the cell membrane into the extracellular fluid and into nearby cells, where they stimulate the production of second messengers. Soluble gas neurotransmitters are difficult to study, as they act rapidly and are immediately broken down, existing for only a few seconds. [ citation needed ] The most prevalent transmitter is glutamate, which is excitatory at well over 90% of the synapses in the human brain. [ 36 ] The next most prevalent is gamma-aminobutyric acid, or GABA, which is inhibitory at more than 90% of the synapses that do not use glutamate. Although other transmitters are used in fewer synapses, they may be very important functionally: the great majority of psychoactive drugs exert their effects by altering the actions of some neurotransmitter systems, often acting through transmitters other than glutamate or GABA. Addictive drugs such as cocaine and amphetamines exert their effects primarily on the dopamine system. The addictive opiate drugs exert their effects primarily as functional analogs of opioid peptides, which, in turn, regulate dopamine levels. [ citation needed ] Neurons expressing certain types of neurotransmitters sometimes form distinct systems, where activation of the system affects large volumes of the brain, a mode of signaling called volume transmission. Major neurotransmitter systems include the noradrenaline (norepinephrine) system, the dopamine system, the serotonin system, and the cholinergic system, among others. Trace amines have a modulatory effect on neurotransmission in monoamine pathways (i.e., dopamine, norepinephrine, and serotonin pathways) throughout the brain via signaling through trace amine-associated receptor 1. [ 45 ] [ 46 ] The serotonin system originates in the raphe nuclei of the brainstem, divided into the caudal nuclei (CN: raphe magnus, raphe pallidus, and raphe obscurus) and the rostral nuclei (RN: nucleus linearis, dorsal raphe, medial raphe, and raphe pontis). The cholinergic system originates in the forebrain cholinergic nuclei (FCN: nucleus basalis of Meynert, medial septal nucleus, and diagonal band), the striatal tonically active cholinergic neurons (TAN), and the brainstem cholinergic nuclei (BCN: pedunculopontine nucleus, laterodorsal tegmentum, medial habenula, and parabigeminal nucleus). Understanding the effects of drugs on neurotransmitters comprises a significant portion of research initiatives in the field of neuroscience. Most neuroscientists involved in this field of research believe that such efforts may further advance our understanding of the circuits responsible for various neurological diseases and disorders, as well as ways to effectively treat and someday possibly prevent or cure such illnesses. [ 63 ] [ medical citation needed ] Drugs can influence behavior by altering neurotransmitter activity. For instance, drugs can decrease the rate of synthesis of neurotransmitters by affecting the synthetic enzyme(s) for that neurotransmitter. When neurotransmitter synthesis is blocked, the amount of neurotransmitter available for release becomes substantially lower, resulting in a decrease in neurotransmitter activity. Some drugs block or stimulate the release of specific neurotransmitters. Alternatively, drugs can prevent neurotransmitter storage in synaptic vesicles by causing the synaptic vesicle membranes to leak. Drugs that prevent a neurotransmitter from binding to its receptor are called receptor antagonists.
For example, drugs used to treat patients with schizophrenia, such as haloperidol, chlorpromazine, and clozapine, are antagonists at dopamine receptors in the brain. Other drugs act by binding to a receptor and mimicking the normal neurotransmitter. Such drugs are called receptor agonists. An example of a receptor agonist is morphine, an opiate that mimics effects of the endogenous neurotransmitter β-endorphin to relieve pain. Other drugs interfere with the deactivation of a neurotransmitter after it has been released, thereby prolonging the action of a neurotransmitter. This can be accomplished by blocking re-uptake or inhibiting degradative enzymes. Lastly, drugs can also prevent an action potential from occurring, blocking neuronal activity throughout the central and peripheral nervous system. Drugs such as tetrodotoxin that block neural activity are typically lethal. [ citation needed ] Drugs targeting the neurotransmitter of a major system affect the whole system, which can explain the complexity of action of some drugs. Cocaine, for example, blocks the re-uptake of dopamine back into the presynaptic neuron, leaving the neurotransmitter molecules in the synaptic gap for an extended period of time. Since the dopamine remains in the synapse longer, the neurotransmitter continues to bind to the receptors on the postsynaptic neuron, eliciting a pleasurable emotional response. Physical addiction to cocaine may result from prolonged exposure to excess dopamine in the synapses, which leads to the downregulation of some post-synaptic receptors. After the effects of the drug wear off, an individual can become depressed due to the decreased probability of the neurotransmitter binding to a receptor. Fluoxetine is a selective serotonin re-uptake inhibitor (SSRI), which blocks re-uptake of serotonin by the presynaptic cell, increasing the amount of serotonin present at the synapse and allowing it to remain there longer, thereby potentiating the effect of naturally released serotonin. [ 64 ] AMPT prevents the conversion of tyrosine to L-DOPA, the precursor to dopamine; reserpine prevents dopamine storage within vesicles; and deprenyl inhibits monoamine oxidase (MAO)-B and thus increases dopamine levels. [ citation needed ]
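As a rough illustration of why blocking reuptake (as cocaine does for dopamine, or fluoxetine for serotonin) prolongs a transmitter's action, the sketch below removes one clearance route from a first-order decay model and tracks receptor occupancy with a simple C/(C+Kd) binding curve. All constants are invented for illustration.

```python
import math

K_DIFFUSION = 0.2   # per ms, illustrative clearance by diffusion
K_REUPTAKE  = 1.0   # per ms, illustrative; effectively removed when the transporter is blocked
KD          = 0.3   # arbitrary units; concentration giving 50% receptor occupancy

def occupancy_over_time(c0, reuptake_blocked, times):
    k = K_DIFFUSION + (0.0 if reuptake_blocked else K_REUPTAKE)
    occupancies = []
    for t in times:
        c = c0 * math.exp(-k * t)          # transmitter remaining in the cleft
        occupancies.append(c / (c + KD))   # fraction of receptors bound
    return occupancies

times = [0, 1, 2, 4]
print("normal :", [round(x, 2) for x in occupancy_over_time(1.0, False, times)])
print("blocked:", [round(x, 2) for x in occupancy_over_time(1.0, True, times)])
```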
[Table of example drugs and their effects on the major neurotransmitter systems omitted; only the effects column (e.g., "blocks reuptake", "prevents muscle contractions", "used to treat myasthenia gravis") survived extraction.] An agonist is a chemical capable of binding to a receptor, such as a neurotransmitter receptor, and initiating the same reaction typically produced by the binding of the endogenous substance. [ 68 ] An agonist of a neurotransmitter will thus initiate the same receptor response as the transmitter. In neurons, an agonist drug may activate neurotransmitter receptors either directly or indirectly. Direct-binding agonists can be further characterized as full agonists, partial agonists, or inverse agonists. [ 69 ] [ 70 ] Direct agonists act similarly to a neurotransmitter by binding directly to its associated receptor site(s), which may be located on the presynaptic neuron or postsynaptic neuron, or both. [ 71 ] Typically, neurotransmitter receptors are located on the postsynaptic neuron, while neurotransmitter autoreceptors are located on the presynaptic neuron, as is the case for monoamine neurotransmitters; [ 45 ] in some cases, a neurotransmitter utilizes retrograde neurotransmission, a type of feedback signaling in neurons where the neurotransmitter is released postsynaptically and binds to target receptors located on the presynaptic neuron. [ 72 ] [ note 1 ] Nicotine, a compound found in tobacco, is a direct agonist of most nicotinic acetylcholine receptors, mainly located in cholinergic neurons. [ 67 ] Opiates, such as morphine, heroin, hydrocodone, oxycodone, codeine, and methadone, are μ-opioid receptor agonists; this action mediates their euphoriant and pain-relieving properties. [ 67 ] Indirect agonists increase the binding of neurotransmitters at their target receptors by stimulating the release or preventing the reuptake of neurotransmitters. [ 71 ] Some indirect agonists both trigger neurotransmitter release and prevent neurotransmitter reuptake.
Amphetamine, for example, is an indirect agonist of postsynaptic dopamine, norepinephrine, and serotonin receptors in their respective neurons; [ 45 ] [ 46 ] it both promotes the release of these neurotransmitters from the presynaptic neuron into the synaptic cleft and prevents their reuptake from the cleft, by activating TAAR1, a presynaptic G protein-coupled receptor, and by binding to a site on VMAT2, a type of monoamine transporter located on synaptic vesicles within monoamine neurons. [ 45 ] [ 46 ] An antagonist is a chemical that acts within the body to reduce the physiological activity of another chemical substance (such as an opiate); especially one that opposes the action on the nervous system of a drug or a substance occurring naturally in the body by combining with and blocking its nervous receptor. [ 73 ] There are two main types of antagonist: direct-acting and indirect-acting antagonists. An antagonist drug is one that attaches (or binds) to a site called a receptor without activating that receptor to produce a biological response. It is therefore said to have no intrinsic activity. An antagonist may also be called a receptor "blocker" because it blocks the effect of an agonist at the site. The pharmacological effects of an antagonist, therefore, result in preventing the corresponding receptor site's agonists (e.g., drugs, hormones, neurotransmitters) from binding to and activating it. Antagonists may be "competitive" or "irreversible". [ citation needed ] A competitive antagonist competes with an agonist for binding to the receptor. As the concentration of antagonist increases, the binding of the agonist is progressively inhibited, resulting in a decrease in the physiological response. A high concentration of an antagonist can completely inhibit the response. This inhibition can be reversed, however, by an increase of the concentration of the agonist, since the agonist and antagonist compete for binding to the receptor. Competitive antagonists, therefore, can be characterized as shifting the dose–response relationship for the agonist to the right. In the presence of a competitive antagonist, it takes an increased concentration of the agonist to produce the same response observed in the absence of the antagonist. [ citation needed ] An irreversible antagonist binds so strongly to the receptor as to render the receptor unavailable for binding to the agonist. Irreversible antagonists may even form covalent chemical bonds with the receptor. In either case, if the concentration of the irreversible antagonist is high enough, the number of unbound receptors remaining for agonist binding may be so low that even high concentrations of the agonist do not produce the maximum biological response. [ 74 ]
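The rightward shift produced by a competitive antagonist, and the depressed maximum produced by an irreversible one, can be sketched with textbook-style occupancy equations (a Gaddum-type correction of the EC50 for competition, and a reduced receptor pool for irreversible block). The concentrations and constants below are arbitrary placeholders, not data for any particular drug.

```python
def competitive_response(agonist, ec50=1.0, antagonist=0.0, kb=1.0, emax=100.0):
    # Competitive antagonism: apparent EC50 scales by (1 + [B]/Kb); Emax is preserved,
    # so a large enough agonist concentration can still reach the full response.
    apparent_ec50 = ec50 * (1.0 + antagonist / kb)
    return emax * agonist / (agonist + apparent_ec50)

def irreversible_response(agonist, ec50=1.0, fraction_receptors_left=1.0, emax=100.0):
    # Irreversible antagonism: part of the receptor pool is removed, lowering the maximum.
    return emax * fraction_receptors_left * agonist / (agonist + ec50)

for a in (0.1, 1.0, 10.0, 100.0):
    print(a,
          round(competitive_response(a), 1),                               # no antagonist
          round(competitive_response(a, antagonist=10.0), 1),              # curve shifted right
          round(irreversible_response(a, fraction_receptors_left=0.4), 1)) # maximum depressed
```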
While intake of neurotransmitter precursors does increase neurotransmitter synthesis, evidence is mixed as to whether neurotransmitter release and postsynaptic receptor firing are increased. Even with increased neurotransmitter release, it is unclear whether this will result in a long-term increase in neurotransmitter signal strength, since the nervous system can adapt to changes such as increased neurotransmitter synthesis and may therefore maintain constant firing. [ 78 ] [ unreliable medical source? ] Some neurotransmitters may have a role in depression, and there is some evidence to suggest that intake of precursors of these neurotransmitters may be useful in the treatment of mild and moderate depression. [ 78 ] [ unreliable medical source? ] [ 79 ] L-DOPA, a precursor of dopamine that crosses the blood–brain barrier, is used in the treatment of Parkinson's disease. For depressed patients in whom low activity of the neurotransmitter norepinephrine is implicated, there is little evidence for benefit of neurotransmitter precursor administration. L-phenylalanine and L-tyrosine are both precursors for dopamine, norepinephrine, and epinephrine. These conversions require vitamin B6, vitamin C, and S-adenosylmethionine. A few studies suggest potential antidepressant effects of L-phenylalanine and L-tyrosine, but there is much room for further research in this area. [ 78 ] [ unreliable medical source? ] Administration of L-tryptophan, a precursor for serotonin, has been observed to double the production of serotonin in the brain. It is significantly more effective than a placebo in the treatment of mild and moderate depression. [ 78 ] [ unreliable medical source? ] This conversion requires vitamin C. [ 30 ] 5-hydroxytryptophan (5-HTP), also a precursor for serotonin, is more effective than a placebo. [ 78 ] [ unreliable medical source? ] Diseases and disorders may also affect specific neurotransmitter systems. The following are disorders involved in either an increase, decrease, or imbalance of certain neurotransmitters. [ citation needed ] For example, problems in producing dopamine (mainly in the substantia nigra) can result in Parkinson's disease, a disorder that affects a person's ability to move as they want to, resulting in stiffness, tremors or shaking, and other symptoms. Some studies suggest that having too little or too much dopamine, or problems using dopamine in the thinking and feeling regions of the brain, may play a role in disorders like schizophrenia or attention deficit hyperactivity disorder (ADHD). Dopamine is also involved in addiction and drug use, as most recreational drugs cause an influx of dopamine in the brain (especially opioids and methamphetamine) that produces a pleasurable feeling, which is why users constantly crave drugs. [ 80 ] Similarly, after some research suggested that drugs that block the recycling, or reuptake, of serotonin seemed to help some people diagnosed with depression, it was theorized that people with depression might have lower-than-normal serotonin levels. Though widely popularized, this theory was not borne out in subsequent research. [ 81 ] Nevertheless, selective serotonin reuptake inhibitors (SSRIs) are still used to increase the amounts of serotonin in synapses. [ citation needed ] Furthermore, problems with producing or using glutamate have been suggestively and tentatively linked to many mental disorders, including autism, obsessive–compulsive disorder (OCD), schizophrenia, and depression. [ 82 ] Having too much glutamate has been linked to neurological diseases such as Parkinson's disease, multiple sclerosis, Alzheimer's disease, stroke, and ALS (amyotrophic lateral sclerosis). [ 83 ] Generally, there are no scientifically established "norms" for appropriate levels or "balances" of different neurotransmitters. In most cases, it is practically impossible to measure neurotransmitter levels in the brain or body at any given moment. Neurotransmitters regulate each other's release, and weak but consistent imbalances in this mutual regulation have been linked to temperament in healthy people.
[ 84 ] [ 85 ] [ 86 ] [ 87 ] [ 88 ] However, significant imbalances or disruptions in neurotransmitter systems are associated with various diseases and mental disorders, including Parkinson's disease, depression, insomnia, attention deficit hyperactivity disorder (ADHD), anxiety, memory loss, dramatic weight changes, and addictions. Some of these conditions are also related to neurotransmitter switching, a phenomenon where neurons change the type of neurotransmitters they release. [ 89 ] [ 90 ] [ 91 ] Chronic physical or emotional stress can be a contributor to neurotransmitter system changes. Genetics also plays a role in neurotransmitter activities. Apart from recreational use, medications that directly or indirectly interact with one or more transmitters or their receptors are commonly prescribed for psychiatric and psychological issues. Notably, drugs interacting with serotonin and norepinephrine are prescribed to patients with problems such as depression and anxiety, though the notion that such interventions are supported by solid medical evidence has been widely criticized. [ 92 ] Studies have shown that dopamine imbalance has an influence on multiple sclerosis and other neurological disorders. [ 93 ]
https://en.wikipedia.org/wiki/Neurotransmitter
A neurotransmitter receptor (also known as a neuroreceptor) is a membrane receptor protein [ 1 ] that is activated by a neurotransmitter. [ 2 ] Chemicals on the outside of the cell, such as a neurotransmitter, can bump into the cell's membrane, in which there are receptors. If a neurotransmitter bumps into its corresponding receptor, they will bind and can trigger other events to occur inside the cell. Therefore, a membrane receptor is part of the molecular machinery that allows cells to communicate with one another. A neurotransmitter receptor is a class of receptors that specifically binds with neurotransmitters as opposed to other molecules. In postsynaptic cells, neurotransmitter receptors receive signals that trigger an electrical signal, by regulating the activity of ion channels. The influx of ions through ion channels opened due to the binding of neurotransmitters to specific receptors can change the membrane potential of a neuron. This can result in a signal that runs along the axon (see action potential) and is passed along at a synapse to another neuron and possibly on to a neural network. [ 1 ] On presynaptic cells, there are receptors known as autoreceptors that are specific to the neurotransmitters released by that cell, which provide feedback and help limit excessive neurotransmitter release from it. [ 3 ] There are two major types of neurotransmitter receptors: ionotropic and metabotropic. Ionotropic means that ions can pass through the receptor, whereas metabotropic means that a second messenger inside the cell relays the message (i.e. metabotropic receptors do not have channels). There are several kinds of metabotropic receptors, including G protein-coupled receptors. [ 2 ] [ 4 ] Ionotropic receptors are also called ligand-gated ion channels and they can be activated by neurotransmitters (ligands) like glutamate and GABA, which then allow specific ions through the membrane. Sodium ions (that are, for example, allowed passage by the glutamate receptor) excite the post-synaptic cell, while chloride ions (that are, for example, allowed passage by the GABA receptor) inhibit the post-synaptic cell. Inhibition reduces the chance that an action potential will occur, while excitation increases the chance. Conversely, G-protein-coupled receptors are neither excitatory nor inhibitory. Rather, they can have a broad number of functions such as modulating the actions of excitatory and inhibitory ion channels or triggering a signalling cascade that releases calcium from stores inside the cell. [ 2 ] Most neurotransmitter receptors are G-protein coupled. [ 1 ] Neurotransmitter (NT) receptors are located on the surface of neuronal and glial cells. At a synapse, one neuron sends messages to another neuron via neurotransmitters. Therefore, the postsynaptic neuron, the one receiving the message, clusters NT receptors at this specific place in its membrane. NT receptors can be inserted into any region of the neuron's membrane, such as dendrites, axons, and the cell body. [ 5 ]
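One way to see why a sodium-permeable channel excites while a chloride-permeable channel inhibits is to note that opening a conductance pulls the membrane potential toward that ion's reversal potential. The sketch below is a crude single-compartment illustration with assumed conductances and typical textbook reversal potentials; it is not a full biophysical model.

```python
# Illustrative reversal potentials (mV) and a resting potential of -65 mV.
E_REST, E_NA, E_CL = -65.0, 55.0, -70.0

def potential_after_opening(g_open, e_reversal, g_leak=1.0, e_rest=E_REST):
    # Steady-state membrane potential for a leak conductance plus one open channel:
    # a conductance-weighted average of the two reversal potentials.
    return (g_leak * e_rest + g_open * e_reversal) / (g_leak + g_open)

print("glutamate-gated (Na+):", round(potential_after_opening(0.5, E_NA), 1), "mV -> depolarizes")
print("GABA-gated (Cl-)     :", round(potential_after_opening(0.5, E_CL), 1), "mV -> hyperpolarizes/stabilizes")
```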
Receptors can be located in different parts of the body and act as either inhibitory or excitatory receptors for a specific neurotransmitter. [ 6 ] An example is the receptors for the neurotransmitter acetylcholine (ACh): one receptor type is located at the neuromuscular junction in skeletal muscle, where it facilitates muscle contraction (excitation), while another is located in the heart, where it slows the heart rate (inhibition). [ 6 ] Ligand-gated ion channels (LGICs) are one type of ionotropic receptor or channel-linked receptor. They are a group of transmembrane ion channels that are opened or closed in response to the binding of a chemical messenger (i.e., a ligand), [ 7 ] such as a neurotransmitter. [ 8 ] The binding site of endogenous ligands on LGIC protein complexes is normally located on a different portion of the protein (an allosteric binding site) compared to where the ion conduction pore is located. The direct link between ligand binding and opening or closing of the ion channel, which is characteristic of ligand-gated ion channels, is contrasted with the indirect function of metabotropic receptors, which use second messengers. LGICs are also different from voltage-gated ion channels (which open and close depending on membrane potential), and stretch-activated ion channels (which open and close depending on mechanical deformation of the cell membrane). [ 8 ] [ 9 ] G protein-coupled receptors (GPCRs), also known as seven-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLR), comprise a large protein family of transmembrane receptors that sense molecules outside the cell and activate internal signal transduction pathways and, ultimately, cellular responses. G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, [ 10 ] and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases, and are also the target of approximately 30% of all modern medicinal drugs. [ 11 ] [ 12 ] There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. [ 13 ] When a ligand binds to the GPCR, it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G-protein by exchanging its bound GDP for a GTP. The G-protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly, depending on the α subunit type (G αs, G αi/o, G αq/11, G α12/13). [ 14 ] : 1160 Neurotransmitter receptors are subject to ligand-induced desensitization: that is, they can become unresponsive upon prolonged exposure to their neurotransmitter. Neurotransmitter receptors are present on both postsynaptic neurons and presynaptic neurons, with the former receiving neurotransmitters and the latter preventing further release of a given neurotransmitter. [ 15 ] In addition to being found in neuron cells, neurotransmitter receptors are also found in various immune and muscle tissues.
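Ligand-induced desensitization can be caricatured with a three-state receptor scheme (resting, active, desensitized): while agonist remains present, receptors accumulate in the unresponsive state and only slowly recover. The rate constants below are invented for illustration and the equations are integrated with simple Euler steps.

```python
# Three-state receptor pool: resting (R), active (A), desensitized (D).
# With agonist continuously present: R -> A (binding/activation),
# A -> D (desensitization), D -> R (slow recovery).
# Rate constants are arbitrary per-millisecond values for illustration only.
K_ACT, K_DESENS, K_RECOVER = 0.5, 0.1, 0.01

def simulate(duration_ms, dt=0.1):
    r, a, d = 1.0, 0.0, 0.0
    for _ in range(int(duration_ms / dt)):
        dr = (-K_ACT * r + K_RECOVER * d) * dt
        da = (K_ACT * r - K_DESENS * a) * dt
        dd = (K_DESENS * a - K_RECOVER * d) * dt
        r, a, d = r + dr, a + da, d + dd
    return r, a, d

for t in (10, 100, 1000):
    r, a, d = simulate(t)
    print(f"after {t:4d} ms of agonist: active={a:.2f}, desensitized={d:.2f}")
```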
Many neurotransmitter receptors are categorized as serpentine receptors or G protein-coupled receptors because they span the cell membrane not once, but seven times. Neurotransmitter receptors are known to become unresponsive to the type of neurotransmitter they receive when exposed for extended periods of time. This phenomenon is known as ligand-induced desensitization [ 15 ] or downregulation. Major classes of neurotransmitter receptors include the adrenergic, dopaminergic, serotonergic, histaminergic, cholinergic (muscarinic and nicotinic), GABA, glutamate, glycine, and opioid receptor families. [ 16 ]
https://en.wikipedia.org/wiki/Neurotransmitter_receptor
Neurotrophic factors (NTFs) are a family of biomolecules – nearly all of which are peptides or small proteins – that support the growth, survival, and differentiation of both developing and mature neurons. [ 1 ] [ 2 ] [ 3 ] Most NTFs exert their trophic effects on neurons by signaling through tyrosine kinases, [ 2 ] usually a receptor tyrosine kinase. In the mature nervous system, they promote neuronal survival, induce synaptic plasticity, and modulate the formation of long-term memories. [ 2 ] Neurotrophic factors also promote the initial growth and development of neurons in the central nervous system and peripheral nervous system, and they are capable of regrowing damaged neurons in test tubes and animal models. [ 1 ] [ 4 ] Some neurotrophic factors are also released by the target tissue in order to guide the growth of developing axons. Most neurotrophic factors belong to one of three families: (1) neurotrophins, (2) glial cell-line derived neurotrophic factor family ligands (GFLs), and (3) neuropoietic cytokines. [ 4 ] Each family has its own distinct cell signaling mechanisms, although the cellular responses elicited often do overlap. [ 4 ] Currently, neurotrophic factors are being intensely studied for use in bioartificial nerve conduits because they are necessary in vivo for directing axon growth and regeneration. In studies, neurotrophic factors are normally used in conjunction with other techniques such as biological and physical cues created by the addition of cells and specific topographies. The neurotrophic factors may or may not be immobilized to the scaffold structure, though immobilization is preferred because it allows for the creation of permanent, controllable gradients. In some cases, such as neural drug delivery systems, they are loosely immobilized such that they can be selectively released at specified times and in specified amounts. [ medical citation needed ] Although more information is being discovered about neurotrophic factors, their classification is based on different cellular mechanisms, and they are grouped into three main families: the neurotrophins, the CNTF family, and the GDNF family. [ 2 ] [ 5 ] [ 6 ] Brain-derived neurotrophic factor (BDNF) is structurally similar to NGF, NT-3, and NT-4/5, [ 7 ] and shares the TrkB receptor with NT-4. [ 8 ] The brain-derived neurotrophic factor/TrkB system promotes thymocyte survival, as studied in the thymus of mice. [ 8 ] Other experiments suggest BDNF is more important and necessary for neuronal survival than other factors, [ 5 ] although the compensatory mechanisms involved are still not known. Specifically, BDNF promotes survival of dorsal root ganglion neurons. [ 7 ] Even when bound to a truncated TrkB, BDNF still shows growth and developmental roles. [ 7 ] Without BDNF (homozygous (-/-)), mice do not survive past three weeks. [ 7 ] BDNF also has important regulatory roles in the development of the visual cortex, in enhancing neurogenesis, and in improving learning and memory. [ 7 ] Specifically, BDNF acts within the hippocampus. Studies have shown that corticosterone treatment reduces, whereas adrenalectomy upregulates, hippocampal BDNF expression. [ 9 ] Consistent between human and animal studies, BDNF levels are decreased in those with untreated major depression. [ 9 ] However, the correlation between BDNF levels and depression is controversial. [ 9 ] [ 10 ] Nerve growth factor (NGF) uses the high-affinity receptor TrkA [ 11 ] [ 8 ] to promote myelination [ 11 ] and the differentiation of neurons.
[ 12 ] Studies have shown that dysregulation of NGF causes hyperalgesia and pain. [ 8 ] [ 12 ] NGF production is highly correlated with the extent of inflammation. Even though it is clear that exogenous administration of NGF helps decrease tissue inflammation, the molecular mechanisms are still unknown. [ 12 ] Moreover, blood NGF levels are increased in times of stress, during immune disease, and with asthma or arthritis, amongst other conditions. [ 8 ] [ 12 ] Whereas neurotrophic factors within the neurotrophin family commonly have a protein tyrosine kinase receptor (Trk), neurotrophin-3 (NT-3) has its own receptor, TrkC. [ 8 ] In fact, the discovery of the different receptors helped refine scientists' understanding and classification of NT-3. [ 13 ] NT-3 does share similar properties with other members of this class, and is known to be important in neuronal survival. [ 13 ] The NT-3 protein is found within the thymus, spleen, and intestinal epithelium, but its role in the function of each organ is still unknown. [ 8 ] The CNTF family of neurotrophic factors includes ciliary neurotrophic factor (CNTF), leukemia inhibitory factor (LIF), interleukin-6 (IL-6), prolactin, growth hormone, leptin, interferons (i.e., interferon-α, -β, and -γ), and oncostatin M. [ 2 ] Ciliary neurotrophic factor affects embryonic motor neurons, dorsal root ganglion sensory neurons, ciliary ganglion neurons, and hippocampal neurons. [ 14 ] It is structurally related to leukemia inhibitory factor (LIF), interleukin 6 (IL-6), and oncostatin M (OSM). [ 15 ] CNTF prevents degeneration of motor neurons in rats and mice, increasing the animals' survival time and motor function. These results suggest exogenous CNTF could be used as a therapeutic treatment for human degenerative motor neuron diseases. [ 16 ] It also has unexpected leptin-like characteristics, as it causes weight loss. [ 14 ] The GDNF family of ligands includes glial cell line-derived neurotrophic factor (GDNF), artemin, neurturin, and persephin. [ 2 ] Glial cell line-derived neurotrophic factor (GDNF) was originally detected as a survival promoter derived from a glioma cell line. Later studies determined GDNF uses a receptor tyrosine kinase and a high-affinity ligand-binding co-receptor, GFRα. [ 17 ] GDNF has an especially strong affinity for dopaminergic (DA) neurons. [ 5 ] Specifically, studies have shown GDNF plays a protective role against MPTP toxins for DA neurons. It has also been detected in motor neurons of embryonic rats and is suggested to aid development and to reduce axotomy-induced neuronal death. [ 5 ] The ephrins are a family of neurotrophic factors that signal through eph receptors, a class of receptor tyrosine kinases; [ 2 ] the family of ephrins includes ephrin A1, A2, A3, A4, A5, B1, B2, and B3. The EGF and TGF families of neurotrophic factors are composed of epidermal growth factor, the neuregulins, transforming growth factor alpha (TGFα), and transforming growth factor beta (TGFβ). [ 2 ] They signal through receptor tyrosine kinases and serine/threonine protein kinases.
[ 2 ] Several other biomolecules that have been identified as neurotrophic factors include: glia maturation factor, insulin, insulin-like growth factor 1 (IGF-1), vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF), platelet-derived growth factor (PDGF), pituitary adenylate cyclase-activating peptide (PACAP), interleukin-1 (IL-1), interleukin-2 (IL-2), interleukin-3 (IL-3), interleukin-5 (IL-5), interleukin-8 (IL-8), macrophage colony-stimulating factor (M-CSF), granulocyte-macrophage colony-stimulating factor (GM-CSF), and neurotactin. [ 2 ]
https://en.wikipedia.org/wiki/Neurotrophic_factors
Neurotrophins are a family of proteins that induce the survival, [ 1 ] development, and function [ 2 ] of neurons. They belong to a class of growth factors, secreted proteins that can signal particular cells to survive, differentiate, or grow. [ 3 ] Growth factors such as neurotrophins that promote the survival of neurons are known as neurotrophic factors. Neurotrophic factors are secreted by target tissue and act by preventing the associated neuron from initiating programmed cell death – allowing the neurons to survive. Neurotrophins also induce differentiation of progenitor cells to form neurons. Although the vast majority of neurons in the mammalian brain are formed prenatally, parts of the adult brain (for example, the hippocampus) retain the ability to grow new neurons from neural stem cells, a process known as neurogenesis. [ 4 ] Neurotrophins are chemicals that help to stimulate and control neurogenesis. According to the United States National Library of Medicine's medical subject headings, the term neurotrophin may be used as a synonym for neurotrophic factor, [ 5 ] but the term neurotrophin is more generally reserved for four structurally related factors: nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin-3 (NT-3), and neurotrophin-4 (NT-4). [ 6 ] The term neurotrophic factor generally refers to these four neurotrophins, the GDNF family of ligands, and ciliary neurotrophic factor (CNTF), among other biomolecules. [ 6 ] Neurotrophin-6 and neurotrophin-7 also exist, but are only found in zebrafish. [ 7 ] During the development of the vertebrate nervous system, many neurons become redundant (because they have died, failed to connect to target cells, etc.) and are eliminated. At the same time, developing neurons send out axon outgrowths that contact their target cells. [ 8 ] Such cells control their degree of innervation (the number of axon connections) by the secretion of various specific neurotrophic factors that are essential for neuron survival. One of these is nerve growth factor (NGF or beta-NGF), a vertebrate protein that stimulates division and differentiation of sympathetic and embryonic sensory neurons. [ 9 ] [ 10 ] NGF is mostly found outside the central nervous system (CNS), but slight traces have been detected in adult CNS tissues, although a physiological role for this is unknown. [ 8 ] It has also been found in several snake venoms. [ 11 ] [ 12 ] In peripheral and central neurons, neurotrophins are important regulators of the survival, differentiation, and maintenance of nerve cells. They are small proteins that are secreted in the nervous system and help keep nerve cells alive. There are two distinct classes of glycosylated receptors that can bind to neurotrophins: p75 (NTR), which binds to all neurotrophins, and the subtypes of Trk, which are each specific for different neurotrophins. A reported 2.6 Å-resolution crystal structure shows neurotrophin-3 (NT-3) complexed to the ectodomain of glycosylated p75 (NTR), forming a symmetrical complex. [ citation needed ] There are thus two classes of receptors for neurotrophins: p75 and the "Trk" family of tyrosine kinase receptors. [ 13 ] Nerve growth factor (NGF), the prototypical growth factor, is a protein secreted by a neuron's target cell. NGF is critical for the survival and maintenance of sympathetic and sensory neurons.
NGF is released from the target cells, binds to and activates its high-affinity receptor TrkA on the neuron, and is internalized into the responsive neuron. The NGF/TrkA complex is subsequently trafficked back to the neuron's cell body. This movement of NGF from axon tip to soma is thought to be involved in the long-distance signaling of neurons. [ 14 ] Brain-derived neurotrophic factor (BDNF) is a neurotrophic factor found originally in the brain, but also found in the periphery. To be specific, it is a protein that has activity on certain neurons of the central nervous system and the peripheral nervous system; it helps to support the survival of existing neurons, and encourages the growth and differentiation of new neurons and synapses through axonal and dendritic sprouting. In the brain, it is active in the hippocampus, cortex, cerebellum, and basal forebrain – areas vital to learning, memory, and higher thinking. BDNF was the second neurotrophic factor to be characterized, after NGF and before neurotrophin-3. [ citation needed ] BDNF is one of the most active substances to stimulate neurogenesis. Mice born without the ability to make BDNF suffer developmental defects in the brain and sensory nervous system, and usually die soon after birth, suggesting that BDNF plays an important role in normal neural development. [ citation needed ] Despite its name, BDNF is actually found in a range of tissue and cell types, not just the brain. Expression can be seen in the retina, the CNS, motor neurons, the kidneys, and the prostate. Exercise has been shown to increase the amount of BDNF and may therefore serve as a vehicle for neuroplasticity. [ 15 ] Neurotrophin-3, or NT-3, is a neurotrophic factor in the NGF family of neurotrophins. It is a protein growth factor that has activity on certain neurons of the peripheral and central nervous system; it helps to support the survival and differentiation of existing neurons, and encourages the growth and differentiation of new neurons and synapses. NT-3 is the third neurotrophic factor to be characterized, after NGF and BDNF. [ citation needed ] NT-3 is unique among the neurotrophins in the number of neurons it can potentially stimulate, given its ability to activate two of the receptor tyrosine kinase neurotrophin receptors (TrkC and TrkB). Mice born without the ability to make NT-3 have loss of proprioceptive and subsets of mechanoreceptive sensory neurons. [ citation needed ] Neurotrophin-4 (NT-4) is a neurotrophic factor that signals predominantly through the TrkB receptor tyrosine kinase. It is also known as NT4, NT5, NTF4, and NT-4/5. [ 16 ] The endogenous steroids dehydroepiandrosterone (DHEA) and its sulfate ester, DHEA sulfate (DHEA-S), have been identified as small-molecule agonists of TrkA and p75 NTR with high affinity (around 5 nM), and hence as so-called "microneurotrophins". [ 17 ] [ 18 ] [ 19 ] [ 20 ] DHEA has also been found to bind to TrkB and TrkC; although it activated TrkC, it was unable to activate TrkB. [ 17 ] It has been proposed that DHEA may have been the ancestral ligand of the Trk receptors early on in the evolution of the nervous system, eventually being superseded by the polypeptide neurotrophins. [ 17 ] [ 19 ] During neuron development, neurotrophins play a key role in growth, differentiation, and survival. [ 21 ] They also play an important role in the apoptotic programmed cell death (PCD) of neurons. [ 22 ]
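The receptor preferences described above can be collected into a small lookup table. This is just a restatement of the pairings in the text (NGF with TrkA; BDNF and NT-4 with TrkB; NT-3 with TrkC and, to a lesser extent, TrkB; p75NTR binding all four), not an exhaustive pharmacological profile.

```python
# High-affinity Trk receptor(s) for each mammalian neurotrophin, as described in the text.
TRK_RECEPTORS = {
    "NGF":  ["TrkA"],
    "BDNF": ["TrkB"],
    "NT-3": ["TrkC", "TrkB"],   # NT-3 can also activate TrkB
    "NT-4": ["TrkB"],
}
P75_BINDS_ALL = True  # p75NTR binds all four neurotrophins (proneurotrophins bind it more tightly)

def receptors_for(neurotrophin):
    trks = TRK_RECEPTORS.get(neurotrophin, [])
    return trks + (["p75NTR"] if P75_BINDS_ALL else [])

print(receptors_for("NT-3"))   # ['TrkC', 'TrkB', 'p75NTR']
```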
Neurotrophic survival signals in neurons are mediated by the high-affinity binding of neurotrophins to their respective Trk receptors. [ 21 ] In turn, a majority of neuronal apoptotic signals are mediated by neurotrophins binding to the p75NTR. [ 22 ] The PCD which occurs during brain development is responsible for the loss of a majority of neuroblasts and differentiating neurons. [ 21 ] It is necessary because during development there is a massive overproduction of neurons, which must be killed off to attain optimal function. [ 21 ] [ 22 ] In the development of both the peripheral nervous system (PNS) and the central nervous system (CNS), p75NTR-neurotrophin binding activates multiple intracellular pathways which are important in regulating apoptosis. [ 21 ] [ 23 ] Proneurotrophins (proNTs) are neurotrophins which are released as biologically active uncleaved pro-peptides. [ 21 ] Unlike mature neurotrophins, which bind to the p75NTR with a low affinity, proNTs preferentially bind to the p75NTR with high affinity. [ 24 ] [ 25 ] The p75NTR contains a death domain on its cytoplasmic tail which, when cleaved, activates an apoptotic pathway. [ 21 ] [ 22 ] [ 26 ] The binding of a proNT (proNGF or proBDNF) to p75NTR and its sortilin co-receptor (which binds the pro-domain of proNTs) causes a p75NTR-dependent signal transduction cascade. [ 21 ] [ 22 ] [ 24 ] [ 26 ] The cleaved death domain of p75NTR activates c-Jun N-terminal kinase (JNK). [ 22 ] [ 27 ] [ 28 ] The activated JNK translocates into the nucleus, where it phosphorylates and transactivates c-Jun. [ 22 ] [ 27 ] The transactivation of c-Jun results in the transcription of the pro-apoptotic factors TNF-α, Fas-L and Bak. [ 21 ] [ 22 ] [ 24 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] The importance of sortilin in p75NTR-mediated apoptosis is shown by the fact that inhibition of sortilin expression in neurons expressing p75NTR suppresses proNGF-mediated apoptosis, and prevention of proBDNF binding to p75NTR and sortilin abolishes apoptotic action. [ 24 ] Activation of p75NTR-mediated apoptosis is much more effective in the absence of Trk receptors, because activated Trk receptors suppress the JNK cascade. [ 28 ] [ 30 ] The expression of TrkA or TrkC receptors in the absence of neurotrophins can lead to apoptosis, but the mechanism is poorly understood. [ 31 ] The addition of NGF (for TrkA) or NT-3 (for TrkC) prevents this apoptosis. [ 31 ] For this reason TrkA and TrkC are referred to as dependence receptors, because whether they induce apoptosis or survival depends on the presence of neurotrophins. [ 22 ] [ 32 ] The expression of TrkB, which is found mainly in the CNS, does not cause apoptosis. [ 22 ] This is thought to be because it is differentially located in the cell membrane, while TrkA and TrkC are co-localized with p75NTR in lipid rafts. [ 22 ] [ 31 ] In the PNS (where NGF, NT-3 and NT-4 are mainly secreted), cell fate is determined by a single growth factor (i.e. neurotrophins). [ 24 ] [ 32 ] However, in the CNS (where BDNF is mainly secreted in the spinal cord, substantia nigra, amygdala, hypothalamus, cerebellum, hippocampus and cortex), more factors determine cell fate, including neural activity and neurotransmitter input. [ 24 ] [ 32 ] Neurotrophins in the CNS have also been shown to play a more important role in neural cell differentiation and function than in survival. [ 32 ]
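The cascade just described reduces to a few conditions: a proneurotrophin engaging p75NTR together with its sortilin co-receptor drives the JNK to c-Jun apoptotic pathway, while an active Trk receptor suppresses that cascade. The toy function below encodes only this decision logic from the text; it is a schematic, not a biochemical model.

```python
def predicted_outcome(pro_nt_bound_p75, sortilin_present, trk_active):
    """Schematic cell-fate call based on the p75NTR/Trk logic described above."""
    jnk_cascade = pro_nt_bound_p75 and sortilin_present and not trk_active
    if jnk_cascade:
        return "apoptosis (JNK -> c-Jun -> pro-apoptotic gene transcription)"
    if trk_active:
        return "survival (Trk signaling, JNK cascade suppressed)"
    return "no strong signal in this toy model"

print(predicted_outcome(True, True, False))   # proNGF + sortilin, no active Trk -> apoptosis
print(predicted_outcome(True, True, True))    # active Trk suppresses the JNK cascade
print(predicted_outcome(True, False, False))  # without sortilin, proNT-driven apoptosis is suppressed
```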
For these reasons, compared to neurons in the PNS, neurons of the CNS are less sensitive to the absence of a single neurotrophin or neurotrophin receptor during development, with the exception of neurons in the thalamus and substantia nigra. [ 22 ] Gene knockout experiments were conducted to identify the neuronal populations in both the PNS and CNS that were affected by the loss of different neurotrophins during development, and the extent to which these populations were affected. [ 22 ] These knockout experiments resulted in the loss of several neuron populations, including populations in the retina, the cholinergic brainstem and the spinal cord. [ 22 ] [ 24 ] It was found that NGF-knockout mice had losses of a majority of their dorsal root ganglia (DRG), trigeminal ganglia and superior cervical ganglia. [ 22 ] [ 28 ] The viability of these mice was poor. [ 22 ] The BDNF-knockout mice had losses of a majority of their vestibular ganglia and moderate losses of their DRG, [ 33 ] trigeminal ganglia, nodose petrosal ganglia and cochlear ganglia. [ 22 ] [ 28 ] In addition, they had minor losses of their facial motoneurons located in the CNS. [ 22 ] [ 28 ] The viability of these mice was moderate. [ 22 ] The NT-4-knockout mice had moderate losses of their nodose petrosal ganglia and minor losses of their DRG, trigeminal ganglia and vestibular ganglia. [ 22 ] [ 28 ] The NT-4-knockout mice also had minor losses of facial motoneurons. [ 22 ] [ 28 ] These mice were very viable. [ 22 ] The NT-3-knockout mice had losses of a majority of their DRG, trigeminal ganglia, cochlear ganglia and superior cervical ganglia, and moderate losses of nodose petrosal ganglia and vestibular ganglia. [ 22 ] [ 28 ] In addition, the NT-3-knockout mice had moderate losses of spinal motoneurons. [ 22 ] [ 28 ] These mice had very poor viability. [ 22 ] These results show that the absence of different neurotrophins results in losses of different neuron populations (mainly in the PNS). [ 22 ] Furthermore, the absence of the neurotrophin survival signal leads to apoptosis. [ 22 ]
https://en.wikipedia.org/wiki/Neurotrophin
Neurotrophin mimetics are small molecules or peptide-like molecules that can modulate the action of the neurotrophin receptors. One of the main causes of neurodegeneration involves changes in the expression of neurotrophins (NTs) and/or their receptors (TrkA, TrkB, TrkC and p75NTR). Indeed, these imbalances or changes in activity lead to neuronal damage, resulting in neurological and neurodegenerative conditions. The therapeutic properties of neurotrophins have attracted the attention of many researchers over the years, but poor pharmacokinetic properties, such as reduced bioavailability and low metabolic stability, together with the induction of hyperalgesia, the inability to penetrate the blood–brain barrier and short half-lives, render the large neurotrophin proteins unsuitable for use as drugs. [ 1 ] For this reason, several efforts have been made to develop neurotrophin mimetics (small molecules and peptidomimetics) that can modulate the action of the neurotrophin receptors (Trks and p75NTR) and possess drug-like pharmacokinetic and pharmacodynamic profiles. Specifically, these mimetics can be classified as TrkA and TrkB receptor agonists and p75NTR modulators/antagonists. [ 2 ] Among the TrkA agonists, the small molecule gambogic amide exerts potent neurotrophic activity, decreasing apoptosis in primary hippocampal neurons. [ 3 ] The non-peptidic TrkA agonist MT2 protects neurons from Aβ amyloid-mediated death in NGF-deficient neurons, [ 4 ] and talaumidin and its derivatives show neuroprotective effects, promoting neurite outgrowth in PC12 cells. [ 5 ] Furthermore, the peptidomimetic cerebrolysin is known for its protective role in Alzheimer's disease (AD). [ 6 ] It was shown to improve the activities of daily living and the psychiatric symptoms in patients with mild to severe forms of AD, after intravenous administration in a double-blind trial. [ 7 ] In addition, the cyclic peptide tavilermide (MIM-D3), acting as a partial TrkA receptor agonist, showed a relevant improvement in the cognitive capacities of treated aged rats, leading to a selective survival of the cholinergic neurons. [ 8 ] A phase 3 clinical trial of 5% and 1% tavilermide ophthalmic solutions for the treatment of dry eye was completed in 2020 (NCT03925727), with positive results concerning safety and efficacy. Recent studies demonstrated the neurotrophic activity of carvacrol, which induces neurite outgrowth and phosphorylation of TrkA in cells deprived of NGF. [ 9 ] The same research group investigated the neurotrophic effect of the well-known antibiotic doxycycline, finding that it prevents amyloid toxicity in a Drosophila model of AD both in vitro and in vivo and induces neuritogenesis by activation of TrkA. [ 10 ] Additionally, some novel DHEA derivatives were shown to be TrkA agonists. In particular, the C17-spiroepoxy derivative BNN-27 [ 11 ] induces phosphorylation of TrkA in neuronal and glial cells in culture and exerts an antiapoptotic effect without inducing hyperalgesia. [ 12 ] Moreover, it improved memory performance in rats after i.p. administration [ 13 ] and reversed myelin loss in cuprizone-induced demyelination in vivo. [ 14 ] Moreover, the C17-spirocyclopropyl DHEA derivatives ENT-A010 and ENT-A013 were shown to be potent TrkA agonists. [ 15 ] [ 16 ] In particular, ENT-A010 acts as a dual TrkA and TrkB agonist, while ENT-A013 acts as a selective TrkA agonist.
Both induce phosphorylation of TrkA and its downstream signaling pathways , and promote cell survival of PC12 cells from serum deprivation. In addition, they exhibit potent neuroprotective effects in dorsal root ganglia and anti-amyloid activity in hippocampal neurons. [ 15 ] [ 16 ] TrkB agonists have received extensive interest from the scientific community resulting in the synthesis and biological evaluation of a large number of mimetics. Deoxygedunin , with a selective TrkB activity, is able to promote axon regeneration in topical treatments. [ 17 ] Furthermore, it shows efficacy in two Parkinson's disease (PD) animal models, leading to the protection of locomotor function and the reduction of neuronal death in dopaminergic neurons . [ 18 ] A number of studies corroborated that the flavonoid 7,8-Dihydroxyflavone (7,8-DHF) shows neuroprotection in PD and Huntington's disease (HD) models [ 19 ] [ 20 ] together with antioxidant activity [ 21 ] and enhancement of motor neuronal survival, motor function and spine density in amyotrophic lateral sclerosis (ALS) model. [ 22 ] The benzothiazole riluzole exerts neuroprotective effects by increasing BDNF and GDNF levels with improvement of motor neuron survival. It has been approved for the treatment of ALS and delays the onset of ventilator-dependence or tracheostomy in some people and may increase survival by two to three months. [ 23 ] Furthermore, several combinations of riluzole with other drugs are in clinical trials (NCT02588677, NCT03127267). Brimonidine exerts neuroprotective effects in retinal ganglion cells (RGCs) through up-regulation of the expression of BDNF in these cells. [ 24 ] It is used in the treatment of glaucoma as eye drops to reduce intraocular pressure (IOP) under the brand name Lumify®. Different drugs, used against PD also behave as neurotrophin mimetics such as rotigotine , selegiline , rasagiline , memantine and levodopa interacting with TrkB and increasing BDNF expression. [ 25 ] Furthermore, of particular note, the groups of F. Longo and S. Massa discovered small molecule neurotrophic mimetics exhibiting specificity for TrkB at nanomolar concentrations . [ 26 ] In particular, LM22A-4 , prevents neuronal death in in vitro models of AD, HD and PD. [ 27 ] Among the peptidomimetic TrkB agonists, the dimeric dipeptide GSB-106 showed neurotrophic and neuroprotective effects by specific activation of TrkB and its signaling pathways . [ 28 ] [ 29 ] Furthermore, the tricyclic dimeric peptide TDP6 acts as a TrkB agonist mimicking BDNF and induces autophosphorylation of TrkB in primary oligodendrocyte cultures, leading to oligodendrocyte myelination. [ 30 ] Regarding DHEA derivatives, the C17-spiroepoxy analogue, BNN-20 , binds with high affinity to TrkB, showing antiapoptotic activity in vitro . Its neuroprotective activity was analyzed in the Weaver mouse genetic model of PD in which long term administration of BNN-20 protects the dopaminergic neurons by mimicking BDNF and induces antiapoptotic, antioxidant and anti-inflammatory effects. [ 11 ] [ 31 ] In this class it is worthwhile to highlight the small non-peptide molecules LM22A-24 and LM11A-31 developed by Longo and Massa. Through the modulation of p75NTR activity, these compounds downregulate degenerative and upregulate trophic signaling. [ 32 ] In particular, LM11A-31 was found to inhibit several pathophysiological mechanisms involved in AD and related to p75NTR. [ 33 ] [ 34 ] Oral administration in AD mice models reduces degeneration of cholinergic neurites. 
[ 34 ] Furthermore, by direct activation of p75NTR signaling and inhibition of the apoptotic pathway, it improves motor function in a spinal cord injury (SCI) mouse model and produces an antiapoptotic effect in mice after traumatic brain injury (TBI). [ 35 ] [ 36 ] In February 2017, a phase 2 clinical trial began, focusing on the evaluation of the safety of LM11A-31 in mild to moderate AD (NCT03069014). This study was completed in June 2020, but the results have not been published yet. Another drug belonging to the class of p75NTR antagonists is THX-B, which inhibits NGF-p75NTR binding and prevents the death of RGCs in axotomy and glaucoma. In addition, in combination with LM22A-24, THX-B delays the loss of retinal structure, prevents RGC degeneration and preserves ganglion cell layer/inner plexiform layer thickness with better efficacy than LM22A-24 alone. [ 37 ] Finally, a p75NTR antagonist, EVT901, was able to improve functional outcomes in two models of traumatic brain injury. [ 38 ] Furthermore, it was found to reduce inflammation in vivo in the TGFAD344 rat model of AD. [ 39 ] There are a number of natural products with neurotrophic activity, which results from several mechanisms, including enhancement of BDNF gene transcription, upregulation of the expression of BDNF and TrkB, and extracellular signal-regulated kinase (ERK) and CREB signalling. [ 40 ] [ 41 ] [ 42 ] The first discovered non-protein neurotrophic natural product was lactacystin, isolated from a culture broth of Streptomyces sp. [ 40 ] Magnolol and honokiol, the main constituents of Magnolia officinalis and Magnolia obovata stem bark, have been reported to have neurotrophic activity in primary cultured rat cortical neurons by enhancing BDNF expression. [ 41 ] [ 42 ] Merrilactone A, jiadifenin, jiadifenolide, (1R,10S)-2-oxo-3,4-dehydroxyneomajucin, jiadifenoxolane A, (2R)-hydroxynorneomajucin, 11-O-debenzoyltashironin, tricycloillicinone, and bicycloillicinone, natural products of the Illicium family, have been shown to promote neurite outgrowth in primary cultures of cortical neurons of fetal rats. [ 40 ] [ 41 ] Neurotrophic properties are also possessed by several members of the Lycopodium alkaloids (huperzine A, lyconadins, complanadine A and B, and nankakurine A and B). Studies have shown that huperzine A can elevate the levels of NGF and BDNF. Synthesis of NGF can be upregulated by administration of cyathane diterpenoids, specifically erinacines, scabronines and cyrneines. [ 40 ] Some flavonoids, isoflavonoids and neoflavonoids were found to have neuroprotective activity. Among the effective flavonoids, luteolin from Lonicera japonica sp., isorhamnetin from Opuntia ficus-indica, genistein from Genista tinctoria, and calycosin from Astragalus membranaceus showed the most promising effects by increasing the mRNA expression and protein secretion of NGF, GDNF, and BDNF. [ 42 ] Paecilomycine A and spirotenuipesines A and B, members of the trichothecenes isolated from the fruiting bodies of Paecilomyces tenuipes, have significant neurotrophic profiles, especially paecilomycine A, which can stimulate the synthesis of neurotrophic factors. [ 40 ] Polyprenylated acylphloroglucinols (PPAPs), represented by hyperforin, hypericin and garsubellin A, have neurotrophic-like properties. Hyperforin, isolated from the herb St. John's wort (Hypericum perforatum), can stimulate the upregulation of the TrkB receptor.
Besides natural products, there are some small molecules of natural origin that exert neurotrophic activities, such as: Panaxytriol (promotes NGF-induced neurite outgrowth in PC-12 cells); 7,8-Dihydroxyflavone (TrkB activator); Deoxygedunin (BDNF mimetic); Kansuinin E (promotes neurotrophic activity, most likely through TrkA activation); Tripchlorolide (stimulates expression of BDNF mRNA); Fucoxanthin (increases BDNF production and activates the PKA/CREB pathway); Silibinin (activates the hippocampal ROS-BDNF-TrkB pathway). [ 40 ] [ 42 ]
https://en.wikipedia.org/wiki/Neurotrophin_mimetics
A neurotropic virus is a virus that is capable of infecting nerve tissue. [ 1 ] A neurotropic virus is said to be neuroinvasive if it is capable of accessing or entering the nervous system and neurovirulent if it is capable of causing disease within the nervous system. Both terms are often applied to central nervous system infections, although some neurotropic viruses are highly neuroinvasive for the peripheral nervous system (e.g. herpes simplex virus). Important neuroinvasive viruses include poliovirus, which is highly neurovirulent but weakly neuroinvasive, and rabies virus, which is highly neurovirulent but requires tissue trauma (often resulting from an animal bite) to become neuroinvasive. Using these definitions, herpes simplex virus is highly neuroinvasive for the peripheral nervous system and rarely neuroinvasive for the central nervous system, but in the latter case it may cause herpesviral encephalitis and is therefore considered highly neurovirulent. Many arthropod-borne neurotropic viruses, like West Nile virus, spread to the brain primarily via the bloodstream by crossing the blood–brain barrier, in what is called hematogenous dissemination. [ citation needed ] Neurotropic viruses that cause infection include Japanese encephalitis, Venezuelan equine encephalitis, and California encephalitis viruses; polio, coxsackie, echo, mumps, measles, influenza and rabies viruses; as well as members of the family Herpesviridae such as herpes simplex, varicella-zoster, Epstein–Barr, cytomegalovirus and HHV-6 viruses. [ 2 ] All seven of the known human coronaviruses are neurotropic, the common cold coronaviruses mainly in vulnerable populations, while the more virulent SARS-CoV-1, MERS and SARS-CoV-2 frequently attack the nervous system (primarily in animal models). [ 3 ] Those causing latent infection include herpes simplex and varicella-zoster viruses. Those causing slow virus infection include measles virus, rubella virus and JC virus, and retroviruses such as human T-lymphotropic virus 1 and HIV. [ citation needed ] Neurotropic viruses are increasingly being exploited as research tools and for their potential use in treatment. In particular, they are being used to improve the understanding of the nervous system's circuits. [ 4 ] [ 5 ] Several diseases, including transmissible spongiform encephalopathy, kuru, and Creutzfeldt–Jakob disease, resemble a slow neurotropic virus infection but are, in fact, caused by the infectious proteins known as prions. [ 2 ]
https://en.wikipedia.org/wiki/Neurotropic_virus
Neurotubules are microtubules found in neurons in nervous tissues. [ 1 ] Along with neurofilaments and microfilaments, they form the cytoskeleton of neurons. Neurotubules are undivided hollow cylinders that are made up of tubulin protein polymers [ 2 ] and are arrayed parallel to the plasma membrane in neurons. [ 3 ] Neurotubules have an outer diameter of about 23 nm and an inner diameter, also known as the central core, of about 12 nm. The wall of a neurotubule is about 5 nm in width. There is a non-opaque clear zone surrounding the neurotubule, and it is about 40 nm in diameter. [ 3 ] Like microtubules, neurotubules are highly dynamic and their length can be adjusted by polymerization and depolymerization of tubulin. [ 4 ] Despite having similar mechanical properties, neurotubules are distinct from microtubules found in other cell types with regard to their function and intracellular arrangement. Most neurotubules are not anchored in the microtubule organizing center (MTOC) as conventional microtubules are. Instead, they are released for transport into dendrites and axons after their nucleation in the centrosome. Therefore, both ends of a neurotubule terminate in the cytoplasm. [ 5 ] Neurotubules are crucial in various cellular processes in neurons. Together with neurofilaments, they help to maintain the shape of a neuron and provide mechanical support. Neurotubules also aid the transportation of organelles, vesicles containing neurotransmitters, messenger RNA and other intracellular molecules inside a neuron. [ 6 ] Like microtubules, neurotubules are made up of protein polymers of α-tubulin and β-tubulin, globular proteins that are closely related. They join together to form a dimer, called tubulin. Neurotubules are generally assembled from 13 protofilaments which are polymerized from tubulin dimers. As a tubulin dimer consists of one α-tubulin and one β-tubulin, one end of the neurotubule is exposed with α-tubulin and the other end with β-tubulin; these two ends give the neurotubule its polarity – the plus (+) end and the minus (-) end. The β-tubulin subunit is exposed at the plus (+) end. The two ends differ in their growth rate: the plus (+) end is the fast-growing end while the minus (-) end is the slow-growing end. Both ends have their own rates of polymerization and depolymerization of tubulin dimers; the net balance between the two determines whether tubulin is added or lost, and hence the length of the neurotubule. [ 4 ] The growth of neurotubules is regulated by dynamic instability. [ 7 ] It is characterized by distinct phases of growth and rapid shrinkage. The transition from growth to rapid shrinkage is called a 'catastrophe'. The reverse is called a 'rescue'. Neurons have a polarized neurotubule network. [ 8 ] Axons of most neurons contain neurotubules with the plus (+) end uniformly pointing towards the axon terminal and the minus (-) end oriented towards the cell body, similar to the general orientation of microtubules in other cell types. On the other hand, dendrites contain neurotubules with mixed polarities. Half of them point their plus (+) end towards the dendritic tip and the other half point it towards the cell body, reminiscent of the anti-parallel microtubule array of the mitotic spindle. The polarized neurotubule network forms the basis for selective cargo trafficking into axons and dendrites. [ 9 ]
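The growth-catastrophe-shrinkage-rescue picture of dynamic instability described above can be illustrated with a minimal stochastic simulation. The sketch below is only an illustration, not a model from the cited literature; the switching probabilities and growth/shrinkage speeds are arbitrary placeholder values chosen to produce the characteristic saw-tooth length history.

```python
import random

# Minimal two-state (growing / shrinking) model of dynamic instability.
# All parameters are illustrative placeholders, not measured values.
GROW_SPEED = 0.03      # length gained per time step while growing (µm)
SHRINK_SPEED = 0.10    # length lost per time step while shrinking (µm)
P_CATASTROPHE = 0.005  # probability per step of switching growth -> shrinkage
P_RESCUE = 0.02        # probability per step of switching shrinkage -> growth

def simulate(steps=5000, seed=1):
    random.seed(seed)
    length, growing = 1.0, True
    history = []
    for _ in range(steps):
        if growing:
            length += GROW_SPEED
            if random.random() < P_CATASTROPHE:
                growing = False          # catastrophe
        else:
            length = max(0.0, length - SHRINK_SPEED)
            if length == 0.0 or random.random() < P_RESCUE:
                growing = True           # rescue (or regrowth from zero)
        history.append(length)
    return history

if __name__ == "__main__":
    trace = simulate()
    print(f"final length: {trace[-1]:.2f} µm, max length: {max(trace):.2f} µm")
```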
As an example of the importance of this polarized arrangement, when mutations occur in dynein, a motor protein that is crucial in maintaining the uniform orientation of axonal neurotubules, the neurotubule polarity in the axon becomes mixed. [ 10 ] Dendritic proteins are mis-trafficked into axons as a result. [ 11 ] In unpolarized neurons, the neurites contain 80% neurotubules with the plus (+) end facing the terminal. [ citation needed ] Neurotubules are responsible for the trafficking of intracellular materials. The cargoes are transported by motor proteins that use neurotubules as a 'track'. Axonal transport can be classified according to speed - fast or slow - and according to direction - anterograde or retrograde. Fast axonal transport has a rate of 50–500 mm per day, while slow axonal transport was found to be 0.4 mm per day in goldfish and 1–10 mm per day in mammalian nerve. Transport of insoluble protein contributes to the fast movement, while slow transport carries up to 40%–50% soluble protein. [ 12 ] The speed of transport depends on the type of cargo being transported. Neurotrophins, a family of proteins important for the survival of neurons, as well as organelles such as mitochondria and endosomes, are transported at a fast rate. In contrast, structural proteins such as tubulin and neurofilament subunits are transported at lower rates. Proteins that are transported from the spinal cord to the foot can take up to a year to complete the journey. [ 13 ] Anterograde transport refers to the transportation of cargoes from the minus (-) end to the plus (+) end, whereas retrograde transport is the transportation of cargoes in the opposite direction. Anterograde transport is usually transportation from the cell body to the periphery of the neuron, whereas retrograde transport brings organelles and vesicles from the axon terminus back to the cell body. Anterograde transport is driven by kinesins, a class of motor proteins. Kinesins have two head domains which work together like feet – one binds to the neurotubule, and then the other binds while the former dissociates. The binding of ATP raises the affinity of kinesins for neurotubules. When ATP binds to one head domain, a conformational change is induced in that head domain, causing it to bind tightly to the neurotubule. Another ATP then binds to the other head domain while the former ATP is hydrolyzed and that head domain dissociates. The process repeats in cycles so that kinesins move along the neurotubules together with the organelle and vesicular cargoes they carry. [ 14 ] Retrograde transport is driven by dyneins, also a class of motor proteins. Dynein shares a similar structure with kinesins, as well as a similar transport mechanism. It transports cargoes from the periphery to the cell body in neurons. Microtubule-associated proteins (MAPs) are proteins that interact with microtubules by binding to their tubulin subunits and regulating their stability. The MAP make-up of neurotubules is notably different from that of microtubules of non-neuronal cells. [ 15 ] For example, type II MAPs are exclusively found in neurons and not in other cells. The most well-studied ones include MAP2 and tau. MAPs are differentially distributed within the neuronal cytoplasm. Their distribution also varies across different stages of development of a neuron. A juvenile isoform of MAP2 is present on neurotubules of axons and dendrites of developing neurons but becomes down-regulated as neurons mature.
The adult isoform of MAP2 is enriched in the neurotubules of dendrites and is virtually absent from axonal neurotubules. [ 16 ] In contrast, tau is absent from neurotubules of dendrites and its presence is limited to axonal neurotubules. The phosphorylation of tau at certain sites is required for tau to bind to neurotubules. In a healthy neuron, this process does not occur to a significant degree in dendrites, accounting for the absence of tau on dendritic neurotubules. The binding of tau of different isoforms and with different levels of phosphorylation regulates the stability of neurotubules. It has been found that neurotubules of neurons in the embryonic central nervous system contain more highly phosphorylated tau than those in adults. [ 17 ] Additionally, tau is responsible for neurotubule bundling. [ 18 ] Microtubule plus end tracking proteins (+TIPs) are MAPs that accumulate at the plus end of microtubules. In neurotubules, +TIPs control the neurotubule dynamics, the direction of growth, and the interaction with components of the cell cortex. They are important in neurite extension and axon outgrowth. [ 19 ] Many other non-neuron-specific MAPs, such as MAP1B and MAP6, are found on neurotubules. Moreover, the interaction between actin and some MAPs provides a potential link between neurotubules and actin filaments. [ 20 ] Disruption in the integrity and dynamics of neurotubules can interfere with the cellular functions they perform and cause various neurological disorders. In Alzheimer's disease, hyperphosphorylation of tau protein causes the dissociation of tau from neurotubules and tau misfolding. The aggregation of misfolded tau forms insoluble neurofibrillary tangles, which are a characteristic finding in Alzheimer's disease. [ 21 ] This pathological change is called tauopathy. Neurotubules become prone to disintegration by microtubule-severing proteins when tau dissociates. [ 22 ] As a result, essential processes in the neuron such as axonal transport and neural communication are disrupted, forming the basis for neurodegeneration. [ 23 ] Neurotubule disintegration is thought to occur by different mechanisms in axons and in dendrites. The detachment of tau destabilizes the neurotubules by allowing excess severing by katanin, causing them to disintegrate. Neurotubule disintegration in the axon disrupts transport of mRNA and signalling molecules to the axon terminal. [ 22 ] For dendrites, new evidence suggests that an abnormal tau invasion into dendrites causes a heightened level of dendritic TTLL6 (tubulin polyglutamylase TTLL6), which elevates the polyglutamylation status of the neurotubules in dendrites. [ 22 ] Because spastin displays a strong preference for polyglutamylated microtubules, dendritic neurotubules become susceptible to spastin-induced disintegration. [ 22 ] The loss of neurotubule networks in dendrites and axons, along with the formation of neurofibrillary tangles, results in the impairment of the trafficking of important cargoes across the cell, which can eventually lead to apoptosis. [ 24 ] Lissencephaly is a rare congenital condition in which the cerebrum lacks its folds (gyri) and grooves (sulci), making the brain surface appear smooth. It is caused by defective neuronal migration. [ 25 ] The failure of post-mitotic neurons to reach their proper positions leads to the formation of a disorganized and thickened four-layer neocortex instead of the normal six-layer neocortex.
The severity of lissencephaly ranges from a complete loss of brain folds (agyria) to a general reduction in cortical folds (pachygyria). Neurotubules are central to the migration mechanism of neurons. The defective neural migration in individuals affected by lissencephaly is caused by mutations in neurotubule-related genes, such as LIS1 and DCX. [ 26 ] LIS1 encodes an adaptor protein, Lis1, that is responsible for the stabilization of neurotubules during neuronal migration by minimizing neurotubule catastrophe. It also regulates the motor protein dynein, which is crucial in the translocation of the nucleus along neurotubules. This action propels the soma of the neuron forward, which is an essential step in neuronal migration. [ 27 ] In addition, mutations in LIS1 have been found to disrupt the uniform plus-end-distal polarity in axons in animal models, causing the mistrafficking of dendritic proteins into axons. [ 11 ] On the other hand, DCX encodes the protein doublecortin, which interacts with Lis1 in addition to supporting the 13-protofilament structure of neurotubules. Chemotherapy-induced peripheral neuropathy is a pathological change in neurons caused by the disruption of neurotubule dynamics by chemotherapy drugs, manifesting as pain, numbness, tingling sensations and muscle weakness in the limbs. It is an irreversible condition that affects about one-third of chemotherapy patients. [ 28 ] Tubulin inhibitors inhibit mitosis in cancer cells by affecting the stability and dynamics of the microtubules that form the mitotic spindle responsible for chromosome segregation during mitosis, thereby suppressing tumor growth. However, the same drugs also affect neurotubules in neurons. Vinblastine binds to free tubulin and lowers its polymerization capacity, promoting neurotubule depolymerization. On the other hand, paclitaxel binds to the cap of neurotubules, which prevents the conversion of tubulin-bound GTP into GDP, a process that would otherwise promote neurotubule depolymerization. In in vitro neurons treated with paclitaxel, the polarity pattern of neurotubules is disturbed, which can incur long-term neuronal damage. In addition, over-stabilization of neurotubules interferes with their ability to perform essential cellular functions in neurons. [ 29 ]
https://en.wikipedia.org/wiki/Neurotubule
Neurturin (NRTN) is a protein that is encoded in humans by the NRTN gene. Neurturin belongs to the glial cell line-derived neurotrophic factor (GDNF) family of neurotrophic factors, which regulate the survival and function of neurons. Neurturin's role as a growth factor places it in the transforming growth factor beta (TGF-beta) subfamily along with its homologs persephin, artemin, and GDNF. [ 1 ] It shares a 42% similarity in amino acid sequence with mature GDNF. [ 2 ] It is also considered a trophic factor and critical in the development and growth of neurons in the brain. [ 3 ] Neurotrophic factors like neurturin have been tested in several clinical trial settings for the potential treatment of neurodegenerative diseases, specifically Parkinson's disease. [ 4 ] Neurturin is encoded by the NRTN gene located on chromosome 19 in humans and has been shown to exert potent effects on the survival and function of developing and mature midbrain dopaminergic (DA) neurons in vitro. [ 5 ] In vivo, direct administration of neurturin into the substantia nigra of mouse models also protects mature DA neurons. [ 5 ] In addition, neurturin has been shown to support the survival of several other neurons, including sympathetic and sensory neurons of the dorsal root ganglia. [ 6 ] Knockout mice have shown that neurturin does not appear essential for survival. However, evidence shows retarded growth of enteric, sensory and parasympathetic neurons in mice upon the removal of neurturin receptors. [ 6 ] Neurturin signaling is mediated by the activation of a multi-component receptor system including the RET tyrosine kinase, a cell-surface-bound GDNF family receptor-α (GFRα) protein, and a glycosyl phosphatidylinositol (GPI)-linked protein. Neurturin preferentially binds to the GFRα2 co-receptor. Upon assembly of the complex, specific tyrosine residues are phosphorylated within two molecules of RET that are brought together, initiating signal transduction and the MAP kinase signaling pathway. [ 7 ] Neurturin has been shown to upregulate B1 (bradykinin) receptors in neurons of mice, indicating a possible influence on pain and inflammation pathways. [ 8 ] In addition, knockout mice have shown that in the absence of neurturin an increased acetylcholine response is observed. [ 9 ] The exact role and function of neurturin in multiple signaling pathways is largely unknown. The most studied is neurturin's role in neurodegenerative diseases such as Parkinson's disease and Huntington's disease, where several rat studies have implicated neurturin in rescuing neurons. [ 5 ] However, these results have never been observed in humans. Hirschsprung disease, an autosomal dominant genetic disorder, is characterized by complete absence of neuronal ganglion cells from the intestinal tract. Previous studies indicate a role of NRTN gene mutations in the disease. One study showed evidence that a mutation in the NRTN gene was not enough on its own to cause onset of the disease; however, when coupled with a mutation in the RET gene, disease was present in family members as well as the individual. [ 10 ] A more recent study showed NRTN variants present in individuals with Hirschsprung disease. [ 11 ] However, RET-associated mutations were not found, and in one variant RET phosphorylation levels were reduced, which has the potential to have downstream effects on the proliferation and differentiation of neural crest cells.
Also, high levels of expression of neurturin were found to be associated with nephroblastoma, indicating the possibility that the growth factor could be influencing differentiation. [ 12 ] Lastly, a study also associated neurturin deficiency in mice with keratoconjunctivitis and dry eye. [ 13 ] Evidence showing neurturin's role in neuron survival and maintenance has made it a popular candidate for the potential treatment or reversal of neurodegeneration. In addition, mouse models have shown that dying neurons exposed to trophic factors can be rescued. Neurturin is an example of a trophic factor that is difficult to utilize clinically because of its inability to cross the blood-brain barrier of the CNS (central nervous system). Ceregene sponsored a double-blind phase II clinical trial of CERE-120, a viral-vector-mediated gene transfer drug that allows for the continuous delivery of neurturin to the nigrostriatal system. [ 14 ] The hope was to reverse damaged and diseased tissue in Parkinson's patients and overall slow the progression of the disease. However, results were inconclusive and showed that while the drug appears to be relatively safe, there was no statistically significant data supporting the improvement of motor function or neuronal health. [ citation needed ] Neurturin's therapeutic potential remains unknown, and future studies aim to improve delivery of the drug. [ 15 ]
https://en.wikipedia.org/wiki/Neurturin
Neuston, also called pleuston, are organisms that live at the surface of a body of water, such as an ocean, estuary, lake, river, wetland or pond. Neuston can live on top of the water surface or submersed just below the water surface. In addition, microorganisms can exist in the surface microlayer that forms between the top- and the under-side of the water surface. Neuston has been defined as "organisms living at the air/water interface of freshwater, estuarine, and marine habitats or referring to the biota on or directly below the water's surface layer." [ 1 ] Neuston can be informally separated into two groups: the phytoneuston, which are autotrophs floating at the water surface, including cyanobacteria, filamentous algae and free-floating aquatic plants (e.g. mosquito fern, duckweed and water lettuce); and the zooneuston, which are floating heterotrophs such as protists (e.g. ciliates) and metazoans (aquatic animals). This article is mainly concerned with metazoan zooneuston. The word "neuston" comes from the Greek neustos, meaning "swimming", and the noun suffix -on (as in "plankton"). [ 2 ] The term first appears in the biological literature in 1917. [ 3 ] The alternative term pleuston comes from the Greek plein, meaning "to sail or float". The first known use of this word was in 1909, before the first known use of neuston. [ 4 ] In the past, various authors have attempted distinctions between neuston and pleuston, but these distinctions have not been widely adopted. As of 2021, the two terms are usually used somewhat interchangeably, and neuston is used more often than pleuston. The neuston of the surface layer is one of the lesser-known aquatic ecological groups. [ 5 ] The term was first used in 1917 by Naumann to describe species associated with the surface layer of freshwater habitats. [ 3 ] Later, in 1971, Zaitsev identified the neuston composition of marine waters. [ 6 ] These populations include microscopic species plus various plant and animal taxa, such as phytoplankton and zooplankton, living in this region. [ 6 ] [ 7 ] In 2002, Gladyshev further characterised the major physical and chemical dynamics of the surface layer influencing the composition of, and relationships among, the various neustonic populations. [ 8 ] [ 7 ] The neustonic community structure is conditioned by sunlight and an array of endogenous (organic matter, respiratory, photosynthetic and decompositional processes) and exogenous (atmospheric deposition, inorganic matter, winds, wave action, precipitation, UV radiation, oceanic currents, surface temperature) variables and processes affecting nutrient inputs and recycling. [ 7 ] [ 9 ] [ 10 ] Furthermore, the neuston provides a food source to the zooplankton migrating from deeper layers to the surface, [ 11 ] as well as to seabirds roaming over the oceans. [ 12 ] For these reasons, the neustonic community is believed to play a critical role in the structure and function of marine food webs. Yet research on neuston communities to date has focused predominantly on geographically limited regions of the ocean [ 13 ] [ 11 ] [ 14 ] [ 15 ] [ 10 ] or on coastal areas. [ 16 ] [ 17 ] [ 18 ] Consequently, neuston complexity is still poorly understood, as studies on the community structure and taxonomic composition of organisms inhabiting this ecological niche remain few, [ 10 ] and global-scale analyses are still lacking. [ 5 ] There are different ways neuston can be categorised.
Kennish divides them by their physical position into two groups. [ 1 ] To these can be added the organisms living in the microlayer at the interface between air and water. Marshall and Burchardt divide neuston into three ecological categories. [ 7 ] [ 5 ] Freshwater neuston, organisms living at lake or pond surfaces or in slow-moving parts of rivers and streams, include beetles (see whirligig beetle), protozoans, bacteria and spiders (see fishing spider and diving bell spider). Springtails in the genera Podura and Sminthurides are almost exclusively neustonic, while Hypogastrura species often aggregate on pond surfaces. Water striders such as Gerris are common examples of insects that support their weight on the water's surface tension. Terrestrial environmental factors such as flood pulses and droughts also affect neuston, and can lead to greater or lesser variation within species. When flood pulses (an abiotic factor) occur, connectivity between different aquatic environments is established. Species that live in environments with irregular flood patterns tend to show more variation, or may even decline in numbers and variation; something similar happens when droughts occur. [ 19 ] Red fire ants have adapted to contend with both flooding and drought conditions. If the ants sense increased water levels in their nests, they link together and form a ball or raft that floats, with the workers on the outside and the queen inside. [ 20 ] [ 21 ] [ 22 ] The brood is transported to the highest surface. [ 23 ] The brood, except for the eggs and smaller larvae, is also used as the founding structure of the raft. Before submerging, the ants will tip themselves into the water and sever connections with the dry land. In some cases, workers may deliberately remove all males from the raft, resulting in the males drowning. The longevity of a raft can be as long as 12 days. Ants that are trapped underwater escape by lifting themselves to the surface using bubbles collected from the submerged substrate. [ 23 ] Owing to their greater vulnerability to predators, red imported fire ants are significantly more aggressive when rafting. Workers tend to deliver higher doses of venom, which reduces the threat of other animals attacking. Due to this, and because a larger workforce of ants is available, rafts are potentially dangerous to those that encounter them. [ 24 ] The marine neuston, organisms living at the ocean surface, are one of the least studied planktonic groups. Neuston occupies a restricted ecological niche and is affected by a wide range of endogenous and exogenous processes, while also being a food source for zooplankton and fish migrating from the deep layers, and for seabirds. [ 5 ] The neustonic animals form a subset of the zooplankton community, which plays a pivotal role in the functioning of marine ecosystems. Zooplankton are partially responsible for the active energy flux between superficial and deep layers of the ocean. [ 25 ] [ 26 ] [ 27 ] Zooplankton species composition, biomass, and secondary production influence a wide range of trophic levels in marine communities, as they constitute a link between primary production and secondary consumers. [ 28 ] [ 29 ] [ 30 ] Copepods constitute the most abundant zooplankton taxon in terms of biomass and diversity worldwide. [ 31 ] [ 32 ]
Consequently, changes in their community composition can impact biogeochemical cycles [ 33 ] and might be indicative of the impacts of climate variability on ecosystem functioning. [ 34 ] [ 5 ] Historically, research on zooplankton assemblages has focused mainly on taxonomic studies and those related to community structure. [ 35 ] Recently, however, research has veered toward an alternative trait-based approach, [ 35 ] [ 29 ] [ 36 ] providing a perspective more focused on groups of species with analogous functional traits. This allows individuals to be classified into types characterized by the presence/absence of certain alleles of a gene, into size classes, ecological guilds, or functional groups (FGs). [ 37 ] Functional traits are phenotypes affecting organism fitness, growth, survival, and reproductive ability. [ 38 ] [ 30 ] These are regulated by the expression of genes within species, and the expression of traits regulates, in turn, the species' fitness under contrasting biotic and abiotic circumstances. [ 39 ] Moreover, a specific functional trait can also develop from the interactions between other traits and environmental conditions, [ 31 ] leading to a given trait grouping being favoured under certain conditions. Zooplankton traits can be classified according to ecological functions – feeding, growth, reproduction, survival – and other characteristics such as morphology, physiology, behaviour, or life history. [ 28 ] [ 40 ] [ 41 ] In particular, feeding strategies and trophic groups are relevant for establishing feeding efficiency and the associated predation risk. [ 42 ] Additionally, they facilitate the understanding of ecosystem services associated with zooplankton, such as the distribution of fisheries or biogeochemical cycling, [ 43 ] while also allowing the positioning of zooplankton taxa in the food web. [ 29 ] [ 44 ] [ 5 ] Coral-treaders are a genus of quite rare wingless marine bugs known only from coral reefs in the Indo-Pacific region. During low tide they move over the water surface around coral atolls and reefs in a manner similar to the more familiar water striders, and stay submerged in reef crevices during high tide.
https://en.wikipedia.org/wiki/Neuston
A neutral atom quantum computer is a modality of quantum computer built out of Rydberg atoms; [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] this modality has many commonalities with trapped-ion quantum computers. As of December 2023, the concept has been used to demonstrate a 48-logical-qubit processor. [ 6 ] [ 7 ] To perform computation, the atoms are first trapped in a magneto-optical trap. [ 6 ] Qubits are then encoded in the energy levels of the atoms. Initialization and operation of the computer is performed via the application of lasers on the qubits. [ 8 ] For example, lasers can accomplish arbitrary single qubit gates and a $CZ$ gate for universal quantum computation. The $CZ$ gate is carried out by leveraging the Rydberg blockade, which leads to strong interactions when the qubits are physically close to each other. To perform a $CZ$ gate, a Rydberg $\pi$ pulse is applied to the control qubit, a $2\pi$ pulse to the target qubit, and then another $\pi$ pulse to the control. [ 1 ] Measurement is performed at the end of the computation with a camera that generates an image of the outcome by measuring the fluorescence of the atoms. [ 6 ] Neutral atom quantum computing makes use of several technological advancements in the fields of laser cooling, magneto-optical trapping and optical tweezers. In one example of the architecture, [ 9 ] an array of atoms is loaded and laser cooled to micro-kelvin temperatures. In each of these atoms, two levels of the hyperfine ground subspace are isolated. The qubits are prepared in some initial state using optical pumping. Logic gates are performed using optical or microwave frequency fields, and the measurements are done using resonance fluorescence. Most of these architectures are based on rubidium, [ 10 ] caesium, [ 11 ] ytterbium, [ 12 ] [ 13 ] and strontium [ 14 ] atoms. Global single qubit gates on all the atoms can be done either by applying a microwave field, for qubits encoded in the hyperfine manifold such as Rb and Cs, or by applying an RF magnetic field, for qubits encoded in the nuclear spin such as Yb and Sr. Focused laser beams can be used to do single-site one-qubit rotations using a lambda-type three-level Raman scheme (see figure). In this scheme, the rotation between the qubit states is mediated by an intermediate excited state. Single qubit gate fidelities have been shown to be as high as 0.999 in state-of-the-art experiments. [ 15 ] [ 13 ] [ 16 ] To do universal quantum computation, at least one two-qubit entangling gate is needed. [ 17 ] Early proposals for gates included gates that depended on inter-atomic forces. [ 2 ] These forces are weak and the gates were predicted to be slow. The first fast gate [ citation needed ] based on Rydberg states was proposed for charged atoms, [ 18 ] making use of the principle of the Rydberg blockade. The principle was later transferred and developed further for neutral atoms. [ 5 ] Since then, most gates that have been proposed use this principle. [ citation needed ] Atoms that have been excited to a very large principal quantum number $n$ are known as Rydberg atoms. These highly excited atoms have several desirable properties, including long decay lifetimes and amplified couplings with electromagnetic fields. [ 19 ] The basic principle of Rydberg-mediated gates is called the Rydberg blockade. [ 20 ] Consider two neutral atoms in their respective ground states.
When they are close to each other, their interaction potential is dominated by the van der Waals force, $V_{qq} \approx \mu_B^2 / R^6$, where $\mu_B$ is the Bohr magneton and $R$ is the distance between the atoms. This interaction is very weak, around $10^{-5}$ Hz for $R = 10\ \mu\text{m}$. When one of the atoms is put into a Rydberg state (a state with very high principal quantum number), the interaction between the two atoms is dominated by the second-order dipole-dipole interaction, which is also weak. When both of the atoms are excited to a Rydberg state, the resonant dipole-dipole interaction becomes $V_{rr} = (n^2 e a_0)^2 / R^3$, where $a_0$ is the Bohr radius. This interaction is around $100$ MHz at $R = 10\ \mu\text{m}$, around twelve orders of magnitude larger. This interaction potential induces a blockade wherein, if one atom is excited to a Rydberg state, the other nearby atoms cannot be excited to a Rydberg state because the two-atom Rydberg state is far detuned. This phenomenon is called the Rydberg blockade. Rydberg-mediated gates make use of this blockade as a control mechanism to implement two-qubit controlled gates. Consider the physics induced by this blockade for two isolated neutral atoms in a magneto-optical trap. Ignoring the coupling of the hyperfine levels that make up the qubit and the motional degrees of freedom, the Hamiltonian of this system can be written as $H = H_1 + H_2 + V_{rr}\,|r\rangle_1\langle r| \otimes |r\rangle_2\langle r|$, where $H_i = \tfrac{1}{2}(\Omega\,|1\rangle_i\langle r| + \Omega^*\,|r\rangle_i\langle 1|) - \Delta\,|r\rangle_i\langle r|$ is the Hamiltonian of the $i$-th atom, $\Omega$ is the Rabi frequency of the coupling between the Rydberg state and the $|1\rangle$ state, and $\Delta$ is the detuning (see the figure to the right for the level diagram). When $|V_{rr}| \gg |\Omega|, |\Delta|$, we are in the so-called Rydberg blockade regime. In this regime, the $|rr\rangle$ state is highly detuned from the rest of the system and is thus effectively decoupled. For the rest of this article, only the Rydberg blockade regime is considered. The physics of this Hamiltonian can be divided into several subspaces depending on the initial state. The $|00\rangle$ state is decoupled and does not evolve. Suppose only the $i$-th atom is in the $|1\rangle$ state ($|10\rangle$ or $|01\rangle$); then the Hamiltonian is given by $H_i$. This is the standard two-level Rabi Hamiltonian. It characterizes the "light shift" in a two-level system and has eigenvalues $E_{LS}^{(1)} = \tfrac{1}{2}(\Delta \pm \sqrt{\Omega^2 + \Delta^2})$. If both atoms are in the excited state $|11\rangle$, the effective system evolves in the subspace $\{|1r\rangle, |r1\rangle, |11\rangle\}$.
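As a numerical illustration of the blockade regime just described, the short sketch below builds the two-atom Hamiltonian in the $\{|11\rangle, |b\rangle, |rr\rangle\}$ subspace for resonant driving ($\Delta = 0$) and shows that, when $V_{rr} \gg \Omega$, the doubly excited state $|rr\rangle$ stays essentially unpopulated while $|11\rangle$ and the bright state exchange population at the collectively enhanced Rabi frequency $\sqrt{2}\,\Omega$. The numerical values of $\Omega$ and $V$ are arbitrary illustrative choices, not parameters from the cited experiments.

```python
import numpy as np
from scipy.linalg import expm

# Two-atom Rydberg blockade in the {|11>, |b>, |rr>} subspace, resonant drive (Delta = 0).
# Omega and V are illustrative values; only their ratio V/Omega matters here.
Omega = 2 * np.pi * 1.0      # single-atom Rabi frequency (rad per time unit)
V = 2 * np.pi * 100.0        # Rydberg-Rydberg interaction, V >> Omega (blockade regime)

g = np.sqrt(2) * Omega / 2   # collective coupling |11> <-> |b>
H = np.array([[0.0, g,   0.0],
              [g,   0.0, g  ],
              [0.0, g,   V  ]])

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in |11>
times = np.linspace(0.0, 2.0, 400)

max_p_rr = 0.0
for t in times:
    psi = expm(-1j * H * t) @ psi0
    max_p_rr = max(max_p_rr, abs(psi[2]) ** 2)

# Population returns to |11> after one collective Rabi period 2*pi/(sqrt(2)*Omega).
T_collective = 2 * np.pi / (np.sqrt(2) * Omega)
psi_T = expm(-1j * H * T_collective) @ psi0
print(f"max population in |rr> over the sweep: {max_p_rr:.5f} (blockaded, ~(Omega/V)^2)")
print(f"|<11|psi(T)>|^2 after one collective Rabi period: {abs(psi_T[0])**2:.4f}")
```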
It is convenient to rewrite the two-atom Hamiltonian above in terms of the bright $|b\rangle = \tfrac{1}{\sqrt{2}}(|r1\rangle + |1r\rangle)$ and dark $|d\rangle = \tfrac{1}{\sqrt{2}}(|r1\rangle - |1r\rangle)$ basis states, along with $|11\rangle$. In this basis, the Hamiltonian is $H = -\Delta\,(|b\rangle\langle b| + |d\rangle\langle d|) + \tfrac{\sqrt{2}}{2}(\Omega\,|b\rangle\langle 11| + \Omega^*\,|11\rangle\langle b|)$. Note that the dark state is decoupled from the bright state and from the $|11\rangle$ state. Thus it can be ignored, and the effective evolution reduces to a two-level system consisting of the bright state and the $|11\rangle$ state. In this basis, the dressed eigenvalues and eigenvectors of the Hamiltonian are $E_{LS}^{(2)} = \tfrac{1}{2}(\Delta \pm \sqrt{2\Omega^2 + \Delta^2})$, $|\widetilde{11}\rangle = \cos(\theta/2)\,|11\rangle + \sin(\theta/2)\,|b\rangle$ and $|\tilde{b}\rangle = \cos(\theta/2)\,|b\rangle - \sin(\theta/2)\,|11\rangle$, where $\theta$ depends on the Rabi frequency and detuning. These considerations are used in the gates below. The level diagrams of these subspaces are shown in the figure above. The Rydberg blockade can be used to implement a controlled-phase gate by applying standard Rabi pulses between the $|1\rangle$ and $|r\rangle$ levels. Consider the following protocol: [ 5 ] a Rydberg $\pi$ pulse is applied to the control atom, then a $2\pi$ pulse to the target atom, and finally another $\pi$ pulse to the control atom. The figure on the right shows what this pulse sequence does. When the state is $|00\rangle$, both levels are uncoupled from the Rydberg states and the pulses do nothing. When either of the atoms is in the $|0\rangle$ state, the other one picks up a $-1$ phase due to the $2\pi$ pulse. When the state is $|11\rangle$, the second atom is off-resonant with its Rydberg state and thus does not pick up any phase, but the first one does. The truth table of this gate is given below. This is equivalent to a controlled-Z gate up to a local rotation of the hyperfine levels. The adiabatic gate was introduced as an alternative to the Jaksch gate. [ 21 ] It is global and symmetric, and thus it does not require locally focused lasers. Moreover, the adiabatic gate avoids the problem of spurious phase accumulation while the atom is in the Rydberg state. In the adiabatic gate, instead of applying fast pulses, the atoms are dressed with an adiabatic pulse sequence that takes each atom on a trajectory around the Bloch sphere and back. The levels pick up a phase on this trip due to the so-called "light shift" induced by the lasers. The shapes of the pulses can be chosen to control this phase. If the atoms are in the $|00\rangle$ state, nothing happens, so $|00\rangle \rightarrow |00\rangle$.
If one of the atoms is in the $|0\rangle$ state, the other atom picks up a phase due to the light shift: $|01\rangle \rightarrow e^{i\phi_1}|01\rangle$ and similarly $|10\rangle \rightarrow e^{i\phi_1}|10\rangle$, with $\phi_1 = \int E_{LS}^{(1)}(t)\,dt = \int \tfrac{1}{2}\big(\Delta(t) - \sqrt{\Omega^2(t) + \Delta^2(t)}\big)\,dt$. When both of the atoms are in the $|1\rangle$ state, they pick up a phase due to the two-atom light shift, as seen from the eigenvalues of the Hamiltonian above; then $|11\rangle \rightarrow e^{i\phi_2}|11\rangle$ with $\phi_2 = \int E_{LS}^{(2)}(t)\,dt = \int \tfrac{1}{2}\big(\Delta(t) - \sqrt{2\Omega^2(t) + \Delta^2(t)}\big)\,dt$. Note that this light shift is not equal to twice the single-atom light shift. The single-atom light shifts are then cancelled by a global pulse that implements $U = \exp(-i\phi_1 |1\rangle\langle 1|)$ to get rid of the single-qubit light shifts. The truth table for this gate is given to the right. This protocol leaves a total phase of $\int \big(E_{LS}^{(2)}(t) - 2E_{LS}^{(1)}(t)\big)\,dt$ on the $|11\rangle$ state. The pulses can be chosen so that this phase equals $\pi$, making it a controlled-Z gate. An extension of this gate was introduced to make it robust against errors. [ 22 ] The adiabatic gate is global, but it is slow (due to the adiabatic condition). The Levine-Pichler gate was introduced as a fast, diabatic substitute for the global adiabatic gate. [ 23 ] This gate uses carefully chosen pulse sequences to perform a controlled-phase gate. In this protocol, the following pulse sequence is applied. The intuition behind this gate is best understood in terms of the picture given above. When the state of the system is $|11\rangle$, the pulses send the state around the Bloch sphere twice and it accumulates a net phase $\phi_2 = \frac{4\pi\Delta}{\sqrt{\Delta^2 + 2\Omega^2}}$. When one of the atoms is in the $|0\rangle$ state, the other atom does not go fully around the Bloch sphere after the first pulse, due to the mismatch in Rabi frequency. The second pulse corrects for this effect by rotating the state around a different axis. This puts the atom back into the $|1\rangle$ state with a net phase $\phi_1$, which can be calculated easily. The pulses can be chosen to make $e^{i\phi_2} = e^{i(2\phi_1 + \pi)}$. Doing so makes this gate equivalent to a controlled-Z gate up to a local rotation. The truth table of the Levine-Pichler gate is given on the right. This gate has recently been improved using the methods of quantum optimal control. [ 24 ] [ 25 ] Entangling gates in state-of-the-art neutral atom quantum computing platforms have been implemented with up to 0.995 quantum fidelity. [ 10 ]
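Returning to the $\pi$–$2\pi$–$\pi$ blockade protocol described earlier, the sketch below simulates it for two three-level atoms ($|0\rangle$, $|1\rangle$, $|r\rangle$) with a resonant $|1\rangle \leftrightarrow |r\rangle$ drive and a strong $|rr\rangle$ interaction, and prints the phase acquired by each computational basis state. With an ideal blockade one expects $|00\rangle \rightarrow |00\rangle$ and a $-1$ phase on $|01\rangle$, $|10\rangle$ and $|11\rangle$, i.e. a controlled-Z gate up to single-qubit rotations. This is only an illustrative toy model; the Rabi frequency and interaction strength are placeholder values chosen so that $V \gg \Omega$.

```python
import numpy as np
from scipy.linalg import expm

# Jaksch-type pi - 2pi - pi Rydberg blockade gate for two three-level atoms.
# Levels per atom: index 0 -> |0>, 1 -> |1>, 2 -> |r>. Omega and V are placeholders with V >> Omega.
Omega = 2 * np.pi * 1.0
V = 2 * np.pi * 200.0

I3 = np.eye(3)
drive = np.zeros((3, 3))
drive[1, 2] = drive[2, 1] = Omega / 2.0          # resonant |1> <-> |r> coupling

P_r = np.zeros((3, 3)); P_r[2, 2] = 1.0
H_int = V * np.kron(P_r, P_r)                    # interaction only when both atoms are in |r>

def pulse(atom, angle):
    """Unitary for a Rabi pulse of the given area applied to one atom (0 = control, 1 = target)."""
    H_drive = np.kron(drive, I3) if atom == 0 else np.kron(I3, drive)
    t = angle / Omega                            # pulse duration giving the requested pulse area
    return expm(-1j * (H_drive + H_int) * t)

# pi pulse on control, then 2*pi pulse on target, then pi pulse on control (rightmost acts first).
U = pulse(0, np.pi) @ pulse(1, 2 * np.pi) @ pulse(0, np.pi)

for b1 in (0, 1):
    for b2 in (0, 1):
        idx = 3 * b1 + b2                        # index of |b1 b2> in the 9-dimensional space
        amp = U[idx, idx]
        print(f"|{b1}{b2}> -> amplitude {amp.real:+.3f}{amp.imag:+.3f}j "
              f"(phase {np.angle(amp):+.2f} rad)")
```

Running this prints an amplitude of approximately +1 for $|00\rangle$ and approximately $-1$ for the other three basis states, with small deviations of order $(\Omega/V)^2$ coming from imperfect blockade.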
https://en.wikipedia.org/wiki/Neutral_atom_quantum_computer
The neutral density ($\gamma^n$) or empirical neutral density is a density variable used in oceanography, introduced in 1997 by David R. Jackett and Trevor McDougall. [ 1 ] It is a function of the three state variables (salinity, temperature, and pressure) and of geographical location (longitude and latitude). It has the typical units of density (mass per volume). Isosurfaces of $\gamma^n$ form "neutral density surfaces", which are closely aligned with the "neutral tangent plane". It is widely believed, although this has yet to be rigorously proven, that the flow in the deep ocean is almost entirely aligned with the neutral tangent plane, and that strong lateral mixing occurs along this plane ("epineutral mixing") versus weak mixing across this plane ("dianeutral mixing"). These surfaces are widely used in water mass analyses. Neutral density is a density variable that depends on the particular state of the ocean, and hence it is also a function of time, though this is often ignored. In practice, its construction from a given hydrographic dataset is achieved by means of a computational code (available for Matlab and Fortran) that contains the computational algorithm developed by Jackett and McDougall. Use of this code is currently restricted to the present-day ocean. The neutral tangent plane is the plane along which a given water parcel can move infinitesimally while remaining neutrally buoyant with respect to its immediate environment. [ 1 ] This is well defined at every point in the ocean. A neutral surface is a surface that is everywhere parallel to the neutral tangent plane. McDougall [ 2 ] demonstrated that the neutral tangent plane, and hence also neutral surfaces, are normal to the dianeutral vector $\mathbf{N}$, which is proportional to $\beta\,\nabla S - \alpha\,\nabla\theta$, where $S$ is the salinity, $\theta$ is the potential temperature, $\alpha$ the thermal expansion coefficient and $\beta$ the saline contraction coefficient. Thus, neutral surfaces are defined as surfaces everywhere perpendicular to $\mathbf{N}$. The contribution to density caused by gradients of $S$ and $\theta$ within the surface exactly compensates. That is, with $\nabla_n$ the 2D gradient within the neutral surface, $\beta\,\nabla_n S - \alpha\,\nabla_n\theta = 0$ (1). If such a neutral surface exists, the neutral helicity $H = \mathbf{N} \cdot \nabla \times \mathbf{N}$ (related in form to hydrodynamical helicity) must be zero everywhere on that surface, a condition arising from the non-linearity of the equation of state. [ 3 ] A continuum of such neutral surfaces could be usefully represented as isosurfaces of a 3D scalar field $\gamma^n$ that satisfies [ 1 ] $\nabla\gamma^n = b\,\mathbf{N} + \mathcal{R}$ (2) if the residual $\mathcal{R} = 0$. Here, $b$ is an integrating scalar factor that is a function of space. A necessary condition for the existence of $\gamma^n$ with $\mathcal{R} = 0$ is that $H = 0$ everywhere in the ocean. [ 1 ] However, islands complicate the topology such that this is not a sufficient condition. [ 4 ] In the real ocean, the neutral helicity $H$ is generally small but not identically zero. [ 5 ] Therefore, it is impossible to create analytically a well-defined neutral surface, or a 3D neutral density variable such as $\gamma^n$. [ 6 ] There will always be flow through any well-defined surface, caused by neutral helicity.
Therefore, it is only possible to obtain approximately neutral surfaces, which are everywhere approximately perpendicular to $\mathbf{N}$. Similarly, it is only possible to define a $\gamma^n$ satisfying (2) with $\mathcal{R} \neq 0$. Numerical techniques can be used to solve the coupled system of first-order partial differential equations (2) while minimizing some norm of $\mathcal{R}$. Jackett and McDougall [ 1 ] provided such a $\gamma^n$ having small $\mathcal{R}$, and demonstrated that the inaccuracy due to the non-exact neutrality ($\mathcal{R} \neq 0$) is below the present instrumentation error in density. [ 7 ] Neutral density surfaces stay within a few tens of meters of an ideal neutral surface anywhere in the world. [ 8 ] Given how $\gamma^n$ has been defined, neutral density surfaces can be considered the continuous analog of the commonly used potential density surfaces, which are defined with respect to various discrete values of pressure (see for example [ 9 ] and [ 10 ]). Neutral density is a function of latitude and longitude. This spatial dependence is a fundamental property of neutral surfaces. From (1), the gradients of $S$ and $\theta$ within a neutral surface are aligned, hence their contours are aligned, hence there is a functional relationship between these variables on the neutral surface. However, this function is multivalued. It is only single-valued within regions where there is at most one contour of $\theta$ per $\theta$ value (or, equivalently, per $S$ value). Thus, the connectedness of level sets of $\theta$ on a neutral surface is a vital topological consideration. These regions are precisely those regions associated with the edges of the Reeb graph of $\theta$ on the surface, as shown by Stanley. [ 4 ] Given this spatial dependence, calculating neutral density requires knowledge of the spatial distribution of temperature and salinity in the ocean. Therefore, the definition of $\gamma^n$ has to be linked to a global hydrographic dataset, based on the climatology of the world's ocean (see World Ocean Atlas and [ 11 ]). In this way, the solution of (2) provides values of $\gamma^n$ for a referenced global dataset. The solution of the system for a high-resolution dataset would be computationally very expensive. In this case, the original dataset can be sub-sampled and (2) can be solved over a more limited set of data. Jackett and McDougall constructed the variable $\gamma^n$ using the data in the "Levitus dataset". [ 12 ] As this dataset consists of measurements of S and T at 33 standard depth levels at a 1° resolution, the solution of (2) for such a large dataset would be computationally very expensive. Therefore, they sub-sampled the original dataset onto a 4°×4° grid and solved (2) on the nodes of this grid. The authors suggested solving this system by using a combination of the method of characteristics in nearly 85% of the ocean (the characteristic surfaces of (2) are neutral surfaces along which $\gamma^n$ is constant) and the finite difference method in the remaining 15%.
The output of these calculations is a global dataset labeled with values of $\gamma^n$. The field of $\gamma^n$ values resulting from the solution of the differential system (2) satisfies (2) an order of magnitude better (on average) than the present instrumentation error in density. [ 13 ] The labeled dataset is then used to assign $\gamma^n$ values to arbitrary hydrographic data at new locations, where values are assigned as a function of depth by interpolation from the four closest points in the Levitus atlas. The formation of neutral density surfaces from a given hydrographic observation requires only a call to a computational code that contains the algorithm developed by Jackett and McDougall. [ 14 ] The Neutral Density code comes as a Matlab package or as a Fortran routine. It enables the user to fit neutral density surfaces to arbitrary hydrographic data, and just 2 MB of storage are required to obtain an accurately pre-labelled world ocean. The code then permits interpolation of the labeled data in terms of spatial location and hydrography. By taking a weighted average of the four closest casts from the labeled data set, it makes it possible to assign $\gamma^n$ values to any arbitrary hydrographic data. Another function provided in the code, given a vertical profile of labeled data and $\gamma^n$ surfaces, finds the positions of the specified $\gamma^n$ surfaces within the water column, together with error bars. Comparisons between the approximately neutral surfaces obtained by using the variable $\gamma^n$ and the previously common methods for obtaining discretely referenced neutral surfaces (see for example Reid (1994), [ 10 ] which proposed approximating neutral surfaces by a linked sequence of potential density surfaces referred to a discrete set of reference pressures) have shown an improvement in accuracy (by a factor of about 5) [ 15 ] and an easier and computationally less expensive algorithm for forming neutral surfaces. A neutral surface defined using $\gamma^n$ differs only slightly from an ideal neutral surface. In fact, if a parcel moves around a gyre on the neutral surface and returns to its starting location, its depth at the end will differ by around 10 m from the depth at the start. [ 8 ] If potential density surfaces are used instead, the difference can be hundreds of meters, a far larger error. [ 8 ]
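The defining compensation condition (1) can be checked numerically for any candidate surface. The sketch below is a toy illustration with synthetic fields and constant expansion/contraction coefficients (real applications would use the full equation of state and the Jackett–McDougall code described above); it evaluates $\beta\,\nabla_n S - \alpha\,\nabla_n\theta$ by finite differences and reports how far from neutral a surface is.

```python
import numpy as np

# Toy check of the within-surface compensation condition beta*grad(S) - alpha*grad(theta) ~ 0.
# alpha and beta are treated as constants here; a real calculation would use the equation of state.
alpha = 2.0e-4        # thermal expansion coefficient (1/K), illustrative constant
beta = 7.5e-4         # saline contraction coefficient (1/(g/kg)), illustrative constant

# Synthetic theta and S fields on a candidate "surface" (values on a longitude/latitude grid).
lon = np.linspace(0.0, 10.0, 50)
lat = np.linspace(-5.0, 5.0, 40)
LON, LAT = np.meshgrid(lon, lat)

theta = 10.0 + 0.5 * LON - 0.3 * LAT                     # degrees C
S = 35.0 + (alpha / beta) * (theta - 10.0)               # exactly density-compensating salinity
S_perturbed = S + 0.01 * np.sin(LON)                     # small non-neutral perturbation

def neutrality_residual(theta_field, s_field):
    """Return max |beta*grad_n(S) - alpha*grad_n(theta)| over the surface (per grid spacing)."""
    dtheta_dy, dtheta_dx = np.gradient(theta_field)
    ds_dy, ds_dx = np.gradient(s_field)
    res_x = beta * ds_dx - alpha * dtheta_dx
    res_y = beta * ds_dy - alpha * dtheta_dy
    return np.max(np.hypot(res_x, res_y))

print(f"residual, compensating surface: {neutrality_residual(theta, S):.2e}")
print(f"residual, perturbed surface   : {neutrality_residual(theta, S_perturbed):.2e}")
```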
https://en.wikipedia.org/wiki/Neutral_density
Neutral detergent fiber (NDF) is the most common measure of fiber used for animal feed analysis, but it does not represent a unique class of chemical compounds. NDF measures most of the structural components in plant cells (i.e. lignin, hemicellulose and cellulose), but not pectin. [ 1 ] [ 2 ] Further analysis can be done on the sample to determine individual components, such as acid detergent fiber (ADF) analysis. [ 3 ] The process of determining NDF content involves a neutral detergent that dissolves plant pectins, proteins, sugars and lipids. This leaves behind the fibrous parts, such as cellulose, lignin and hemicellulose. Recent nutritional requirement tables for ruminants report limits for NDF intake. The level of NDF in the animal ration influences the animal's intake of dry matter and the time of rumination. The concentration of NDF in feeds is negatively correlated with energy concentration.
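Because the detergent extraction leaves only the fibrous residue behind, NDF is reported gravimetrically as the residue weight expressed as a percentage of the dry sample weight. The snippet below is only a schematic of that arithmetic with made-up example weights; it is not a laboratory protocol.

```python
# Schematic NDF calculation from gravimetric data (all weights are made-up example values, in grams).
sample_dry_weight = 0.5000        # oven-dry feed sample
crucible_weight = 25.0000         # empty filter crucible
crucible_plus_residue = 25.2210   # crucible plus fiber residue after neutral-detergent extraction and drying

residue = crucible_plus_residue - crucible_weight
ndf_percent = 100.0 * residue / sample_dry_weight
print(f"NDF = {ndf_percent:.1f}% of dry matter")   # -> NDF = 44.2% of dry matter
```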
https://en.wikipedia.org/wiki/Neutral_detergent_fiber
Neutral mutations are changes in DNA sequence that are neither beneficial nor detrimental to the ability of an organism to survive and reproduce. In population genetics , mutations in which natural selection does not affect the spread of the mutation in a species are termed neutral mutations. Neutral mutations that are inheritable and not linked to any genes under selection will be lost or will replace all other alleles of the gene. That loss or fixation of the gene proceeds based on random sampling known as genetic drift . A neutral mutation that is in linkage disequilibrium with other alleles that are under selection may proceed to loss or fixation via genetic hitchhiking and/or background selection . While many mutations in a genome may decrease an organism’s ability to survive and reproduce, also known as fitness , those mutations are selected against and are not passed on to future generations . The most commonly-observed mutations that are detectable as variation in the genetic makeup of organisms and populations appear to have no visible effect on the fitness of individuals and are therefore neutral. The identification and study of neutral mutations has led to the development of the neutral theory of molecular evolution , which is an important and often-controversial theory that proposes that most molecular variation within and among species is essentially neutral and not acted on by selection. Neutral mutations are also the basis for using molecular clocks to identify such evolutionary events as speciation and adaptive or evolutionary radiations . Charles Darwin commented on the idea of neutral mutation in his work, hypothesizing that mutations that do not give an advantage or disadvantage may fluctuate or become fixed apart from natural selection . "Variations neither useful nor injurious would not be affected by natural selection, and would be left either a fluctuating element, as perhaps we see in certain polymorphic species, or would ultimately become fixed, owing to the nature of the organism and the nature of the conditions." While Darwin is widely credited with introducing the idea of natural selection which was the focus of his studies, he also saw the possibility for changes that did not benefit or hurt an organism. [ 1 ] Darwin's view of change being mostly driven by traits that provide advantage was widely accepted until the 1960s. [ 2 ] While researching mutations that produce nucleotide substitutions in 1968, Motoo Kimura found that the rate of substitution was so high that if each mutation improved fitness, the gap between the most fit and typical genotype would be implausibly large. However, Kimura explained this rapid rate of mutation by suggesting that the majority of mutations were neutral, i.e. had little or no effect on the fitness of the organism. Kimura developed mathematical models of the behavior of neutral mutations subject to random genetic drift in biological populations. This theory has become known as the neutral theory of molecular evolution. [ 3 ] As technology has allowed for better analysis of genomic data, research has continued in this area. While natural selection may encourage adaptation to a changing environment, neutral mutation may push divergence of species due to nearly random genetic drift. [ 2 ] Neutral mutation has become a part of the neutral theory of molecular evolution, proposed in the 1960s. This theory suggests that neutral mutations are responsible for a large portion of DNA sequence changes in a species. 
For example, bovine and human insulin, while differing in amino acid sequence, are still able to perform the same function. The amino acid substitutions between the species were therefore seen to be neutral, having no effect on the function of the protein. Neutral mutation and the neutral theory of molecular evolution are not separate from natural selection but add to Darwin's original thoughts. Mutations can give an advantage, create a disadvantage, or make no measurable difference to an organism's survival. [ 4 ] A number of observations associated with neutral mutation were predicted in neutral theory, including: amino acids with similar biochemical properties should be substituted more often than biochemically different amino acids; synonymous base substitutions should be observed more often than nonsynonymous substitutions; introns should evolve at the same rate as synonymous mutations in coding exons; and pseudogenes should also evolve at a similar rate. These predictions have been confirmed with the introduction of additional genetic data since the theory's introduction. [ 2 ] When an incorrect nucleotide is inserted during replication or transcription of a coding region, it can affect the eventual translation of the sequence into amino acids. Since multiple codons are used for the same amino acids, a change in a single base may still lead to translation of the same amino acid. This phenomenon is referred to as degeneracy, and it allows a variety of codon combinations to produce the same amino acid. For example, the codons TCT, TCC, TCA, TCG, AGT, and AGC all code for the amino acid serine. This can be explained by the wobble concept. Francis Crick proposed this hypothesis to explain why specific tRNA molecules could recognize multiple codons. The region of the tRNA that recognizes the codon, called the anticodon, is able to bind multiple interchangeable bases at its 5' end because of its spatial freedom. A fifth base, called inosine, can also be substituted on a tRNA and is able to pair with A, U, or C. This flexibility allows changes of bases in codons to lead to translation of the same amino acid. [ 5 ] The changing of a base in a codon without a change in the translated amino acid is called a synonymous mutation. Since the amino acid translated remains the same, a synonymous mutation has traditionally been considered a neutral mutation. [ 6 ] Some research has suggested that there is bias in the selection of base substitutions in synonymous mutation. This could be due to selective pressure to improve translation efficiency associated with the most available tRNAs, or simply to mutational bias. [ 7 ] If these mutations influence the rate of translation or an organism's ability to manufacture protein, they may actually influence the fitness of the affected organism. [ 6 ] While substitution of a base in a noncoding area of a genome may make little difference and be considered neutral, base substitutions in or around genes may impact the organism. Some base substitutions lead to synonymous mutation and no difference in the amino acid translated, as noted above. However, a base substitution can also change the genetic code so that a different amino acid is translated. This sort of substitution usually has a negative effect on the protein being formed and will be eliminated from the population through purifying selection. However, if the change has a positive influence, the mutation may become more and more common in a population until it becomes fixed in that population.
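The degeneracy described above is easy to check programmatically. The sketch below, in Python, tests whether a single-base substitution is synonymous by looking both codons up in a small excerpt of the standard genetic code; the table is deliberately partial (only the serine codons from the example above plus a few others), so treat it as an illustration rather than a complete translation table.

```python
# Partial excerpt of the standard genetic code (DNA codons -> amino acid).
# Only a handful of entries are included for illustration; a real analysis
# would use the full 64-codon table.
GENETIC_CODE = {
    "TCT": "Ser", "TCC": "Ser", "TCA": "Ser", "TCG": "Ser",
    "AGT": "Ser", "AGC": "Ser",
    "TTT": "Phe", "TTC": "Phe",
    "GCT": "Ala", "GCC": "Ala",
}

def is_synonymous(codon_before: str, codon_after: str) -> bool:
    """A substitution is synonymous if both codons translate to the same amino acid."""
    return GENETIC_CODE[codon_before] == GENETIC_CODE[codon_after]

print(is_synonymous("TCT", "TCC"))  # True: both encode serine
print(is_synonymous("TCT", "TTT"))  # False: serine -> phenylalanine
```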
Organisms changing via these two options comprise the classic view of natural selection. A third possibility is that the amino acid substitution makes little or no positive or negative difference to the affected protein. [ 13 ] Proteins demonstrate some tolerance to changes in amino acid structure. This is somewhat dependent on where in the protein the substitution takes place. If it occurs in an important structural area or in the active site, one amino acid substitution may inactivate or substantially change the functionality of the protein. Substitutions in other areas may be nearly neutral and drift randomly over time. [ 14 ] Neutral mutations are often measured in population and evolutionary genetics by looking at variation in populations. This variation has historically been measured by gel electrophoresis to determine allozyme frequencies. [ 15 ] Statistical analyses of these data are used to compare variation to predicted values based on population size, mutation rates and effective population size. Early observations indicating higher than expected heterozygosity and overall variation within the protein isoforms studied drove arguments as to the role of selection in maintaining this variation, versus the existence of variation through the effects of neutral mutations arising and their random distribution due to genetic drift. [ 16 ] [ 17 ] [ 18 ] The accumulation of data based on observed polymorphism led to the formation of the neutral theory of evolution. [ 16 ] According to the neutral theory of evolution, the rate of fixation in a population of a neutral mutation will be directly related to the rate of formation of the neutral allele. [ 19 ] In Kimura's original calculations, mutations with |2Ns| < 1, or equivalently |s| ≤ 1/(2N), are defined as neutral. [ 16 ] [ 18 ] In this expression, N is the effective population size, a quantitative measure of the idealized population size that assumes such constants as equal sex ratios and no migration, mutation or selection. [ 20 ] Conservatively, it is often assumed that the effective population size is approximately one fifth of the total population size. [ 21 ] s is the selection coefficient, a value between 0 and 1. It is a measurement of the contribution of a genotype to the next generation, where a value of 1 would be completely selected against and make no contribution, and 0 is not selected against at all. [ 22 ] This definition of neutral mutation has been criticized because very large effective population sizes can make mutations with small selection coefficients appear non-neutral. Additionally, mutations with high selection coefficients can appear neutral in very small populations. [ 18 ] The testable hypothesis of Kimura and others showed that polymorphism within species is approximately that which would be expected under a neutral evolutionary model. [ 18 ] [ 23 ] [ 24 ] For many molecular biology approaches, as opposed to mathematical genetics, neutral mutations are generally assumed to be those mutations that cause no appreciable effect on gene function. This simplification eliminates the effect of minor allelic differences in fitness and avoids problems when a selection has only a minor effect. [ 18 ] Early convincing evidence of this definition of neutral mutation was shown through the lower mutational rates in functionally important parts of genes such as cytochrome c versus less important parts [ 25 ] and the functionally interchangeable nature of mammalian cytochrome c in in vitro studies. [ 26 ]
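As a quick illustration of Kimura's criterion above, the short Python sketch below classifies a mutation as effectively neutral when |s| ≤ 1/(2N); the selection coefficient and population sizes used are made-up values for demonstration, not estimates from any real population.

```python
def is_effectively_neutral(s: float, effective_population_size: int) -> bool:
    """Kimura's criterion: a mutation behaves neutrally when |2Ns| < 1,
    i.e. |s| <= 1/(2N), so drift rather than selection dominates its fate."""
    return abs(s) <= 1.0 / (2 * effective_population_size)

# Hypothetical examples: the same selection coefficient can be
# effectively neutral in a small population but selected in a large one.
for n_e in (1_000, 1_000_000):
    print(n_e, is_effectively_neutral(s=1e-4, effective_population_size=n_e))
```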
Nonfunctional pseudogenes provide more evidence for the role of neutral mutations in evolution. The rates of mutation in mammalian globin pseudogenes have been shown to be much higher than the rates in functional genes. [ 27 ] [ 28 ] According to neo-Darwinian evolution, such mutations should rarely exist, as these sequences are functionless and positive selection would not be able to operate. [ 18 ] The McDonald–Kreitman test [ 29 ] has been used to study selection over long periods of evolutionary time. This is a statistical test that compares polymorphism in neutral and functional sites and estimates what fraction of substitutions have been acted on by positive selection. [ 30 ] The test often uses synonymous substitutions in protein-coding genes as the neutral component; however, synonymous mutations have been shown to be under purifying selection in many instances. [ 31 ] [ 32 ] Molecular clocks can be used to estimate the amount of time since the divergence of two species and for placing evolutionary events in time. [ 33 ] Pauling and Zuckerkandl proposed the idea of the molecular clock in 1962, based on the observation that the random mutation process occurs at an approximately constant rate. Individual proteins were shown to have linear rates of amino acid change over evolutionary time. [ 34 ] Despite controversy from some biologists arguing that morphological evolution would not proceed at a constant rate, many amino acid changes were shown to accumulate in a constant fashion. Kimura and Ohta explained these rates as part of the framework of the neutral theory. These mutations were reasoned to be neutral, as positive selection should be rare and deleterious mutations should be eliminated quickly from a population. [ 35 ] By this reasoning, the accumulation of these neutral mutations should be influenced only by the mutation rate. Therefore, the neutral mutation rate in individual organisms should match the molecular evolution rate in species over evolutionary time. The neutral mutation rate is affected by the number of neutral sites in a protein or DNA sequence versus the number of mutations in sites that are functionally constrained. By quantifying these neutral mutations in protein and/or DNA and comparing them between species or other groups of interest, rates of divergence can be determined. [ 33 ] [ 36 ] Molecular clocks have caused controversy due to the dates they derive for events such as the explosive radiations seen after extinction events, like the Cambrian explosion and the radiations of mammals and birds. Two-fold differences exist between dates derived from molecular clocks and the fossil record. While some paleontologists argue that molecular clocks are systemically inaccurate, others attribute the discrepancies to a lack of robust fossil data and bias in sampling. [ 37 ] While not without controversy and discrepancies with the fossil record, data from molecular clocks have shown how evolution is dominated by the mechanisms of a neutral model and is less influenced by the action of natural selection. [ 33 ]
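A molecular-clock estimate of the kind described above can be sketched in a few lines: under the neutral model, the expected number of substitutions per site separating two lineages is roughly 2rt (each lineage accumulates substitutions at rate r for time t), so an observed divergence K gives t ≈ K/(2r). The rate and divergence values below are invented for illustration.

```python
def divergence_time(substitutions_per_site: float, rate_per_site_per_year: float) -> float:
    """Neutral molecular clock: K ~= 2 * r * t for two lineages diverging
    from a common ancestor, so t ~= K / (2 * r)."""
    return substitutions_per_site / (2.0 * rate_per_site_per_year)

# Hypothetical numbers: 2% sequence divergence at a neutral rate of 1e-9
# substitutions per site per year gives roughly 10 million years.
print(divergence_time(substitutions_per_site=0.02, rate_per_site_per_year=1e-9))
```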
https://en.wikipedia.org/wiki/Neutral_mutation
A neutral network is a set of genes all related by point mutations that have equivalent function or fitness. [ 1 ] Each node represents a gene sequence and each line represents the mutation connecting two sequences. Neutral networks can be thought of as high, flat plateaus in a fitness landscape. During neutral evolution, genes can randomly move through neutral networks and traverse regions of sequence space, which may have consequences for robustness and evolvability. Neutral networks exist in fitness landscapes because proteins are robust to mutations. This leads to extended networks of genes of equivalent function, linked by neutral mutations. [ 2 ] [ 3 ] Proteins are resistant to mutations because many sequences can fold into highly similar structural folds. [ 4 ] A protein adopts a limited ensemble of native conformations because those conformers have lower energy than unfolded and mis-folded states (ΔΔG of folding). [ 5 ] [ 6 ] This is achieved by a distributed, internal network of cooperative interactions (hydrophobic, polar and covalent). [ 7 ] Protein structural robustness results from few single mutations being sufficiently disruptive to compromise function. Proteins have also evolved to avoid aggregation, [ 8 ] as partially folded proteins can combine to form large, repeating, insoluble protein fibrils and masses. [ 9 ] There is evidence that proteins show negative design features to reduce the exposure of aggregation-prone beta-sheet motifs in their structures. [ 10 ] Additionally, there is some evidence that the genetic code itself may be optimised such that most point mutations lead to similar amino acids (conservative). [ 11 ] [ 12 ] Together these factors create a distribution of fitness effects of mutations that contains a high proportion of neutral and nearly neutral mutations. [ 13 ] Neutral networks are a subset of the sequences in sequence space that have equivalent function, and so form a wide, flat plateau in a fitness landscape. Neutral evolution can therefore be visualised as a population diffusing from one set of sequence nodes, through the neutral network, to another cluster of sequence nodes. Since the majority of evolution is thought to be neutral, [ 14 ] [ 15 ] a large proportion of gene change is movement through expansive neutral networks. The more neutral neighbours a sequence has, the more robust to mutations it is, since mutations are more likely to simply convert it neutrally into an equally functional sequence. [ 1 ] Indeed, if there are large differences between the number of neutral neighbours of different sequences within a neutral network, the population is predicted to evolve towards these robust sequences. This is sometimes called circum-neutrality and represents the movement of populations away from cliffs in the fitness landscape. [ 16 ] In addition to in silico models, [ 17 ] these processes are beginning to be confirmed by experimental evolution of cytochrome P450s [ 18 ] and β-lactamase. [ 19 ] Interest in the interplay between genetic drift and selection has been around since the 1930s, when the shifting-balance theory proposed that in some situations genetic drift could facilitate later adaptive evolution. [ 20 ] Although the specifics of the theory were largely discredited, [ 21 ] it drew attention to the possibility that drift could generate cryptic variation that, though neutral to current function, may affect selection for new functions (evolvability). [ 22 ]
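The notion of "neutral neighbours" above can be made concrete with a toy model. In the sketch below, a genotype's robustness is the fraction of its one-point mutants that keep the same fitness under an arbitrary, made-up fitness function; everything here (the alphabet, the fitness rule) is an illustrative assumption, not a model of any real protein or RNA.

```python
from typing import Callable

ALPHABET = "ACGU"  # toy RNA-like alphabet

def one_point_mutants(seq: str):
    """Yield every sequence that differs from seq at exactly one position."""
    for i, original in enumerate(seq):
        for letter in ALPHABET:
            if letter != original:
                yield seq[:i] + letter + seq[i + 1:]

def robustness(seq: str, fitness: Callable[[str], float]) -> float:
    """Fraction of one-point mutants that are neutral (same fitness as seq)."""
    mutants = list(one_point_mutants(seq))
    neutral = sum(1 for m in mutants if fitness(m) == fitness(seq))
    return neutral / len(mutants)

# Made-up fitness rule: only the number of G and C letters matters, so any
# substitution preserving that count is neutral.
toy_fitness = lambda s: s.count("G") + s.count("C")
print(robustness("GACU", toy_fitness))
```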
By definition, all genes in a neutral network have equivalent function; however, some may exhibit promiscuous activities which could serve as starting points for adaptive evolution towards new functions. [ 23 ] [ 24 ] In terms of sequence space, current theories predict that if the neutral networks for two different activities overlap, a neutrally evolving population may diffuse to regions of the neutral network of the first activity that allow it to access the second. [ 25 ] This would only be the case when the distance between activities is smaller than the distance that a neutrally evolving population can cover. The degree of interpenetration of the two networks will determine how common cryptic variation for the promiscuous activity is in sequence space. [ 26 ] The idea that neutral mutations were probably widespread was proposed by Freese and Yoshida in 1965. [ 27 ] Motoo Kimura later crystallized a theory of neutral evolution in 1968, [ 28 ] with King and Jukes independently proposing a similar theory in 1969. [ 29 ] Kimura computed the rate of nucleotide substitutions in a population (i.e. the average time for one base-pair replacement to occur within a genome) and found it to be roughly one substitution every 1.8 years. Such a high rate would not be tolerated by any mammalian population according to Haldane's formula. He thus concluded that, in mammals, neutral (or nearly neutral) nucleotide substitution mutations of DNA must dominate. He computed that such mutations were occurring at a rate of roughly 0.5 per year per gamete. In later years, a new paradigm emerged that placed RNA as a precursor molecule to DNA. A primordial-molecule principle was put forth as early as 1968 by Crick, [ 30 ] and led to what is now known as the RNA world hypothesis. [ 31 ] DNA is found predominantly as fully base-paired double helices, while biological RNA is single stranded and often exhibits complex base-pairing interactions. These are due to its increased ability to form hydrogen bonds, a fact which stems from the existence of the extra hydroxyl group in the ribose sugar. In the 1970s, Stein and M. Waterman laid the groundwork for the combinatorics of RNA secondary structures. [ 32 ] Waterman gave the first graph-theoretic description of RNA secondary structures and their associated properties, and used them to produce an efficient minimum free energy (MFE) folding algorithm. [ 33 ] An RNA secondary structure can be viewed as a diagram over N labeled vertices with its Watson–Crick base pairs represented as non-crossing arcs in the upper half plane. Therefore, a secondary structure is a scaffold having many sequences compatible with its implied base-pairing constraints. Later, Smith and Waterman developed an algorithm that performed local sequence alignment. [ 34 ] Another prediction algorithm for RNA secondary structure was given by Nussinov. [ 35 ] Nussinov's algorithm described the folding problem over a two-letter alphabet as a planar graph optimization problem, where the quantity to be maximized is the number of matchings in the sequence string. In 1980, Howell et al. computed a generating function of all foldings of a sequence, [ 36 ] while D. Sankoff (1985) described algorithms for the alignment of finite sequences, the prediction of RNA secondary structures (folding), and the reconstruction of proto-sequences on a phylogenetic tree. [ 37 ] Later, Waterman and Temple (1986) produced a polynomial time dynamic programming (DP) algorithm for predicting general RNA secondary structure. [ 38 ]
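The base-pair-maximization problem that Nussinov's algorithm (mentioned above) addresses can be written as a short dynamic program. The sketch below is a minimal Python version that only counts the maximum number of non-crossing Watson–Crick (and wobble) pairs; it ignores energies, minimum loop sizes and traceback, so it illustrates the recursion rather than reimplementing the published algorithm.

```python
def max_pairs(seq: str) -> int:
    """Nussinov-style DP: N[i][j] = max number of non-crossing base pairs
    in seq[i..j], built up over increasingly long subsequences."""
    allowed = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):                    # interval length minus one
        for i in range(n - span):
            j = i + span
            best = N[i + 1][j]                  # leave base i unpaired
            for k in range(i + 1, j + 1):       # or pair base i with some k
                if (seq[i], seq[k]) in allowed:
                    left = N[i + 1][k - 1] if k > i + 1 else 0
                    right = N[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            N[i][j] = best
    return N[0][n - 1]

print(max_pairs("GGGAAAUCC"))  # small example sequence
```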
In 1990, John McCaskill presented a polynomial time DP algorithm for computing the full equilibrium partition function of an RNA secondary structure. [ 39 ] This changed the dominant calculation of RNA folding from a mapping of a sequence to a particular structure into a mapping of a sequence to a whole weighted ensemble of structures; because RNA fitness depends on sequence via folding, this smoothing makes nearly neutral networks more extensive. M. Zuker implemented algorithms for the computation of MFE RNA secondary structures [ 40 ] based on the work of Nussinov et al., [ 35 ] Smith and Waterman [ 34 ] and Studnicka et al. [ 41 ] Later, L. Hofacker et al. (1994) [ 42 ] presented the Vienna RNA package, a software package that integrated MFE folding and the computation of the partition function as well as base-pairing probabilities. Peter Schuster and W. Fontana (1994) shifted the focus towards sequence-to-structure maps (genotype–phenotype). They used an inverse folding algorithm to produce computational evidence that RNA sequences sharing the same structure are distributed randomly in sequence space. They observed that common structures can be reached from a random sequence by just a few mutations. These two facts led them to conclude that sequence space seemed to be percolated by neutral networks of nearest-neighbour mutants that fold to the same structure. [ 43 ] In 1997, C. Reidys, Stadler and Schuster laid the mathematical foundations for the study and modelling of neutral networks of RNA secondary structures. Using a random graph model they proved the existence of a threshold value for the connectivity of random sub-graphs in a configuration space, parametrized by λ, the fraction of neutral neighbors. They showed that the networks are connected and percolate sequence space if the fraction of neutral nearest neighbors exceeds λ*, a threshold value. Below this threshold the networks are partitioned into a largest giant component and several smaller ones. Key results of this analysis were concerned with threshold functions for the density and connectivity of neutral networks, as well as Schuster's shape space conjecture. [ 43 ] [ 44 ] [ 45 ]
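The threshold behaviour described by Reidys, Stadler and Schuster can be explored numerically. The sketch below builds random "neutral sets" in a small sequence space (each sequence is declared neutral with probability λ) and checks whether the resulting network is connected under single-point mutations. The alphabet size, sequence length and trial counts are small arbitrary choices so the enumeration stays cheap; the experiment only illustrates how connectivity changes with λ and does not reproduce their analytical results.

```python
import itertools, random
from collections import deque

def neutral_set_is_connected(n: int, kappa: int, lam: float, rng: random.Random) -> bool:
    """Declare each of the kappa**n sequences neutral with probability lam,
    then test whether the neutral sequences form one connected component
    under single-point mutations (breadth-first search)."""
    neutral = {s for s in itertools.product(range(kappa), repeat=n) if rng.random() < lam}
    if not neutral:
        return True
    start = next(iter(neutral))
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for pos in range(n):
            for letter in range(kappa):
                if letter != s[pos]:
                    t = s[:pos] + (letter,) + s[pos + 1:]
                    if t in neutral and t not in seen:
                        seen.add(t)
                        queue.append(t)
    return len(seen) == len(neutral)

rng = random.Random(1)
for lam in (0.2, 0.4, 0.6, 0.8):
    hits = sum(neutral_set_is_connected(n=6, kappa=2, lam=lam, rng=rng) for _ in range(50))
    print(f"lambda = {lam:.1f}: connected in {hits}/50 random neutral sets")
```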
https://en.wikipedia.org/wiki/Neutral_network_(evolution)
In physics, a neutral particle is a particle without an electric charge, such as a neutron. Long-lived neutral particles provide a challenge in the construction of particle detectors, because they do not interact electromagnetically, except possibly through their magnetic moments. This means that they do not leave tracks of ionized particles or curve in magnetic fields. Examples of such particles include photons, [ PDG 1 ] neutrons, [ PDG 2 ] and neutrinos. [ PDG 3 ] Other neutral particles are very short-lived and decay before they could be detected, even if they were charged; they have been observed only indirectly.
https://en.wikipedia.org/wiki/Neutral_particle
In particle physics, neutral particle oscillation is the transmutation of a particle with zero electric charge into another neutral particle due to a change of a non-zero internal quantum number, via an interaction that does not conserve that quantum number. Neutral particle oscillations were first investigated in 1954 by Murray Gell-Mann and Abraham Pais. [ 1 ] For example, a neutron cannot transmute into an antineutron, as that would violate the conservation of baryon number. But in those hypothetical extensions of the Standard Model which include interactions that do not strictly conserve baryon number, neutron–antineutron oscillations are predicted to occur. [ 2 ] [ 3 ] [ 4 ] There is a project to search for neutron–antineutron oscillations using ultracold neutrons. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Such oscillations can be classified into two types: oscillation between a particle and its antiparticle, and oscillation between different flavour states of a particle. In those cases where the particles decay to some final product, the system is not purely oscillatory, and an interference between oscillation and decay is observed. After the striking evidence for parity violation provided by Wu et al. in 1957, it was assumed that CP (charge conjugation–parity) is the quantity which is conserved. [ 10 ] However, in 1964 Cronin and Fitch reported CP violation in the neutral kaon system. [ 11 ] They observed the long-lived K L (with CP = −1) undergoing decays into two pions (with CP = [−1]·[−1] = +1), thereby violating CP conservation. In 2001, CP violation in the B⁰ ⇄ B̄⁰ system was confirmed by the BaBar and Belle experiments. [ 12 ] [ 13 ] Direct CP violation in the B⁰ ⇄ B̄⁰ system was reported by both labs by 2005. [ 14 ] [ 15 ] The K⁰ ⇄ K̄⁰ and B⁰ ⇄ B̄⁰ systems can be studied as two-state systems, considering the particle and its antiparticle as two states of a single particle. The pp chain in the Sun produces an abundance of ν e . In 1968, R. Davis et al. first reported the results of the Homestake experiment. [ 16 ] [ 17 ] Also known as the Davis experiment, it used a huge tank of perchloroethylene in the Homestake mine in South Dakota (deep underground, to eliminate background from cosmic rays). Chlorine nuclei in the perchloroethylene absorb ν e to produce argon, via a reaction which is essentially ν e + ³⁷Cl → ³⁷Ar + e⁻. The experiment collected argon for several months. Because the neutrino interacts very weakly, only about one argon atom was collected every two days. The total accumulation was about one third of Bahcall's theoretical prediction. In 1968, Bruno Pontecorvo showed that if neutrinos are not considered massless, then ν e (produced in the Sun) can transform into some other neutrino species (ν μ or ν τ ), to which the Homestake detector was insensitive. This explained the deficit in the results of the Homestake experiment. The final confirmation of this solution to the solar neutrino problem was provided in April 2002 by the SNO ( Sudbury Neutrino Observatory ) collaboration, which measured both the ν e flux and the total neutrino flux. [ 19 ] This "oscillation" between the neutrino species can first be studied considering any two, and then generalized to the three known flavors. Let $H_0$ be the Hamiltonian of the two-state system, and $|1\rangle$ and $|2\rangle$ be its orthonormal eigenvectors with eigenvalues $E_1$ and $E_2$ respectively. Let $|\Psi(t)\rangle$ be the state of the system at time $t$.
If the system starts as an energy eigenstate of $H_0$, for example

$$|\Psi(0)\rangle = |1\rangle,$$

then the time-evolved state, which is the solution of the Schrödinger equation

$$\hat{H}_0\,|\Psi(t)\rangle = i\hbar\,\frac{\partial}{\partial t}|\Psi(t)\rangle \qquad (1)$$

will be [ 20 ]

$$|\Psi(t)\rangle = e^{-iE_1 t/\hbar}\,|1\rangle.$$

But this is physically the same as $|1\rangle$, since the exponential term is just a phase factor: it does not produce an observable new state. In other words, energy eigenstates are stationary eigenstates, that is, they do not yield observably distinct new states under time evolution. Define $\{\,|1\rangle, |2\rangle\,\}$ to be a basis in which the unperturbed Hamiltonian operator $H_0$ is diagonal, $H_0 = \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix}$. It can be shown that oscillation between states will occur if and only if the off-diagonal terms of the Hamiltonian are not zero. Hence let us introduce a general perturbation $W$ imposed on $H_0$ such that the resultant Hamiltonian $H$ is still Hermitian. Then, with $W_{11}, W_{22} \in \mathbb{R}$ and $W_{12} \in \mathbb{C}$,

$$H = H_0 + W = \begin{pmatrix} E_1 + W_{11} & W_{12} \\ W_{12}^{*} & E_2 + W_{22} \end{pmatrix} \qquad (2)$$

The eigenvalues of the perturbed Hamiltonian $H$ then change to $E_+$ and $E_-$, where [ 21 ]

$$E_{\pm} = \frac{1}{2}\left[\,E_1 + W_{11} + E_2 + W_{22} \pm \sqrt{\left(E_1 + W_{11} - E_2 - W_{22}\right)^2 + 4\,|W_{12}|^2}\,\right] \qquad (3)$$

Since $H$ is a general Hamiltonian matrix, it can be written as [ 22 ]

$$H' = \vec{a}\cdot\vec{\sigma} = |a|\,\hat{n}\cdot\vec{\sigma},$$

where $\hat{n}$ is a real unit vector in 3 dimensions in the direction of $\vec{a} = (a_1, a_2, a_3)$, and

$$\sigma_0 = I = \begin{pmatrix}1&0\\0&1\end{pmatrix},\quad \sigma_1 = \sigma_x = \begin{pmatrix}0&1\\1&0\end{pmatrix},\quad \sigma_2 = \sigma_y = i\begin{pmatrix}0&-1\\1&0\end{pmatrix},\quad \sigma_3 = \sigma_z = \begin{pmatrix}1&0\\0&-1\end{pmatrix}$$

are the Pauli spin matrices.
The following two results are clear. With the following parametrization [ 22 ] (this parametrization helps as it normalizes the eigenvectors and also introduces an arbitrary phase $\phi$, making the eigenvectors most general), $W_{12} = |W_{12}|\,e^{i\phi}$, and using the above pair of results, the orthonormal eigenvectors of $H'$, and consequently those of $H$, are obtained as

$$\begin{aligned} |+\rangle &= \begin{pmatrix} \cos\tfrac{\theta}{2}\,e^{-i\phi/2} \\ \sin\tfrac{\theta}{2}\,e^{+i\phi/2} \end{pmatrix} \equiv \cos\tfrac{\theta}{2}\,e^{-i\phi/2}\,|1\rangle + \sin\tfrac{\theta}{2}\,e^{+i\phi/2}\,|2\rangle \\ |-\rangle &= \begin{pmatrix} -\sin\tfrac{\theta}{2}\,e^{-i\phi/2} \\ \cos\tfrac{\theta}{2}\,e^{+i\phi/2} \end{pmatrix} \equiv -\sin\tfrac{\theta}{2}\,e^{-i\phi/2}\,|1\rangle + \cos\tfrac{\theta}{2}\,e^{+i\phi/2}\,|2\rangle \end{aligned} \qquad (4)$$

Writing the eigenvectors of $H_0$ in terms of those of $H$, we get

$$\begin{aligned} |1\rangle &= e^{+i\phi/2}\left(\cos\tfrac{\theta}{2}\,|+\rangle - \sin\tfrac{\theta}{2}\,|-\rangle\right) \\ |2\rangle &= e^{-i\phi/2}\left(\sin\tfrac{\theta}{2}\,|+\rangle + \cos\tfrac{\theta}{2}\,|-\rangle\right) \end{aligned} \qquad (5)$$

Now if the particle starts out as an eigenstate of $H_0$ (say, $|1\rangle$), that is $|\Psi(0)\rangle = |1\rangle$, then under time evolution we get [ 21 ]

$$|\Psi(t)\rangle = e^{+i\phi/2}\left(\cos\tfrac{\theta}{2}\,e^{-iE_+t/\hbar}\,|+\rangle - \sin\tfrac{\theta}{2}\,e^{-iE_-t/\hbar}\,|-\rangle\right),$$

which, unlike the previous case, is distinctly different from $|1\rangle$. We can then obtain the probability of finding the system in state $|2\rangle$ at time $t$ as [ 21 ]

$$\begin{aligned} P_{21}(t) &= \bigl|\langle 2|\Psi(t)\rangle\bigr|^2 = \sin^2\theta\,\sin^2\!\left(\frac{E_+ - E_-}{2\hbar}\,t\right) \\ &= \frac{4|W_{12}|^2}{4|W_{12}|^2 + (E_1 - E_2)^2}\,\sin^2\!\left(\frac{\sqrt{4|W_{12}|^2 + (E_1 - E_2)^2}}{2\hbar}\,t\right) \end{aligned} \qquad (6)$$

which is called Rabi's formula.
Hence, starting from one eigenstate of the unperturbed Hamiltonian $H_0$, the state of the system oscillates between the eigenstates of $H_0$ with a frequency (known as the Rabi frequency)

$$\omega = \frac{E_+ - E_-}{2\hbar} = \frac{\sqrt{4|W_{12}|^2 + (E_1 - E_2)^2}}{2\hbar} \qquad (7)$$

From equation (6) for $P_{21}(t)$, we can conclude that oscillation will exist only if $|W_{12}|^2 \neq 0$. $W_{12}$ is therefore known as the coupling term, as it connects the two eigenstates of the unperturbed Hamiltonian $H_0$ and thereby facilitates oscillation between the two. Oscillation will also cease if the eigenvalues of the perturbed Hamiltonian $H$ are degenerate, i.e. $E_+ = E_-$. But this is a trivial case, as in such a situation the perturbation itself vanishes, $H$ takes the (diagonal) form of $H_0$, and we're back to square one. Hence, the necessary conditions for oscillation are a non-zero coupling ($|W_{12}| \neq 0$) and non-degenerate eigenvalues of the perturbed Hamiltonian ($E_+ \neq E_-$). If the particle(s) under consideration undergoes decay, then the Hamiltonian describing the system is no longer Hermitian. [ 23 ] Since any matrix can be written as a sum of its Hermitian and anti-Hermitian parts, $H$ can be written as

$$H = M - \frac{i}{2}\Gamma,$$

where $M$ and $\Gamma$ are Hermitian. Hence $M_{21} = M_{12}^{*}$ and $\Gamma_{21} = \Gamma_{12}^{*}$. CPT conservation (symmetry) implies that the Hamiltonian $H$, and hence $M$ and $\Gamma$, are invariant under the CPT transformation $\Theta$, where $\Theta$ is an anti-unitary operator. [ 24 ] From the relation it satisfies, it follows that $M_{11} = M_{22}$, and similarly for the diagonal elements of $\Gamma$ ($\Gamma_{11} = \Gamma_{22}$). Hermiticity of $M$ and $\Gamma$ also implies that their diagonal elements are real. The eigenvalues of $H$ are

$$\begin{aligned} \mu_{\mathsf H} &= M_{11} - \tfrac{i}{2}\Gamma_{11} + \tfrac{1}{2}\left(\Delta m - \tfrac{i}{2}\Delta\Gamma\right), \\ \mu_{\mathsf L} &= M_{11} - \tfrac{i}{2}\Gamma_{11} - \tfrac{1}{2}\left(\Delta m - \tfrac{i}{2}\Delta\Gamma\right) \end{aligned} \qquad (8)$$

The suffixes stand for Heavy and Light respectively (by convention), and this implies that $\Delta m$ is positive.
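Returning to the oscillation formula: Rabi's formula (6) and the frequency (7) are easy to evaluate numerically. The short Python sketch below computes P21(t) for an assumed two-state system; the energies and coupling used are arbitrary illustrative numbers (in units where ħ = 1), not parameters of any particular meson or neutrino system.

```python
import math

def rabi_probability(t: float, e1: float, e2: float, w12_abs: float) -> float:
    """P_21(t) from equation (6), in units where hbar = 1:
    amplitude 4|W12|^2 / (4|W12|^2 + (E1-E2)^2), angular frequency from (7)."""
    splitting_sq = 4 * w12_abs**2 + (e1 - e2)**2
    amplitude = 4 * w12_abs**2 / splitting_sq
    omega = math.sqrt(splitting_sq) / 2.0
    return amplitude * math.sin(omega * t)**2

# Assumed toy parameters: E1 = 1.0, E2 = 1.2, |W12| = 0.05 (hbar = 1).
for t in (0.0, 5.0, 10.0, 15.0):
    print(t, round(rabi_probability(t, 1.0, 1.2, 0.05), 4))
```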
The normalized eigenstates corresponding to $\mu_{\mathsf L}$ and $\mu_{\mathsf H}$, respectively, in the natural basis $\{\,|P\rangle, |\bar P\rangle\,\} \equiv \{\,(1,0), (0,1)\,\}$ are

$$\begin{aligned} |P_{\mathsf L}\rangle &= p\,|P\rangle + q\,|\bar P\rangle \\ |P_{\mathsf H}\rangle &= p\,|P\rangle - q\,|\bar P\rangle \end{aligned} \qquad (9)$$

where $p$ and $q$ are the mixing terms. Note that these eigenstates are no longer orthogonal. Let the system start in the state $|P\rangle$. Under time evolution it then develops a $|\bar P\rangle$ component; similarly, if the system starts in the state $|\bar P\rangle$, it develops a $|P\rangle$ component. If $|P\rangle$ and $|\bar P\rangle$ represent CP-conjugate states (i.e. particle and antiparticle) of one another, i.e. $CP|P\rangle = e^{i\delta}|\bar P\rangle$ and $CP|\bar P\rangle = e^{-i\delta}|P\rangle$, and certain other conditions are met, then CP violation can be observed as a result of this phenomenon. Depending on the condition, CP violation can be classified into three types: [ 23 ] [ 25 ] CP violation through decay, through mixing, and through the interference of mixing and decay, each of which is considered in turn below. Consider the processes where $\{\,|P\rangle, |\bar P\rangle\,\}$ decay to final states $\{\,|f\rangle, |\bar f\rangle\,\}$, where the barred and unbarred kets of each set are CP conjugates of one another. The probability of $|P\rangle$ decaying to $|f\rangle$ and the probability of its CP-conjugate process can be compared. If there is no CP violation due to mixing, then $|q/p| = 1$. The two probabilities are then unequal if

$$\left|\frac{\bar A_{\bar f}}{A_f}\right| \neq 1 \quad\text{and}\quad \left|\frac{A_{\bar f}}{\bar A_f}\right| \neq 1 \qquad (10)$$

Hence, the decay becomes a CP-violating process, as the probability of a decay and that of its CP-conjugate process are not equal. Likewise, the probability (as a function of time) of observing $|\bar P\rangle$ starting from $|P\rangle$ can be compared with that of its CP-conjugate process. These two probabilities are unequal if

$$\left|\frac{q}{p}\right| \neq 1 \qquad (11)$$

Hence, the particle–antiparticle oscillation becomes a CP-violating process, as the particle and its antiparticle (say, $|P\rangle$ and $|\bar P\rangle$ respectively) are no longer equivalent eigenstates of CP. Let $|f\rangle$ be a final state (a CP eigenstate) that both $|P\rangle$ and $|\bar P\rangle$ can decay to.
Then the decay probabilities of $|P\rangle$ and $|\bar P\rangle$ to $|f\rangle$ can be compared. From these two quantities it can be seen that even when there is no CP violation through mixing alone (i.e. $|q/p| = 1$) and no CP violation through decay alone (i.e. $|\bar A_f/A_f| = 1$), and thus $|\lambda_f| = 1$, the probabilities will still be unequal, provided that

$$\mathcal{I}\!m\{\lambda_f\} = \mathcal{I}\!m\!\left\{\frac{q}{p}\,\frac{\bar A_f}{A_f}\right\} \neq 0 \qquad (12)$$

The last terms in the expressions for the two probabilities are thus associated with interference between mixing and decay. Usually, an alternative classification of CP violation is made. [ 25 ] Considering a strong coupling between two flavor eigenstates of neutrinos (for example, ν e –ν μ , ν μ –ν τ , etc.) and a very weak coupling to the third (that is, the third does not affect the interaction between the other two), equation (6) gives the probability of a neutrino of type α transmuting into type β in terms of the energy eigenvalues $E_+$ and $E_-$. With $E \simeq pc$ and $t \simeq x/c$, where $p$ is the momentum with which the neutrino was created, this probability can be written as

$$P_{\beta\alpha}(x) = \sin^2\theta\,\sin^2\!\left(\frac{\Delta m^2 c^3}{4E\hbar}\,x\right) = \sin^2\theta\,\sin^2\!\left(\frac{2\pi}{\lambda_{\text{osc}}}\,x\right) \qquad (13)$$

where

$$\lambda_{\text{osc}} = \frac{8\pi E\hbar}{\Delta m^2 c^3}$$

is the oscillation wavelength. Thus, a coupling between the energy (mass) eigenstates produces the phenomenon of oscillation between the flavor eigenstates. One important inference is that neutrinos have a finite mass, although a very small one. Hence, their speed is not exactly the same as that of light but slightly lower. With three flavors of neutrinos, there are three mass splittings, $(\Delta m^2)_{12}$, $(\Delta m^2)_{23}$ and $(\Delta m^2)_{31}$, but only two of them are independent, because

$$(\Delta m^2)_{12} + (\Delta m^2)_{23} + (\Delta m^2)_{31} = 0.$$

This implies that two of the three neutrinos have very closely placed masses. Since only two of the three $\Delta m^2$ are independent, and the expression for the probability in equation (13) is not sensitive to the sign of $\Delta m^2$ (as sine squared is independent of the sign of its argument), it is not possible to determine the neutrino mass spectrum uniquely from the phenomenon of flavor oscillation: any two out of the three can have closely spaced masses. Moreover, since the oscillation is sensitive only to the differences (of the squares) of the masses, direct determination of the neutrino mass is not possible from oscillation experiments. Equation (13) indicates that an appropriate length scale of the system is the oscillation wavelength $\lambda_{\text{osc}}$, and inferences about the visibility of the oscillation can be drawn by comparing the propagation distance $x$ with $\lambda_{\text{osc}}$.
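The two-flavour formula (13) can be evaluated directly. The Python sketch below computes the transition probability using the usual unit conversion for the phase (Δm²c³x/(4Eħ) ≈ 1.267 · Δm²[eV²] · L[km] / E[GeV]); the mixing parameter, mass splitting and baselines are assumed illustrative values, not measurements quoted in this article.

```python
import math

def transition_probability(theta: float, delta_m2_ev2: float, L_km: float, E_GeV: float) -> float:
    """Two-flavour oscillation probability from equation (13):
    P = sin^2(theta) * sin^2(Delta m^2 c^3 x / (4 E hbar)),
    with the phase evaluated as 1.267 * (Dm^2/eV^2) * (L/km) / (E/GeV)."""
    phase = 1.267 * delta_m2_ev2 * L_km / E_GeV
    return math.sin(theta) ** 2 * math.sin(phase) ** 2

# Illustrative (assumed) parameters only:
theta = math.radians(45)      # mixing parameter from the two-state treatment above
delta_m2 = 2.5e-3             # eV^2, assumed mass-squared splitting
for L in (295, 810, 1300):    # km, hypothetical baselines
    print(L, round(transition_probability(theta, delta_m2, L, E_GeV=2.0), 3))
```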
The 1964 paper by Christenson et al. [ 11 ] provided experimental evidence of CP violation in the neutral kaon system. The so-called long-lived kaon (CP = −1) decayed into two pions (CP = (−1)(−1) = +1), thereby violating CP conservation. With $|K^0\rangle$ and $|\bar K^0\rangle$ being the strangeness eigenstates (with eigenvalues +1 and −1 respectively), the energy eigenstates are the combinations $|K_1^0\rangle$ and $|K_2^0\rangle$. These two are also CP eigenstates, with eigenvalues +1 and −1 respectively. From the earlier notion of CP conservation (symmetry), the following were expected: $|K_1^0\rangle$ (CP = +1) should decay only into two pions and $|K_2^0\rangle$ (CP = −1) only into three pions. Since the two-pion decay is much faster than the three-pion decay, $|K_1^0\rangle$ was referred to as the short-lived kaon $|K_S^0\rangle$, and $|K_2^0\rangle$ as the long-lived kaon $|K_L^0\rangle$. The 1964 experiment showed that, contrary to what was expected, $|K_L^0\rangle$ could decay to two pions. This implied that the long-lived kaon cannot be purely the CP eigenstate $|K_2^0\rangle$, but must contain a small admixture of $|K_1^0\rangle$, thereby no longer being a CP eigenstate. [ 26 ] Similarly, the short-lived kaon was predicted to have a small admixture of $|K_2^0\rangle$. That is,

$$|K_L^0\rangle \propto |K_2^0\rangle + \varepsilon\,|K_1^0\rangle, \qquad |K_S^0\rangle \propto |K_1^0\rangle + \varepsilon\,|K_2^0\rangle,$$

where $\varepsilon$ is a complex quantity and a measure of the departure from CP invariance. Experimentally, $|\varepsilon| = (2.228 \pm 0.011)\times 10^{-3}$. [ 27 ] Writing $|K_1^0\rangle$ and $|K_2^0\rangle$ in terms of $|K^0\rangle$ and $|\bar K^0\rangle$, we obtain (keeping in mind that $m_{K_L^0} > m_{K_S^0}$ [ 27 ]) the form of equation (9), with

$$\frac{q}{p} = \frac{1-\varepsilon}{1+\varepsilon}.$$

Since $|\varepsilon| \neq 0$, condition (11) is satisfied, and there is a mixing between the strangeness eigenstates $|K^0\rangle$ and $|\bar K^0\rangle$, giving rise to a long-lived and a short-lived state. The $K_L^0$ and $K_S^0$ have two modes of two-pion decay: $\pi^0\pi^0$ or $\pi^+\pi^-$. Both of these final states are CP eigenstates of themselves. We can define the branching ratios as [ 25 ]

$$\eta_{+-} = \frac{\langle\pi^+\pi^-|K_L^0\rangle}{\langle\pi^+\pi^-|K_S^0\rangle}, \qquad \eta_{00} = \frac{\langle\pi^0\pi^0|K_L^0\rangle}{\langle\pi^0\pi^0|K_S^0\rangle}.$$

Experimentally, $\eta_{+-} = (2.232 \pm 0.011)\times 10^{-3}$ [ 27 ] and $\eta_{00} = (2.220 \pm 0.011)\times 10^{-3}$. That is, $\eta_{+-} \neq \eta_{00}$, implying $|A_{\pi^+\pi^-}/\bar A_{\pi^+\pi^-}| \neq 1$ and $|A_{\pi^0\pi^0}/\bar A_{\pi^0\pi^0}| \neq 1$, and thereby satisfying condition (10). In other words, direct CP violation is observed in the asymmetry between the two modes of decay.
If the final state (say $f_{CP}$) is a CP eigenstate (for example $\pi^+\pi^-$), then there are two different decay amplitudes corresponding to two different decay paths: [ 28 ] the direct decay $P \to f_{CP}$, and the decay via oscillation, $P \to \bar P \to f_{CP}$. CP violation can then result from the interference of these two contributions to the decay, as one mode involves only decay and the other involves oscillation and decay. The above description refers to flavor (or strangeness) eigenstates and energy (or CP) eigenstates. But which of them represents the "real" particle? What do we really detect in a laboratory? Quoting David J. Griffiths: [ 26 ] "The neutral kaon system adds a subtle twist to the old question, 'What is a particle?' Kaons are typically produced by the strong interactions, in eigenstates of strangeness (K⁰ and K̄⁰), but they decay by the weak interactions, as eigenstates of CP (K₁ and K₂). Which, then, is the 'real' particle? If we hold that a 'particle' must have a unique lifetime, then the 'true' particles are K₁ and K₂. But we need not be so dogmatic. In practice, it is sometimes more convenient to use one set, and sometimes, the other. The situation is in many ways analogous to polarized light. Linear polarization can be regarded as a superposition of left-circular polarization and right-circular polarization. If you imagine a medium that preferentially absorbs right-circularly polarized light, and shine on it a linearly polarized beam, it will become progressively more left-circularly polarized as it passes through the material, just as a K⁰ beam turns into a K₂ beam. But whether you choose to analyze the process in terms of states of linear or circular polarization is largely a matter of taste." If the system is a three-state system (for example, three species of neutrinos ν e ⇄ ν μ ⇄ ν τ , or three species of quarks d ⇄ s ⇄ b), then, just as in the two-state system, the flavor eigenstates (say $|\varphi_\alpha\rangle$, $|\varphi_\beta\rangle$, $|\varphi_\gamma\rangle$) are written as linear combinations of the energy (mass) eigenstates (say $|\psi_1\rangle$, $|\psi_2\rangle$, $|\psi_3\rangle$). That is,

$$\begin{pmatrix}|\varphi_\alpha\rangle\\|\varphi_\beta\rangle\\|\varphi_\gamma\rangle\end{pmatrix} = \begin{pmatrix}U_{\alpha 1}&U_{\alpha 2}&U_{\alpha 3}\\U_{\beta 1}&U_{\beta 2}&U_{\beta 3}\\U_{\gamma 1}&U_{\gamma 2}&U_{\gamma 3}\end{pmatrix} \begin{pmatrix}|\psi_1\rangle\\|\psi_2\rangle\\|\psi_3\rangle\end{pmatrix}.$$

In the case of leptons (neutrinos, for example) the transformation matrix is the PMNS matrix, and for quarks it is the CKM matrix. [ 29 ] [ a ] The off-diagonal terms of the transformation matrix represent coupling, and unequal diagonal terms imply mixing between the three states. The transformation matrix is unitary; an appropriate parameterization is made (depending on whether it is the CKM or PMNS matrix), and the values of the parameters are determined experimentally.
https://en.wikipedia.org/wiki/Neutral_particle_oscillation
In mechanics , the neutral plane or neutral surface is a conceptual plane within a beam or cantilever . When loaded by a bending force, the beam bends so that the inner surface is in compression and the outer surface is in tension . The neutral plane is the surface within the beam between these zones, where the material of the beam is not under stress , either compression or tension. [ 1 ] As there is no lengthwise stress force on the neutral plane, there is no strain or extension either: when the beam bends, the length of the neutral plane remains constant. Any line within the neutral plane parallel to the axis of the beam is called the deflection curve of the beam. To show that every beam must have a neutral plane, the material of the beam can be imagined to be divided into narrow fibers parallel to its length. When the beam is bent, at any given cross-section the region of fibers near the concave side will be under compression, while the region near the convex side will be under tension. Because the stress in the material must be continuous across any cross section, there must be a boundary between the regions of compression and tension at which the fibers have no stress. This is the neutral plane. [ 1 ] The location of the neutral plane can be an important factor in monocoque structures and pressure vessels . If the structure is a membrane supported by strength ribs, then placing the skin along the neutral surface avoids either compression or tension forces upon it. If the skin is already under external pressure, then this reduces the total force to which it is subject. In the design of submarines this has been an important, although subtle, issue. The US Fleet submarines of World War II had a hull section that was not quite circular, causing the nodal circle to separate from the neutral plane, giving rise to additional stresses. The original design was framed internally: this needed trial-and-error design refinement to produce acceptable dimensions for the rib scantlings . The designer Andrew I. McKee at Portsmouth Naval Shipyard developed an improved design. By placing the frames partly inside the hull and partly outside, the neutral axis could be rearranged to coincide with the nodal circle once more. This gave no resultant bending moment on the frames and so allowed a lighter and more efficient structure. [ 2 ] The property of remaining a constant length under load has been made use of in length metrology . When metal bars were developed as physical standards for length measures, they were calibrated as marks made on a length measured along the neutral plane. This avoided the minuscule changes in length, owing to the bar sagging under its own weight. The first length standards to use this technique were rectangular section solid bars. A blind hole was bored at each end, to the depth of the neutral plane, and the calibration marks were made at this depth. This was inconvenient, as it was impossible to measure directly between the two marks, but only with an offset trammel down the wells. A more convenient approach was used for the international prototype metre of 1870, a bar of platinum-iridium alloy which served as the definition of the meter from 1889 to 1960, when the CGPM redefined the meter based on a krypton standard . This bar was made with a splayed cross section called the Tresca section resembling an X with a connecting bar, or alternatively a H with the sides bent at an angle. 
One surface of the centre crossbar of the H was designed to coincide with the neutral plane, and the calibration marks defining the meter were scribed into this surface. [ 4 ]
https://en.wikipedia.org/wiki/Neutral_plane
Neutral red (toluylene red, Basic Red 5, or C.I. 50040) is a eurhodin dye used for staining in histology. It stains lysosomes red. [ 1 ] It is used as a general stain in histology, as a counterstain in combination with other dyes, and in many staining methods. Together with Janus Green B, it is used to stain embryonal tissues and for supravital staining of blood. It can be used for staining the Golgi apparatus in cells and Nissl granules in neurons. In microbiology, it is used in MacConkey agar to differentiate bacteria by lactose fermentation. Neutral red can be used as a vital stain. [ 2 ] The Neutral Red Cytotoxicity Assay was first developed by Ellen Borenfreund in 1984. In the neutral red assay, live cells incorporate neutral red into their lysosomes. As cells begin to die, their ability to incorporate neutral red diminishes; thus, loss of neutral red uptake corresponds to loss of cell viability. [ 3 ] Neutral red is also used to stain cell cultures for plate titration of viruses. Neutral red is added to some growth media for bacterial and cell cultures. It is usually available as a chloride salt. Neutral red acts as a pH indicator, changing from red to yellow between pH 6.8 and 8.0.
https://en.wikipedia.org/wiki/Neutral_red
In chemistry, neutralization or neutralisation (see spelling differences) is a chemical reaction in which an acid and a base react with an equivalent quantity of each other. In a reaction in water, neutralization results in there being no excess of hydrogen or hydroxide ions present in the solution. The pH of the neutralized solution depends on the acid strength of the reactants. In the context of a chemical reaction the term neutralization is used for a reaction between an acid and a base or alkali. Historically, this reaction was represented as

acid + base (alkali) → salt + water

For example:

HCl + NaOH → NaCl + H₂O

The statement is still valid as long as it is understood that in an aqueous solution the substances involved are subject to dissociation, which changes the ionization state of the substances. The arrow sign, →, is used because the reaction is complete, that is, neutralization is a quantitative reaction. A more general definition is based on Brønsted–Lowry acid–base theory:

AH + B → A + BH

Electrical charges are omitted from generic expressions such as this, as each species A, AH, B, or BH may or may not carry an electrical charge. Neutralization of sulfuric acid provides a specific example. Two partial neutralization reactions are possible in this instance:

H₂SO₄ + OH⁻ → HSO₄⁻ + H₂O, followed by HSO₄⁻ + OH⁻ → SO₄²⁻ + H₂O

After an acid AH has been neutralized there are no molecules of the acid (or hydrogen ions produced by dissociation of the molecule) left in solution. When an acid is neutralized the amount of base added to it must be equal to the amount of acid present initially. This amount of base is said to be the equivalent amount. In a titration of an acid with a base, the point of neutralization can also be called the equivalence point. The quantitative nature of the neutralization reaction is most conveniently expressed in terms of the concentrations of acid and alkali. At the equivalence point, the amount of base added is chemically equivalent to the amount of acid present. In general, for an acid AHₙ at concentration c₁ reacting with a base B(OH)ₘ at concentration c₂, the volumes are related by

n·c₁·v₁ = m·c₂·v₂

An example of a base being neutralized by an acid is as follows; the same equation relating the concentrations of acid and base applies. The concept of neutralization is not limited to reactions in solution. For example, the reaction of limestone with an acid such as sulfuric acid is also a neutralization reaction. Such reactions are important in soil chemistry. A strong acid is one that is fully dissociated in aqueous solution. For example, hydrochloric acid, HCl, is a strong acid. A strong base is one that is fully dissociated in aqueous solution. For example, sodium hydroxide, NaOH, is a strong base. Therefore, when a strong acid reacts with a strong base the neutralization reaction can be written as

H⁺ + OH⁻ → H₂O

For example, in the reaction between hydrochloric acid and sodium hydroxide the sodium and chloride ions, Na⁺ and Cl⁻, take no part in the reaction. The reaction is consistent with the Brønsted–Lowry definition because in reality the hydrogen ion exists as the hydronium ion, so that the neutralization reaction may be written as

H₃O⁺ + OH⁻ → 2 H₂O

When a strong acid is neutralized by a strong base there are no excess hydrogen ions left in the solution. The solution is said to be neutral as it is neither acidic nor alkaline. The pH of such a solution is close to a value of 7; the exact pH value is dependent on the temperature of the solution. Neutralization is an exothermic reaction. The standard enthalpy change for the reaction H⁺ + OH⁻ → H₂O is −57.30 kJ/mol.
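The volume relation above lends itself to a one-line calculation. The Python sketch below solves n·c₁·v₁ = m·c₂·v₂ for the base volume needed to reach the equivalence point; the concentrations and the diprotic-acid example are arbitrary illustrative values.

```python
def base_volume_at_equivalence(n_acid_protons: int, c_acid: float, v_acid: float,
                               m_base_hydroxides: int, c_base: float) -> float:
    """Solve n * c1 * v1 = m * c2 * v2 for v2 (any consistent volume unit)."""
    return n_acid_protons * c_acid * v_acid / (m_base_hydroxides * c_base)

# Illustrative: 25 mL of 0.10 M H2SO4 (n = 2) titrated with 0.20 M NaOH (m = 1).
print(base_volume_at_equivalence(2, 0.10, 25.0, 1, 0.20))  # -> 25.0 mL
```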
The term fully dissociated is applied to a solute when the concentration of undissociated solute is below the detection limits, that is, when the undissociated solute's concentration is too low to be measured. Quantitatively, this is expressed as log K < −2, or in some texts log K < −1.76. This means that the value of the dissociation constant cannot be obtained from experimental measurements. The value can, however, be estimated theoretically. For example, a value of log K ≈ −6 has been estimated for hydrogen chloride in aqueous solution at room temperature. [ 1 ] A chemical compound may behave as a strong acid in solution when its concentration is low and as a weak acid when its concentration is very high. Sulfuric acid is an example of such a compound. A weak acid HA is one that does not dissociate fully when it is dissolved in water. Instead an equilibrium mixture is formed. Acetic acid is an example of a weak acid. The pH of the neutralized solution resulting from

HA + OH⁻ → A⁻ + H₂O

is not close to 7, as with a strong acid, but depends on the acid dissociation constant, Kₐ, of the acid. The pH at the end-point or equivalence point in a titration may be calculated as follows. At the end-point the acid is completely neutralized, so the analytical hydrogen ion concentration, T_H, is zero and the concentration of the conjugate base, A⁻, is equal to the analytical or formal concentration T_A of the acid: [A⁻] = T_A. When a solution of an acid, HA, is at equilibrium, by definition the concentrations are related by the expression

Kₐ = [H⁺][A⁻] / [HA]

The solvent (e.g. water) is omitted from the defining expression on the assumption that its concentration is very much greater than the concentration of dissolved acid, [H₂O] ≫ T_A. The equation for mass-balance in hydrogen ions can then be written in terms of [H⁺], T_A, Kₐ and K_w, where K_w represents the self-dissociation constant of water. Since K_w = [H⁺][OH⁻], the term K_w/[H⁺] is equal to [OH⁻], the concentration of hydroxide ions. At neutralization, T_H is zero. After multiplying both sides of the equation by [H⁺], and after rearrangement and taking logarithms, the end-point pH is obtained as

pH = ½ pK_w + ½ log(1 + T_A/Kₐ)

With a dilute solution of the weak acid, the term 1 + T_A/Kₐ is equal to T_A/Kₐ to a good approximation. If pK_w = 14,

pH = 7 + ½ (pKₐ + log T_A)

This equation explains the following facts: the end-point pH depends on the strength of the acid, pKₐ, and on its concentration, and it is greater than 7. In a titration of a weak acid with a strong base the pH rises more steeply as the end-point is approached. At the end-point, the slope of the curve of pH with respect to the amount of titrant is a maximum. Since the end-point occurs at pH greater than 7, the most suitable indicator to use is one, like phenolphthalein, that changes color at high pH. [ 2 ] The situation is analogous to that of weak acids and strong bases. Amines are examples of weak bases. The pH of the neutralized solution depends on the acid dissociation constant of the protonated base, pKₐ, or, equivalently, on the base association constant, pK_b. The most suitable indicator to use for this type of titration is one, such as methyl orange, that changes color at low pH. When a weak acid reacts with an equivalent amount of a weak base, complete neutralization does not always occur. The concentrations of the species in equilibrium with each other will depend on the equilibrium constant, K, for the reaction, which is defined as follows:

K = [A⁻][BH⁺] / ([HA][B])

The neutralization reaction can be considered as the difference of the following two acid dissociation reactions,

HA ⇌ A⁻ + H⁺ and BH⁺ ⇌ B + H⁺

with the dissociation constants K_a,A and K_a,B of the acids HA and BH⁺, respectively.
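Before turning to weak acid–weak base mixtures, the end-point formula just derived can be checked numerically. The Python sketch below evaluates pH = 7 + ½(pKa + log10 TA); the acetic-acid-like pKa and the concentration are assumed illustrative values.

```python
import math

def endpoint_ph(pka: float, acid_concentration: float, pkw: float = 14.0) -> float:
    """End-point pH for titration of a weak acid with a strong base,
    using the dilute-solution approximation pH = pKw/2 + (pKa + log10 TA)/2."""
    return 0.5 * pkw + 0.5 * (pka + math.log10(acid_concentration))

# Illustrative: an acid with pKa = 4.76 (acetic-acid-like) at 0.1 mol/L gives an
# end-point pH of about 8.9, i.e. in the colour-change range of phenolphthalein.
print(round(endpoint_ph(4.76, 0.1), 2))
```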
Inspection of the reaction quotients shows that A weak acid cannot always be neutralized by a weak base, and vice versa. However, for the neutralization of benzoic acid ( K a,A = 6.5 × 10 −5 ) with ammonia ( K a,B = 5.6 × 10 −10 for ammonium ), K = 1.2 × 10 5 >> 1, and more than 99% of the benzoic acid is converted to benzoate. Chemical titration methods are used for analyzing acids or bases to determine the unknown concentration . Either a pH meter or a pH indicator which shows the point of neutralization by a distinct color change can be employed. Simple stoichiometric calculations with the known volume of the unknown and the known volume and molarity of the added chemical gives the molarity of the unknown. In wastewater treatment , chemical neutralization methods are often applied to reduce the damage that an effluent may cause upon release to the environment. For pH control, popular chemicals include calcium carbonate , calcium oxide , magnesium hydroxide , and sodium bicarbonate . The selection of an appropriate neutralization chemical depends on the particular application. There are many uses of neutralization reactions that are acid-alkali reactions. A very common use is antacid tablets. These are designed to neutralize excess gastric acid in the stomach ( HCl ) that may be causing discomfort in the stomach or lower esophagus. This can also be remedied by the ingestion of sodium bicarbonate (NaHCO 3 ). Sodium bicarbonate is also commonly used to neutralise acid spills in laboratories, as well as acid burns . In chemical synthesis of nanomaterials, the heat of neutralization reaction can be used to facilitate the chemical reduction of metal precursors. [ 3 ] Also in the digestive tract, neutralization reactions are used when food is moved from the stomach to the intestines. In order for the nutrients to be absorbed through the intestinal wall, an alkaline environment is needed, so the pancreas produce an antacid bicarbonate to cause this transformation to occur. Another common use, though perhaps not as widely known, is in fertilizers and control of soil pH . Slaked lime ( calcium hydroxide ) or limestone ( calcium carbonate ) may be worked into soil that is too acidic for plant growth. Fertilizers that improve plant growth are made by neutralizing sulfuric acid (H 2 SO 4 ) or nitric acid (HNO 3 ) with ammonia gas (NH 3 ), making ammonium sulfate or ammonium nitrate . These are salts utilized in the fertilizer. Industrially, a by-product of the burning of coal , sulfur dioxide gas, may combine with water vapor in the air to eventually produce sulfuric acid, which falls as acid rain. To prevent the sulfur dioxide from being released, a device known as a scrubber gleans the gas from smoke stacks. This device first blows calcium carbonate into the combustion chamber where it decomposes into calcium oxide (lime) and carbon dioxide. This lime then reacts with the sulfur dioxide produced forming calcium sulfite . A suspension of lime is then injected into the mixture to produce a slurry, which removes the calcium sulfite and any remaining unreacted sulfur dioxide. In the final manufacturing quality control process products often undergo acid treatment & the acid treatments have to be neutralized to prevent corrosion of the product. Neutralization is covered in most general chemistry textbooks. Detailed treatments may be found in textbooks on analytical chemistry such as Applications
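Two of the quantitative statements above can be checked with a short numerical sketch: the end-point pH approximation that the weak-acid derivation above leads to (in its standard form, pH ≈ ½(pK w + pK a + log 10 T A )), and the benzoic acid–ammonia example, where K = K a,A / K a,B . The acetic acid pK a and the 0.1 M concentration in the first part are illustrative assumptions; the dissociation constants in the second part are the values quoted above.

```python
import math

# Part 1: end-point pH for a weak acid titrated with a strong base,
# using the dilute-solution approximation pH = (pKw + pKa + log10(T_A)) / 2.
def end_point_pH(pKa, T_A, pKw=14.0):
    return 0.5 * (pKw + pKa + math.log10(T_A))

print(round(end_point_pH(pKa=4.76, T_A=0.10), 2))   # acetic acid: ~8.9, i.e. above 7

# Part 2: weak acid + weak base, K = Ka(HA) / Ka(BH+), benzoic acid vs. ammonia.
Ka_A, Ka_B = 6.5e-5, 5.6e-10
K = Ka_A / Ka_B
# For equal analytical concentrations C, K = x**2 / (C - x)**2, so the
# converted fraction is x / C = sqrt(K) / (1 + sqrt(K)).
fraction = math.sqrt(K) / (1 + math.sqrt(K))
print(f"K ~ {K:.1e}, fraction of benzoic acid converted ~ {fraction:.1%}")  # > 99%
```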
https://en.wikipedia.org/wiki/Neutralization_(chemistry)
Neutrino astronomy is a branch of astronomy that gathers information about astronomical objects by observing and studying neutrinos emitted by them with the help of neutrino detectors in special Earth observatories. [ 1 ] It is an emerging field in astroparticle physics providing insights into the high-energy and non-thermal processes in the universe. Neutrinos are nearly massless and electrically neutral or chargeless elementary particles . They are created as a result of certain types of radioactive decay , nuclear reactions such as those that take place in the Sun or high energy astrophysical phenomena, in nuclear reactors , or when cosmic rays hit atoms in the atmosphere. Neutrinos rarely interact with matter (only via the weak nuclear force), travel at nearly the speed of light in straight lines, pass through large amounts of matter without any notable absorption or without being deflected by magnetic fields. Unlike photons, neutrinos rarely scatter along their trajectory. But like photons, neutrinos are some of the most common particles in the universe. Because of this, neutrinos offer a unique opportunity to observe processes that are inaccessible to optical telescopes , such as reactions in the Sun's core. Neutrinos that are created in the Sun’s core are barely absorbed, so a large quantity of them escape from the Sun and reach the Earth. Neutrinos can also offer a very strong pointing direction compared to charged particle cosmic rays. Neutrinos are very hard to detect due to their non-interactive nature. In order to detect neutrinos, scientists have to shield the detectors from cosmic rays, which can penetrate hundreds of meters of rock. Neutrinos, on the other hand, can go through the entire planet without being absorbed, like "ghost particles". That's why neutrino detectors are placed many hundreds of meter underground, usually at the bottom of mines. There a neutrino detection liquid such as a Chlorine-rich solution is placed; the neutrinos react with a Chlorine isotope and can create radioactive Argon. Gallium to Germanium conversion has also been used. [ 2 ] The IceCube Neutrino Observatory built in 2010 in the south pole is the biggest neutrino detector, consisting of thousands of optical sensors buried 500 meters underneath a cubic kilometer of deep, ultra-transparent ice, detects light emitted by charged particles that are produced when a single neutrino collides with a proton or neutron inside an atom. The resulting nuclear reaction produces secondary particles traveling at high speeds that give off a blue light called Cherenkov radiation . [ 3 ] Super-Kamiokande in Japan and ANTARES and KM3NeT in the Mediterranean are some other important neutrino detectors. Since neutrinos interact weakly, neutrino detectors must have large target masses (often thousands of tons). The detectors also must use shielding and effective software to remove background signal. Since neutrinos are very difficult to detect, the only bodies that have been studied in this way are the sun and the supernova SN1987A, which exploded in 1987. Scientist predicted that supernova explosions would produce bursts of neutrinos, and a similar burst was actually detected from Supernova 1987A. In the future neutrino astronomy promises to discover other aspects of the universe, including coincidental gravitational waves , gamma ray bursts , the cosmic neutrino background , origins of ultra-high-energy neutrinos, neutrino properties (such as neutrino mass hierarchy), dark matter properties, etc. 
It will become an integral part of multi-messenger astronomy , complementing gravitational astronomy and traditional telescopic astronomy. Neutrinos were first recorded in 1956 by Clyde Cowan and Frederick Reines in an experiment employing a nearby nuclear reactor as a neutrino source. [ 4 ] Their discovery was acknowledged with a Nobel Prize in Physics in 1995. [ 5 ] This was followed by the first atmospheric neutrino detection in 1965 by two groups almost simultaneously. One was led by Frederick Reines who operated a liquid scintillator - the Case-Witwatersrand-Irvine or CWI detector - in the East Rand gold mine in South Africa at an 8.8 km water depth equivalent. [ 6 ] The other was a Bombay-Osaka-Durham collaboration that operated in the Indian Kolar Gold Field mine at an equivalent water depth of 7.5 km. [ 7 ] Although the KGF group detected neutrino candidates two months later than Reines CWI, they were given formal priority due to publishing their findings two weeks earlier. [ 8 ] In 1968, Raymond Davis, Jr. and John N. Bahcall successfully detected the first solar neutrinos in the Homestake experiment . [ 9 ] Davis, along with Japanese physicist Masatoshi Koshiba were jointly awarded half of the 2002 Nobel Prize in Physics "for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos (the other half went to Riccardo Giacconi for corresponding pioneering contributions which have led to the discovery of cosmic X-ray sources)." [ 10 ] The first generation of undersea neutrino telescope projects began with the proposal by Moisey Markov in 1960 "...to install detectors deep in a lake or a sea and to determine the location of charged particles with the help of Cherenkov radiation ." [ 8 ] [ 11 ] The first underwater neutrino telescope began as the DUMAND project. DUMAND stands for Deep Underwater Muon and Neutrino Detector. The project began in 1976 and although it was eventually cancelled in 1995, it acted as a precursor to many of the following telescopes in the following decades. [ 8 ] The Baikal Neutrino Telescope is installed in the southern part of Lake Baikal in Russia. The detector is located at a depth of 1.1 km and began surveys in 1980. In 1993, it was the first to deploy three strings to reconstruct the muon trajectories as well as the first to record atmospheric neutrinos underwater. [ 12 ] AMANDA (Antarctic Muon And Neutrino Detector Array) used the 3 km thick ice layer at the South Pole and was located several hundred meters from the Amundsen-Scott station . Holes 60 cm in diameter were drilled with pressurized hot water in which strings with optical modules were deployed before the water refroze. The depth proved to be insufficient to be able to reconstruct the trajectory due to the scattering of light on air bubbles. A second group of 4 strings were added in 1995/96 to a depth of about 2000 m that was sufficient for track reconstruction. The AMANDA array was subsequently upgraded until January 2000 when it consisted of 19 strings with a total of 667 optical modules at a depth range between 1500 m and 2000 m. AMANDA would eventually be the predecessor to IceCube in 2005. [ 8 ] [ 12 ] An example of an early neutrino detector is the Artyomovsk Scintillation Detector [ ru ] (ASD), located in the Soledar Salt Mine in Ukraine at a depth of more than 100 m. 
It was created in the Department of High Energy Leptons and Neutrino Astrophysics of the Institute of Nuclear Research of the USSR Academy of Sciences in 1969 to study antineutrino fluxes from collapsing stars in the Galaxy, as well as the spectrum and interactions of muons of cosmic rays with energies up to 10 ^ 13 eV. A feature of the detector is a 100-ton scintillation tank with dimensions on the order of the length of an electromagnetic shower with an initial energy of 100 GeV. [ 13 ] After the decline of DUMAND the participating groups split into three branches to explore deep sea options in the Mediterranean Sea. ANTARES was anchored to the sea floor in the region off Toulon at the French Mediterranean coast. It consists of 12 strings, each carrying 25 "storeys" equipped with three optical modules, an electronic container, and calibration devices down to a maximum depth of 2475 m. [ 12 ] NEMO (NEutrino Mediterranean Observatory) was pursued by Italian groups to investigate the feasibility of a cubic-kilometer scale deep-sea detector. A suitable site at a depth of 3.5 km about 100 km off Capo Passero at the South-Eastern coast of Sicily has been identified. From 2007 to 2011 the first prototyping phase tested a "mini-tower" with 4 bars deployed for several weeks near Catania at a depth of 2 km. The second phase as well as plans to deploy the full-size prototype tower will be pursued in the KM3NeT framework. [ 8 ] [ 12 ] The NESTOR Project was installed in 2004 to a depth of 4 km and operated for one month until a failure of the cable to shore forced it to be terminated. The data taken still successfully demonstrated the detector's functionality and provided a measurement of the atmospheric muon flux. The proof of concept will be implemented in the KM3Net framework. [ 8 ] [ 12 ] The second generation of deep-sea neutrino telescope projects reach or even exceed the size originally conceived by the DUMAND pioneers. IceCube , located at the South Pole and incorporating its predecessor AMANDA, was completed in December 2010. It currently consists of 5160 digital optical modules installed on 86 strings at depths of 1450 to 2550 m in the Antarctic ice. The KM3NeT in the Mediterranean Sea and the GVD are in their preparatory/prototyping phase. IceCube instruments 1 km 3 of ice. GVD is also planned to cover 1 km 3 but at a much higher energy threshold. KM3NeT is planned to cover several km 3 and have two components; ARCA ( Astroparticle Research with Cosmics in the Abyss ) and ORCA ( Oscillations Research with Cosmics in the Abyss ). [ 14 ] Both KM3NeT and GVD have completed at least part of their construction [ 14 ] [ 15 ] and it is expected that these two along with IceCube will form a global neutrino observatory. [ 12 ] In July 2018, the IceCube Neutrino Observatory announced that they have traced an extremely-high-energy neutrino that hit their Antarctica-based research station in September 2017 back to its point of origin in the blazar TXS 0506+056 located 3.7 billion light-years away in the direction of the constellation Orion . This is the first time that a neutrino detector has been used to locate an object in space and that a source of cosmic rays has been identified. [ 16 ] [ 17 ] [ 18 ] In November 2022, the IceCube collaboration made another significant progress towards identifying the origin of cosmic rays, reporting the observation of 79 neutrinos with an energy over 1 TeV originated from the nearby galaxy M77 . 
These findings in a well-known object are expected to help study the active nucleus of this galaxy, as well as serving as a baseline for future observations. [ 19 ] [ 20 ] In June 2023, astronomers reported using a new technique to detect, for the first time, the release of neutrinos from the galactic plane of the Milky Way galaxy . [ 21 ] [ 22 ] Neutrinos interact incredibly rarely with matter, so the vast majority of neutrinos will pass through a detector without interacting. If a neutrino does interact, it will only do so once. Therefore, to perform neutrino astronomy, large detectors must be used to obtain enough statistics. [ 23 ] The method of neutrino detection depends on the energy and type of the neutrino. A famous example is that electron antineutrinos can interact with a nucleus in the detector by inverse beta decay and produce a positron and a neutron. The positron will immediately annihilate with an electron, producing two 511 keV photons. The neutron will attach to another nucleus and give off a gamma ray with an energy of a few MeV. [ 24 ] In general, neutrinos can interact through neutral-current and charged-current interactions. In neutral-current interactions, the neutrino interacts with a nucleus or electron and retains its original flavor. In charged-current interactions, the neutrino is absorbed by the nucleus and produces a lepton corresponding to the neutrino's flavor (ν e → e − , ν μ → μ − , etc.). If the resulting charged particles are moving fast enough, they can create Cherenkov light . [ 25 ] To observe neutrino interactions, detectors use photomultiplier tubes (PMTs) to detect individual photons. From the timing of the photons, it is possible to determine the time and place of the neutrino interaction. [ 23 ] If the neutrino creates a muon during its interaction, the muon will travel in a line, creating a "track" of Cherenkov photons. The data from this track can be used to reconstruct the direction of the muon. For high-energy interactions, the neutrino and muon directions are nearly the same, so it is possible to tell where the neutrino came from. This pointing ability is important for neutrino astronomy beyond the Solar System. [ 26 ] Along with time, position, and possibly direction, it is possible to infer the energy of the neutrino from the interaction. The number of photons emitted is related to the neutrino energy, and neutrino energy is important for measuring the fluxes of solar and geo-neutrinos. [ 23 ] Because neutrino interactions are so rare, it is important to maintain a low background signal. For this reason, most neutrino detectors are constructed under a rock or water overburden. This overburden shields against most cosmic rays in the atmosphere; only some of the highest-energy muons are able to penetrate to the depths of the detectors. Detectors must include ways of handling muon data so as not to confuse muons with neutrinos. Along with more complicated measures, if a muon track is first detected outside of the desired "fiducial" volume, the event is treated as a muon and not considered. Ignoring events outside the fiducial volume also reduces the signal from radiation outside the detector. [ 23 ] Despite shielding efforts, it is inevitable that some background will make it into the detector, often in the form of radioactive impurities within the detector itself.
At this point, if it is impossible to differentiate between the background and true signal, a Monte Carlo simulation must be used to model the background. While it may be unknown if an individual event is background or signal, it is possible to detect an excess about the background, signifying existence of the desired signal. [ 27 ] When astronomical bodies, such as the Sun , are studied using light, only the surface of the object can be directly observed. Any light produced in the core of a star will interact with gas particles in the outer layers of the star, taking hundreds of thousands of years to make it to the surface, making it impossible to observe the core directly. Since neutrinos are also created in the cores of stars (as a result of stellar fusion ), the core can be observed using neutrino astronomy. [ 28 ] [ 29 ] Other sources of neutrinos- such as neutrinos released by supernovae- have been detected. Several neutrino experiments have formed the Supernova Early Warning System (SNEWS), where they search for an increase of neutrino flux that could signal a supernova event. [ 30 ] There are currently goals to detect neutrinos from other sources, such as active galactic nuclei (AGN), as well as gamma-ray bursts and starburst galaxies . Neutrino astronomy may also indirectly detect dark matter. Seven neutrino experiments (Super-K, LVD, IceCube, KamLAND, Borexino , Daya Bay, and HALO) work together as the Supernova Early Warning System ( SNEWS ). [ 31 ] In a core collapse supernova, ninety-nine percent of the energy released will be in neutrinos. While photons can be trapped in the dense supernova for hours, neutrinos are able to escape on the order of seconds. Since neutrinos travel at roughly the speed of light, they can reach Earth before photons do. If two or more of SNEWS detectors observe a coincidence of an increased flux of neutrinos, an alert is sent to professional and amateur astronomers to be on the lookout for supernova light. By using the distance between detectors and the time difference between detections, the alert can also include directionality as to the supernova's location in the sky. The Sun, like other stars, is powered by nuclear fusion in its core. The core is incredibly large, meaning that photons produced in the core will take a long time to diffuse outward. Therefore, neutrinos are the only way that we can obtain real-time data about the nuclear processes in the Sun. [ 32 ] There are two main processes for stellar nuclear fusion. The first is the Proton-Proton (PP) chain, in which protons are fused together into helium, sometimes temporarily creating the heavier elements of lithium, beryllium, and boron along the way. The second is the CNO cycle, in which carbon, nitrogen, and oxygen are fused with protons, and then undergo alpha decay (helium nucleus emission) to begin the cycle again. The PP chain is the primary process in the Sun, while the CNO cycle is more dominant in stars more massive than the Sun. [ 27 ] Each step in the process has an allowed spectra of energy for the neutrino (or a discrete energy for electron capture processes). The relative rates of the Sun's nuclear processes can be determined by observations in its flux at different energies. This would shed insight into the Sun's properties, such as metallicity , which is the composition of heavier elements. [ 27 ] Borexino is one of the detectors studying solar neutrinos. 
In 2018, they found 5σ significance for the existence of neutrinos from the fusing of two protons with an electron (pep neutrinos). [ 32 ] In 2020, they found for the first time evidence of CNO neutrinos in the Sun. Improvements on the CNO measurement will be especially helpful in determining the Sun's metallicity. [ 27 ] The interior of Earth contains radioactive elements such as potassium-40 ( 40 K) and the decay chains of uranium-238 ( 238 U) and thorium-232 ( 232 Th). These elements decay via beta decay , which emits an antineutrino. The energies of these antineutrinos depend on the parent nucleus. Therefore, by detecting the antineutrino flux as a function of energy, one can obtain the relative abundances of these elements and set a limit on the total power output of Earth's geo-reactor. Most current data about the core and mantle of Earth comes from seismic measurements, which provide no information about the nuclear composition of these layers. [ 33 ] Borexino has detected these geo-neutrinos through inverse beta decay, ν̄ + p → e + + n. The resulting positron immediately annihilates with an electron and produces two gamma rays, each with an energy of 511 keV (the rest-mass energy of an electron). The neutron is later captured by another nucleus, which leads to a 2.22 MeV gamma ray as the nucleus de-excites. This capture on average takes on the order of 256 microseconds. By searching for time and spatial coincidence of these gamma rays, the experimenters can reliably identify an event. [ 33 ] Using over 3,200 days of data, Borexino used geoneutrinos to place constraints on the composition and power output of the mantle. They found that the ratio of 238 U to 232 Th is the same as in chondritic meteorites. The power output from uranium and thorium in Earth's mantle was found to be 14.2–35.7 TW with a 68% confidence interval. [ 23 ] Neutrino tomography also provides insight into the interior of Earth. For neutrinos with energies of a few TeV, the interaction probability becomes non-negligible when passing through Earth. The interaction probability depends on the number of nucleons the neutrino passes along its path, which is directly related to density. If the initial flux is known (as it is in the case of atmospheric neutrinos), then detecting the final flux provides information about the interactions that occurred, and the density can be extrapolated from knowledge of these interactions. This provides an independent check on the information obtained from seismic data. [ 34 ] In 2018, one year's worth of IceCube data was evaluated to perform neutrino tomography. The analysis studied upward-going muons, which provide both the energy and directionality of the neutrinos after passing through the Earth. A model of Earth with five layers of constant density was fit to the data, and the resulting density profile agreed with seismic data. The values determined for the total mass of Earth, the mass of the core, and the moment of inertia all agree with seismic and gravitational data. With the current data, the uncertainties on these values are still large, but future data from IceCube and KM3NeT will place tighter constraints on them. Neutrinos can either be primary cosmic rays (astrophysical neutrinos), or be produced from cosmic ray interactions.
In the latter case, the primary cosmic ray will produce pions and kaons in the atmosphere. As these hadrons decay, they produce neutrinos (called atmospheric neutrinos). At low energies, the flux of atmospheric neutrinos is many times greater than astrophysical neutrinos. At high energies, the pions and kaons have a longer lifetime (due to relativistic time dilation). The hadrons are now more likely to interact before they decay. Because of this, the astrophysical neutrino flux will dominate at high energies (~100TeV). To perform neutrino astronomy of high-energy objects, experiments rely on the highest energy neutrinos. [ 35 ] To perform astronomy of distant objects, a strong angular resolution is required. Neutrinos are electrically neutral and interact weakly, so they travel mostly unperturbed in straight lines. If the neutrino interacts within a detector and produces a muon, the muon will produce an observable track. At high energies, the neutrino direction and muon direction are closely correlated, so it is possible to trace back the direction of the incoming neutrino. [ 35 ] These high-energy neutrinos are either the primary or secondary cosmic rays produced by energetic astrophysical processes. Observing neutrinos could provide insights into these processes beyond what is observable with electromagnetic radiation. In the case of the neutrino detected from a distant blazar, multi-wavelength astronomy was used to show spatial coincidence, confirming the blazar as the source. In the future, neutrinos could be used to supplement electromagnetic and gravitational observations, leading to multi-messenger astronomy. [ 26 ]
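As a small illustration of the detection principle described above, the sketch below evaluates the minimum energy a charged lepton needs before it emits Cherenkov light (the condition β > 1/n). The muon mass and the refractive indices of water and ice are standard values assumed here for illustration; they are not taken from the article.

```python
import math

def cherenkov_threshold_total_energy(mass_MeV, n):
    """Total energy (MeV) at which a charged particle of the given rest mass
    starts emitting Cherenkov light in a medium of refractive index n
    (condition: beta > 1/n, i.e. gamma > 1/sqrt(1 - 1/n**2))."""
    gamma_min = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return gamma_min * mass_MeV

# Illustrative, assumed values: muon rest mass ~105.7 MeV,
# water n ~ 1.33, deep glacial ice n ~ 1.31
for medium, n in [("water", 1.33), ("ice", 1.31)]:
    e_thr = cherenkov_threshold_total_energy(105.7, n)
    print(f"{medium}: muon Cherenkov threshold ~ {e_thr:.0f} MeV total energy")
```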
https://en.wikipedia.org/wiki/Neutrino_astronomy
Neutrinoless double beta decay ( 0νββ ) is a hypothetical radioactive decay process, widely proposed and experimentally pursued, whose observation would prove the Majorana nature of the neutrino . [ 1 ] [ 2 ] To this day, it has not been found. [ 2 ] [ 3 ] [ 4 ] The discovery of neutrinoless double beta decay could shed light on the absolute neutrino masses and on their mass hierarchy (see Neutrino mass ). It would be the first ever observed violation of total lepton number conservation. [ 5 ] A Majorana nature of neutrinos would confirm that the neutrino is its own antiparticle . [ 6 ] A number of experiments searching for neutrinoless double beta decay are currently underway, with several future experiments of increased sensitivity proposed as well. [ 7 ] The Italian physicist Ettore Majorana first introduced the concept of a particle being its own antiparticle in 1937. [ 6 ] Particles of this nature were subsequently named Majorana particles after him. In 1939, Wendell H. Furry proposed the idea of a Majorana nature of the neutrino in connection with beta decays. [ 8 ] Furry stated that the transition probability would be even higher for neutrinoless double beta decay. [ 8 ] It was the first idea proposed for searching for a violation of lepton number conservation. [ 1 ] The process has since drawn attention as a useful way to study the nature of neutrinos. Neutrinos are conventionally produced in weak decays. [ 5 ] Weak beta decays normally produce one electron (or positron ), emit an antineutrino (or neutrino) and increase (or decrease) the proton number Z of the nucleus by one. The mass (i.e. binding energy ) of the resulting nucleus is then lower and thus more favorable. There exist a number of elements that can decay into a nucleus of lower mass, but cannot emit just one electron, because the resulting nucleus would be kinematically (that is, in terms of energy) unfavorable (its energy would be higher). [ 2 ] These nuclei can only decay by emitting two electrons (that is, via double beta decay ). There are about a dozen confirmed cases of nuclei that can only decay via double beta decay. [ 2 ] The corresponding decay equation is (A, Z) → (A, Z+2) + 2 e − + 2 ν̄ e . It is a weak process of second order. [ 2 ] A simultaneous decay of two nucleons in the same nucleus is extremely unlikely; the experimentally observed lifetimes of such decay processes are therefore in the range of 10^18 to 10^21 years. [ 10 ] A number of isotopes have already been observed to undergo this two-neutrino double beta decay. [ 3 ] This conventional double beta decay is allowed in the Standard Model of particle physics , and thus has both a theoretical and an experimental foundation. [ 3 ] If neutrinos are Majorana in nature, then a neutrino can be emitted and absorbed within the same process without showing up in the final state. [ 3 ] If they were Dirac particles , both neutrinos produced by the decay of the W bosons would be emitted and not subsequently absorbed. [ 3 ] Neutrinoless double beta decay can therefore only occur if the neutrino is its own antiparticle, i.e. a Majorana particle. The simplest decay process is known as light neutrino exchange. [ 3 ] It features one neutrino emitted by one nucleon and absorbed by another nucleon. In the final state, the only remaining parts are the nucleus (with its changed proton number Z ) and two electrons: (A, Z) → (A, Z+2) + 2 e − . The two electrons are emitted quasi-simultaneously. [ 10 ]
The two resulting electrons are then the only emitted particles in the final state and must carry approximately the difference of the binding energies of the two nuclei before and after the process as their kinetic energy; [ 12 ] the heavy nuclei do not carry away significant kinetic energy. In that case, the decay rate can be calculated from 1/T_1/2 = G^0ν · |M^0ν|^2 · ⟨m_ββ⟩^2, where G^0ν denotes the phase-space factor, |M^0ν|^2 the (squared) matrix element of the nuclear decay process (according to the Feynman diagram), and ⟨m_ββ⟩^2 the square of the effective Majorana mass. [ 5 ] First, the effective Majorana mass can be obtained from ⟨m_ββ⟩ = | Σ_i U_ei^2 m_i |, where m_i are the Majorana neutrino masses (for the three neutrinos ν_i ) and U_ei the elements of the neutrino mixing matrix U (see PMNS matrix ). [ 7 ] Contemporary experiments searching for neutrinoless double beta decay (see section on experiments) aim both at proving the Majorana nature of neutrinos and at measuring this effective Majorana mass ⟨m_ββ⟩ (the latter is possible only if the decay is actually generated by the neutrino masses). [ 7 ] The nuclear matrix element (NME) |M^0ν| cannot be measured independently; it must, but also can, be calculated. [ 13 ] The calculation relies on sophisticated nuclear many-body theories, and there exist different methods to do this. The NME |M^0ν| also differs from nucleus to nucleus (i.e. from chemical element to chemical element). Today, the calculation of the NME is a significant problem, and it has been treated by different authors in different ways. One question is whether the range of obtained values for |M^0ν| should be treated as the theoretical uncertainty, and whether that is then to be understood as a statistical uncertainty. [ 7 ] Different approaches are chosen here. The obtained values for |M^0ν| often vary by factors of 2 up to about 5; typical values lie in the range from about 0.9 to 14, depending on the decaying nucleus/element. [ 7 ] Lastly, the phase-space factor G^0ν must also be calculated. [ 7 ] It depends on the total released kinetic energy ( Q = M_nucleus,before − M_nucleus,after − 2 m_electron , i.e. the " Q -value") and on the atomic number Z . Calculations use Dirac wave functions , finite nuclear sizes and electron screening. [ 7 ] There exist high-precision results for G^0ν for various nuclei, ranging from about 0.23 (for 128 Te → 128 Xe) and 0.90 ( 76 Ge → 76 Se) up to about 24.14 ( 150 Nd → 150 Sm). [ 7 ]
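As a numerical illustration of the rate formula above, the sketch below evaluates the half-life for assumed values of the phase-space factor, nuclear matrix element and effective Majorana mass, with ⟨m_ββ⟩ expressed in units of the electron mass (one common convention). All input numbers are order-of-magnitude assumptions for a 76 Ge-like isotope, not measured values.

```python
# Sketch of the neutrinoless double beta decay rate formula discussed above:
# 1 / T_half = G0nu * |M0nu|**2 * (m_bb / m_e)**2
# All numbers below are illustrative assumptions, not measured values.

M_E_EV = 0.511e6  # electron rest mass in eV

def half_life_years(G0nu_per_year, NME, m_bb_eV):
    """Half-life in years for an assumed phase-space factor G0nu (1/yr),
    nuclear matrix element NME, and effective Majorana mass m_bb (eV)."""
    rate_per_year = G0nu_per_year * NME**2 * (m_bb_eV / M_E_EV) ** 2
    return 1.0 / rate_per_year

# Assumed, order-of-magnitude inputs for a Ge-76-like isotope:
print(f"{half_life_years(G0nu_per_year=2.4e-15, NME=4.0, m_bb_eV=0.1):.1e} years")
```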
It is believed that, if neutrinoless double beta decay is found under certain conditions (a decay rate compatible with predictions based on experimental knowledge of neutrino masses and mixing), this would indeed "likely" point to Majorana neutrinos as the main mediator (rather than other sources of new physics). [ 7 ] There are 35 nuclei that can undergo neutrinoless double beta decay (according to the decay conditions mentioned above). [ 3 ] Nine candidate nuclei are being considered in experiments to search for neutrinoless double beta decay: 48 Ca, 76 Ge, 82 Se, 96 Zr, 100 Mo, 116 Cd, 130 Te, 136 Xe and 150 Nd. [ 3 ] They all have arguments for and against their use in an experiment. Factors to be weighed include natural abundance , affordable enrichment, and a well understood and controlled experimental technique. [ 3 ] The higher the Q -value, the better the chances of a discovery, in principle: the phase-space factor G^0ν , and thus the decay rate, grows with Q^5 . [ 3 ] The quantity measured experimentally is the sum of the kinetic energies of the two emitted electrons; for neutrinoless double beta emission it should equal the Q -value of the respective nucleus. [ 3 ] The currently best limits on the 0νββ lifetime imply that neutrinoless double beta decay is an extremely rare process, if it occurs at all. The so-called "Heidelberg–Moscow collaboration" (HDM; 1990–2003) of the German Max-Planck-Institut für Kernphysik and the Russian science center Kurchatov Institute in Moscow famously claimed to have found "evidence for neutrinoless double beta decay" ( Heidelberg–Moscow controversy ). [ 21 ] [ 22 ] Initially, in 2001, the collaboration announced evidence at the 2.2σ or 3.1σ level (depending on the calculation method used). [ 21 ] The half-life was found to be around 2 × 10^25 years. [ 3 ] This result has been the topic of discussion among many scientists and authors. [ 3 ] To this day, no other experiment has ever confirmed or reproduced the result of the HDM group. [ 7 ] Instead, recent results from the GERDA experiment for the lifetime limit clearly disfavor and reject the values of the HDM collaboration. [ 7 ] Neutrinoless double beta decay has not yet been found. [ 4 ] The Germanium Detector Array (GERDA) collaboration's result from phase I of the detector was a limit of T^0ν_1/2 > 2.1 × 10^25 years (90% C.L.). [ 23 ] It used germanium both as source and as detector material. [ 23 ] Liquid argon was used for muon vetoing and as shielding from background radiation. [ 23 ] The Q -value of 76 Ge for 0νββ decay is 2039 keV, but no excess of events in this region was found. [ 24 ] Phase II of the experiment started data-taking in 2015, and it used around 36 kg of germanium for the detectors. [ 24 ] The exposure analyzed until July 2020 was 10.8 kg·yr. Again, no signal was found, and a new limit was set at T^0ν_1/2 > 5.3 × 10^25 years (90% C.L.). [ 25 ] The detector has since been decommissioned, and the collaboration published its final results in December 2020: no neutrinoless double beta decay was observed. [ 15 ]
The successor experiment is LEGEND, which uses the same technology to achieve sensitivity to longer lifetimes. [ 26 ] The Enriched Xenon Observatory-200 (EXO-200) experiment uses xenon both as source and as detector. [ 23 ] The experiment is located in New Mexico (US) and uses a time-projection chamber (TPC) for three-dimensional spatial and temporal resolution of the electron track depositions. [ 23 ] EXO-200 yielded a lifetime limit of T^0ν_1/2 > 3.5 × 10^25 years (90% C.L.). [ 19 ] When translated into an effective Majorana mass, this is a limit of the same order as that obtained by GERDA phases I and II. [ 23 ] Analogous neutrinoless processes are also sought in charged-lepton decays. The muon decays as μ + → e + + ν e + ν̄ μ and μ − → e − + ν̄ e + ν μ . Decays without neutrino emission, such as μ + → e + + γ , μ − → e − + γ , μ + → e + + e − + e + and μ − → e − + e + + e − , are so unlikely that they are considered forbidden, and their observation would be considered evidence of new physics . A number of experiments are pursuing this path, such as Mu to E Gamma , COMET and Mu2e for μ + → e + + γ , and Mu3e for μ + → e + + e − + e + . Neutrinoless tau conversion in the form τ → 3μ has been searched for by the CMS experiment. [ 41 ] [ 42 ]
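Conversely, a lifetime limit like those quoted above can be turned into an upper bound on the effective Majorana mass by inverting the same rate formula. The phase-space factor and nuclear matrix element below are assumed, illustrative values for 136 Xe, so the resulting bound is only indicative.

```python
import math

# Sketch: converting a half-life limit into an upper bound on the effective
# Majorana mass by inverting 1/T = G0nu * |M0nu|**2 * (m_bb/m_e)**2.
# Phase-space factor and matrix element below are assumed, illustrative values.

M_E_EV = 0.511e6  # electron rest mass in eV

def m_bb_upper_bound_eV(T_limit_years, G0nu_per_year, NME):
    return M_E_EV / math.sqrt(T_limit_years * G0nu_per_year * NME**2)

# Using the EXO-200 limit quoted above (T > 3.5e25 yr) with assumed
# G0nu ~ 1.5e-14 /yr and NME ~ 3 for Xe-136:
print(f"m_bb < ~{m_bb_upper_bound_eV(3.5e25, 1.5e-14, 3.0):.2f} eV")
```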
https://en.wikipedia.org/wiki/Neutrinoless_double_beta_decay
The Neutrodyne radio receiver , invented in 1922 by Louis Hazeltine , was a particular type of tuned radio frequency (TRF) receiver, in which the instability-causing inter-electrode capacitance of the triode RF tubes is cancelled out or "neutralized" [ 1 ] [ 2 ] to prevent parasitic oscillations which caused "squealing" or "howling" noises in the speakers of early radio sets. In most designs, a small extra winding on each of the RF amplifiers ' tuned anode coils was used to generate a small antiphase signal, which could be adjusted by special variable trim capacitors to cancel out the stray signal coupled to the grid via plate-to-grid capacitance. The Neutrodyne circuit was popular in radio receivers until the 1930s, when it was superseded by the superheterodyne receiver. The circuit was developed about 1922 by Harold Wheeler who worked in Louis Hazeltine 's laboratory at Stevens Institute of Technology, so Hazeltine is usually given the credit. [ 3 ] The tuned radio frequency (TRF) receiver , one of the most popular radio receiver designs of the time, consisted of several tuned radio frequency (RF) amplifier stages, followed by a detector and several audio amplifier stages. A major defect of the TRF receiver was that, due to the high interelectrode capacitance of early triode vacuum tubes , feedback within the RF amplifier stages gave them a tendency to oscillate , creating unwanted radio frequency alternating currents. These parasitic oscillations mixed with the carrier wave in the detector, creating heterodynes (beat notes) in the audio frequency range, which were heard as annoying whistles and howls from the speaker. Hazeltine's innovation was to add a circuit to each radio frequency amplifier stage which fed back a small amount of energy from the plate (output) circuit to the grid (input) circuit with opposite phase to cancel ("neutralize") the feedback which was causing the oscillation. This effectively prevented the high-pitched squeals that had plagued early radio sets. A group of more than 20 companies known as the Independent Radio Manufacturers Association licensed the circuit from Hazeltine and manufactured "Neutrodyne" receivers throughout the 1920s. [ 3 ] At the time, RCA held a virtual monopoly over commercial radio receiver production due to its ownership of the rights to the Armstrong regenerative and superheterodyne circuits. [ 3 ] The Neutrodyne ended this control, allowing competition. Compared to the technically superior superheterodyne the Neutrodyne was cheaper to build. As basically a TRF receiver, it was also considered easier for non-technical owners to use than the early superhets. After manufacture each tuned amplifier stage had to be neutralized , adjusted to cancel feedback; after this the set would not produce the parasitic oscillations which caused the objectionable noises. By 1927 some ten million of these receivers had been sold to consumers in North America . By the 1930s, advances in vacuum tube manufacturing had yielded the tetrode , which had reduced control grid to plate ( Miller ) capacitance. These advances made it possible to build TRF receivers that did not need neutralization, but also made Edwin Armstrong 's superheterodyne design practical for domestic receivers. So the TRF circuit, including the Neutrodyne, became obsolete in radio receivers and was superseded by the superheterodyne design. 
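The cancellation described above can be viewed as a bridge balance: the current fed back through the plate-to-grid capacitance C gp is opposed by the current through the neutralizing capacitor C n , which is driven by the auxiliary winding in antiphase. If that winding produces −k times the plate voltage, balance requires roughly C n = C gp / k. The sketch below applies this relation with assumed, illustrative component values; it is not a design taken from the article.

```python
# Sketch of the neutralization balance condition (one common way to state it):
# the feedback current through the plate-grid capacitance C_gp is cancelled by
# an antiphase current through the neutralizing capacitor C_n, fed from an
# auxiliary winding producing -k times the plate voltage.
# Balance: C_n * k = C_gp, i.e. C_n = C_gp / k.
# The numbers below are illustrative assumptions, not values from the article.

def neutralizing_capacitance(c_gp_pF, winding_ratio_k):
    return c_gp_pF / winding_ratio_k

# e.g. a triode with ~8 pF plate-grid capacitance and a 1:1 auxiliary winding
print(neutralizing_capacitance(c_gp_pF=8.0, winding_ratio_k=1.0))  # 8.0 pF
```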
The Neutrodyne neutralization technique continues to be used in other applications to suppress parasitic oscillation , such as in RF power amplifiers in radio transmitters .
https://en.wikipedia.org/wiki/Neutrodyne
Neutron-induced swelling is the increase of volume and decrease of density of materials subjected to intense neutron radiation . Neutrons impacting the material's lattice rearrange its atoms, causing buildup of dislocations , voids , and Wigner energy . Together with the resulting strength reduction and embrittlement , it is a major concern for materials for nuclear reactors . [ 1 ] Materials show significant differences in their swelling resistance. This nuclear technology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Neutron-induced_swelling
A neutron-velocity selector is a device that allows neutrons of a defined velocity to pass while absorbing all other neutrons, producing a monochromatic neutron beam. [ 1 ] It has the appearance of a many-bladed turbine. The blades are coated with a strongly neutron-absorbing material, such as boron-10 . Neutron-velocity selectors are commonly used at neutron research facilities to produce monochromatic neutron beams. Because the materials and motors available limit the maximum rotation speed of the blades, these devices are only useful for relatively slow neutrons .
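For a helical-channel selector, the transmitted speed follows from requiring that a neutron traverse the rotor length L in the time the blades turn through their twist angle φ, giving v ≈ ωL/φ. The rotor dimensions and speed below are assumed, illustrative values, not specifications from the article.

```python
import math

# Sketch of the selection rule for a helical-blade velocity selector:
# a neutron is transmitted if it flies the rotor length L in the time the
# rotor turns through the blade twist angle phi, so v ~ omega * L / phi.
# The numbers are illustrative assumptions.

def selected_speed(rpm, length_m, twist_deg):
    omega = 2 * math.pi * rpm / 60.0          # rotor angular speed, rad/s
    phi = math.radians(twist_deg)             # blade twist angle, rad
    return omega * length_m / phi             # transmitted neutron speed, m/s

# e.g. a 0.25 m rotor with 48 degrees of twist spinning at 20,000 rpm
print(f"{selected_speed(20000, 0.25, 48.0):.0f} m/s")
```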
https://en.wikipedia.org/wiki/Neutron-velocity_selector
The Neutron Star Interior Composition ExploreR ( NICER ) is a NASA telescope on the International Space Station , designed and dedicated to the study of the extraordinary gravitational, electromagnetic, and nuclear physics environments embodied by neutron stars , exploring the exotic states of matter where density and pressure are higher than in atomic nuclei . As part of NASA's Explorer program , NICER enabled rotation-resolved spectroscopy of the thermal and non-thermal emissions of neutron stars in the soft X-ray (0.2–12 keV ) band with unprecedented sensitivity, probing interior structure, the origins of dynamic phenomena, and the mechanisms that underlie the most powerful cosmic particle accelerators known. [ 3 ] NICER achieved these goals by deploying, following the launch, and activation of X-ray timing and spectroscopy instruments. NICER was selected by NASA to proceed to formulation phase in April 2013. [ 4 ] NICER-SEXTANT uses the same instrument to test X-ray timing for positioning and navigation, [ 5 ] and MXS is a test of X-ray timing communication. [ 6 ] In January 2018, X-ray navigation was demonstrated using NICER on ISS. [ 7 ] In May 2023, NICER's thermal shields developed a leak that allowed stray light to enter the telescope. A repair kit containing specialized patches was delivered to the station by the Cygnus NG-21 resupply mission in August 2024, [ 8 ] and were applied by the stranded astronauts in a January 16, 2025 spacewalk . [ 9 ] By May 2015, NICER was on track for a 2016 launch, having passed its critical design review (CDR) and resolved an issue with the power being supplied by the ISS. [ 10 ] Following the loss of SpaceX CRS-7 in June 2015, which delayed future missions by several months, NICER was finally launched on 3 June 2017, [ 2 ] with the SpaceX CRS-11 ISS resupply mission aboard a Falcon 9 v1.2 launch vehicle. [ 11 ] NICER's primary science instrument, called the X-ray Timing Instrument (XTI), is an array of 56 X-ray photon detectors. These detectors record the energies of the collected photons as well as with their time of arrival. A Global Positioning System (GPS) receiver enables accurate timing and positioning measurements. X-ray photons can be time-tagged with a precision of less than 300 ns . [ 12 ] In August 2022 a fast X-ray follow-up observation program was started with the MAXI instrument named "OHMAN (On-orbit Hookup of MAXI and NICER)" to detect sudden bursts in X-ray phenomena. [ 13 ] During each ISS orbit, NICER will observe two to four targets. Gimbaling and a star tracker allow NICER to track specific targets while collecting science data. In order to achieve its science objectives, NICER will take over 15 million seconds of exposures over an 18-month period. [ 14 ] An enhancement to the NICER mission, the Station Explorer for X-ray Timing and Navigation Technology (SEXTANT), will act as a technology demonstrator for X-ray pulsar-based navigation (XNAV) techniques that may one day be used for deep-space navigation. [ 15 ] As part of NICER testing, a rapid-modulation X-ray device was developed called Modulated X-ray Source (MXS), which is being used to create an X-ray communication system (XCOM) demonstration. If approved and installed on the ISS, XCOM will transmit data encoded into X-ray bursts to the NICER platform, which may lead to the development of technologies that allow for gigabit bandwidth communication throughout the Solar System. [ 6 ] As of February 2019 [update] the XCOM test is scheduled for spring 2019. 
[ 16 ] XCOM (inc MXS) was delivered to the ISS in May 2019. [ 17 ] Once the test was complete XCOM and the STP-H6 payload malfunctioned in September 2021. It was removed in November 2021 and disposed of on Cygnus NG-16 . [ 18 ] In May 2018, NICER discovered an X-ray pulsar in the fastest stellar orbit yet discovered. [ 19 ] The pulsar and its companion star were found to orbit each other every 38 minutes. [ 19 ] On 21 August 2019 (UTC; 20 August in the U.S.), NICER spotted the brightest X-ray burst so far observed. [ 20 ] It came from the neutron star SAX J1808.4−3658 about 11,000 light-years from Earth in the constellation Sagittarius . Astronomers using NICER found evidence that a neutron star from a low-mass X-ray binary in NGC 6624 is spinning at 716 Hz (times per second), or 42,960 revolutions per minute , the same velocity as the fastest known spinning neutron star PSR J1748−2446ad and the only one in such a binary system. [ 21 ] [ 22 ]
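A minimal sketch of what precise photon time-tagging enables is epoch folding: photon arrival times are reduced to pulse phase at an assumed spin frequency, and a pulse profile is accumulated. This is a generic illustration of X-ray timing, not NICER's actual analysis pipeline; the 716 Hz frequency and the toy photon list are assumptions made for the example.

```python
import random

def fold_profile(arrival_times_s, spin_freq_hz, n_bins=10):
    """Histogram photon arrival times by pulse phase at the given spin frequency."""
    profile = [0] * n_bins
    for t in arrival_times_s:
        phase = (t * spin_freq_hz) % 1.0          # fractional spin phase in [0, 1)
        profile[int(phase * n_bins)] += 1
    return profile

random.seed(0)
freq = 716.0  # assumed spin frequency, cf. the 716 Hz pulsar mentioned above
# Toy data: 800 uniformly distributed background photons plus 200 pulsed
# photons clustered near spin phase 0.3, all within the first second.
times = [random.uniform(0.0, 1.0) for _ in range(800)]
times += [(random.randrange(int(freq)) + 0.3 + random.gauss(0.0, 0.02)) / freq
          for _ in range(200)]
print(fold_profile(times, freq))   # one phase bin stands out above the background
```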
https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer
Neutron activation is the process in which neutron radiation induces radioactivity in materials, and occurs when atomic nuclei capture free neutrons , becoming heavier and entering excited states . The excited nucleus decays immediately by emitting gamma rays , or particles such as beta particles , alpha particles , fission products , and neutrons (in nuclear fission ). Thus, the process of neutron capture , even after any intermediate decay, often results in the formation of an unstable activation product . Such radioactive nuclei can exhibit half-lives ranging from small fractions of a second to many years. Neutron activation is the only common way that a stable material can be induced into becoming intrinsically radioactive. All naturally occurring materials, including air, water, and soil, can be induced (activated) by neutron capture into some amount of radioactivity in varying degrees, as a result of the production of neutron-rich radioisotopes . [ citation needed ] Some atoms require more than one neutron to become unstable, which makes them harder to activate because the probability of a double or triple capture by a nucleus is below that of single capture. Water, for example, is made up of hydrogen and oxygen. Hydrogen requires a double capture to attain instability as tritium ( hydrogen-3 ), while natural oxygen ( oxygen-16 ) requires three captures to become unstable oxygen-19 . Thus water is relatively difficult to activate, as compared to sodium chloride ( Na Cl ), in which both the sodium and chlorine atoms become unstable with a single capture each. These facts were experienced at the Operation Crossroads atomic test series in 1946. An example of this kind of a nuclear reaction occurs in the production of cobalt-60 within a nuclear reactor : The cobalt-60 then decays by the emission of a beta particle plus gamma rays into nickel -60. This reaction has a half-life of about 5.27 years, and due to the availability of cobalt-59 (100% of its natural abundance ), this neutron bombarded isotope of cobalt is a valuable source of nuclear radiation (namely gamma radiation) for radiotherapy . [ 1 ] In other cases, and depending on the kinetic energy of the neutron, the capture of a neutron can cause nuclear fission —the splitting of the atomic nucleus into two smaller nuclei. If the fission requires an input of energy, that comes from the kinetic energy of the neutron. An example of this kind of fission in a light element can occur when the stable isotope of lithium , lithium-7 , is bombarded with fast neutrons and undergoes the following nuclear reaction: In other words, the capture of a neutron by lithium-7 causes it to split into an energetic helium nucleus ( alpha particle ), a hydrogen-3 ( tritium ) nucleus and a free neutron. The Castle Bravo accident, in which the thermonuclear bomb test at Bikini Atoll in 1954 exploded with 2.5 times the expected yield, was caused by the unexpectedly high probability of this reaction. In the area around a pressurized water reactor or boiling water reactor during normal operation, a significant amount of radiation is produced due to the fast neutron activation of coolant water oxygen via a (n,p) reaction . The activated oxygen-16 nucleus emits a proton (hydrogen nucleus), and transmutes to nitrogen-16, which has a very short life (7.13 seconds) before decaying back to oxygen-16 (emitting 10.4 MeV beta particles and 6.13 MeV gamma radiations). 
[ 2 ] This activation of the coolant water requires extra biological shielding around the nuclear reactor plant. It is the high energy gamma ray in the second reaction that causes the major concern. This is why water that has recently been inside a nuclear reactor core must be shielded until this radiation subsides. One to two minutes is generally sufficient. In facilities that housed a cyclotron, the reinforced concrete foundation can become radioactive due to neutron activation. Six important long-lived radioactive isotopes ( 54 Mn , 55 Fe , 60 Co , 65 Zn , 133 Ba , and 152 Eu ) can be found within concrete nuclei affected by neutrons. [ 3 ] The residual radioactivity is predominantly due to trace elements present, and thus the amount of radioactivity derived from cyclotron activation is minuscule, i.e., pCi/g or Bq/g . The release limit for facilities with residual radioactivity is 25 mrem/year. [ 4 ] An example of 55 Fe production from the activation of iron in reinforcement bars found in concrete is shown below: Neutron activation is the only common way that a stable material can be induced into becoming intrinsically radioactive. Activation is inherently different than contamination. Neutrons are only free in quantity in the microseconds of a nuclear weapon's explosion, in an active nuclear reactor, or in a spallation neutron source. In an atomic weapon, neutrons are generated for only between 1 and 50 microseconds, but in huge numbers. Most are absorbed by the metallic bomb casing, which is only just starting to be affected by the explosion within it. The neutron activation of the soon-to-be vaporized metal is responsible for a significant portion of the nuclear fallout in nuclear bursts high in the atmosphere. In other types of activation, neutrons may irradiate soil that is dispersed in a mushroom cloud at or near the Earth's surface, resulting in fallout from activation of soil chemical elements. In any location with high neutron fluxes , such as within the cores of nuclear reactors, neutron activation contributes to material erosion and periodically the lining materials themselves must be disposed of, as low-level radioactive waste . Some materials are more subject to neutron activation than others, so a suitably chosen low-activation material can significantly reduce this problem (see International Fusion Materials Irradiation Facility ). For example, Chromium-51 will form by neutron activation in chrome steel (which contains Cr-50) that is exposed to a typical reactor neutron flux. [ 5 ] Carbon-14 , most frequently but not solely, generated by the neutron activation of atmospheric nitrogen-14 with a thermal neutron , is (together with its dominant natural production pathway from cosmic ray-air interactions and historical production from atmospheric nuclear testing ) also generated in comparatively minute amounts inside many designs of nuclear reactors which contain nitrogen gas impurities in their fuel cladding , coolant water and by neutron activation of the oxygen contained in the water itself. Fast breeder reactors (FBR) produce about an order of magnitude less C-14 than the most common reactor type, the pressurized water reactor , as FBRs do not use water as a primary coolant. [ 6 ] For physicians and radiation safety officers, activation of sodium in the human body to sodium-24, and phosphorus to phosphorus-32, can give a good immediate estimate of acute accidental neutron exposure. 
[ 7 ] One way to demonstrate that nuclear fusion has occurred inside a fusor device is to use a Geiger counter to measure the gamma ray radioactivity that is produced from a sheet of aluminium foil . In the ICF fusion approach, the fusion yield of the experiment (directly proportional to neutron production) is usually determined by measuring the gamma-ray emissions of aluminium or copper neutron activation targets. [ 8 ] Aluminium can capture a neutron and generate radioactive sodium-24 , which has a half life of 15 hours [ 9 ] [ 10 ] and a beta decay energy of 5.514 MeV. [ 11 ] The activation of a number of test target elements such as sulfur , copper, tantalum , and gold have been used to determine the yield of both pure fission [ 12 ] [ 13 ] and thermonuclear weapons . [ 14 ] Neutron activation analysis is one of the most sensitive and precise methods of trace element analysis. It requires no sample preparation or solubilization and can therefore be applied to objects that need to be kept intact such as a valuable piece of art. Although the activation induces radioactivity in the object, its level is typically low and its lifetime may be short, so that its effects soon disappear. In this sense, neutron activation is a non-destructive analysis method. Neutron activation analysis can be done in situ. For example, aluminium (Al-27) can be activated by capturing relatively low-energy neutrons to produce the isotope Al-28 , which decays with a half-life of 2.3 minutes with a decay energy of 4.642 MeV. [ 15 ] This activated isotope is used in oil drilling to determine the clay content (clay is generally an alumino-silicate ) of the underground area under exploration. [ 16 ] Historians can use neutron activation products to authenticate atomic artifacts and materials subjected to neutron fluxes from fission incidents. For example, one of the rare isotopes found in trinitite is barium-133 , an activation product formed from the Baratol used in the slow explosive lens employed in the Trinity device . This barium isotope can be used to authenticate trinitite samples, with its absence indicating a fraudulent sample. [ 17 ] Neutron irradiation may be used for float-zone silicon slices ( wafers ) to trigger fractional transmutation of Si atoms into phosphorus (P) and therefore doping it into n-type silicon [ 18 ] : 366
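All of these applications are governed by the same activation-and-decay relation: the activity induced in N target atoms exposed to a flux φ for a time t_irr, then left to cool for t_cool, is A = N σ φ (1 − e^(−λ t_irr)) e^(−λ t_cool). The sketch below applies it to the cobalt-60 example from earlier in this article; the capture cross-section and the neutron flux are assumed, illustrative values.

```python
import math

BARN = 1e-24                                   # cm^2
HALF_LIFE_CO60_S = 5.27 * 365.25 * 24 * 3600   # 5.27 years, as quoted above
DECAY_CONST = math.log(2) / HALF_LIFE_CO60_S

def induced_activity_bq(n_atoms, sigma_barn, flux, t_irr_s, t_cool_s=0.0):
    """Activity (Bq) of Co-60 built up in n_atoms of Co-59 irradiated in a
    thermal-neutron flux (n cm^-2 s^-1) for t_irr_s, then cooled for t_cool_s."""
    a_end = n_atoms * sigma_barn * BARN * flux * (1 - math.exp(-DECAY_CONST * t_irr_s))
    return a_end * math.exp(-DECAY_CONST * t_cool_s)

# 1 g of cobalt-59 (~1.0e22 atoms), assumed capture cross-section ~37 b and
# assumed reactor flux ~1e13 n cm^-2 s^-1, irradiated for one year:
print(f"{induced_activity_bq(1.0e22, 37.0, 1e13, 3.156e7):.2e} Bq")
```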
https://en.wikipedia.org/wiki/Neutron_activation
Neutron activation analysis ( NAA ) is a nuclear process used for determining the concentrations of elements in many materials. NAA allows discrete sampling of elements as it disregards the chemical form of a sample, and focuses solely on atomic nuclei. The method is based on neutron activation and thus requires a neutron source . The sample is bombarded with neutrons , causing its constituent elements to form radioactive isotopes. The radioactive emissions and radioactive decay paths for each element have long been studied and determined. Using this information, it is possible to study spectra of the emissions of the radioactive sample, and determine the concentrations of the various elements within it. A particular advantage of this technique is that it does not destroy the sample, and thus has been used for the analysis of works of art and historical artifacts. NAA can also be used to determine the activity of a radioactive sample. If NAA is conducted directly on irradiated samples it is termed instrumental neutron activation analysis ( INAA ). In some cases, irradiated samples are subjected to chemical separation to remove interfering species or to concentrate the radioisotope of interest; this technique is known as radiochemical neutron activation analysis ( RNAA ). NAA can perform non-destructive analyses on solids, liquids, suspensions, slurries, and gases with no or minimal preparation. Due to the penetrating nature of incident neutrons and resultant gamma rays, the technique provides a true bulk analysis. As different radioisotopes have different half-lives, counting can be delayed to allow interfering species to decay eliminating interference. Until the introduction of ICP-AES and PIXE , NAA was the standard analytical method for performing multi-element analyses with minimum detection limits in the sub- ppm range. [ 1 ] Accuracy of NAA is in the region of 5%, and relative precision is often better than 0.1%. [ 1 ] There are two noteworthy drawbacks to the use of NAA; even though the technique is essentially non-destructive, the irradiated sample will remain radioactive for many years after the initial analysis, requiring handling and disposal protocols for low-level to medium-level radioactive material; also, the number of suitable activation nuclear reactors is declining; with a lack of irradiation facilities, the technique has declined in popularity and become more expensive. Neutron activation analysis is a sensitive multi- element analytical technique used for both qualitative and quantitative analysis of major, minor, trace and rare elements. NAA was discovered in 1936 by Hevesy and Levi, who found that samples containing certain rare-earth elements became highly radioactive after exposure to a source of neutrons. [ 2 ] This observation led to the use of induced radioactivity for the identification of elements. NAA is significantly different from other spectroscopic analytical techniques in that it is based not on electronic transitions but on nuclear transitions. To carry out an NAA analysis, the specimen is placed into a suitable irradiation facility and bombarded with neutrons. This creates artificial radioisotopes of the elements present. Following irradiation, the artificial radioisotopes decay with emission of particles or, more importantly gamma rays , which are characteristic of the element from which they were emitted. For the NAA procedure to be successful, the specimen or sample must be selected carefully. 
In many cases small objects can be irradiated and analysed intact without the need for sampling. But, more commonly, a small sample is taken, usually by drilling in an inconspicuous place. About 50 mg (one-twentieth of a gram ) is a sufficient sample, so damage to the object is minimised. [ 3 ] It is often good practice to remove two samples using two different drill bits made of different materials. This will reveal any contamination of the sample from the drill bit material itself. The sample is then encapsulated in a vial made of either high purity linear polyethylene or quartz . [ 4 ] These sample vials come in many shapes and sizes to accommodate many specimen types. The sample and a standard are then packaged and irradiated in a suitable reactor at a constant, known neutron flux . A typical reactor used for activation uses uranium fission , providing a high neutron flux and the highest available sensitivities for most elements. The neutron flux from such a reactor is on the order of 10 12 neutrons cm −2 s −1 . [ 1 ] The neutrons generated are of relatively low kinetic energy (KE), typically less than 0.5 eV , and are termed thermal neutrons. Upon irradiation, a thermal neutron interacts with the target nucleus via a non-elastic collision, causing neutron capture. This collision forms a compound nucleus which is in an excited state. The excitation energy within the compound nucleus is formed from the binding energy of the thermal neutron with the target nucleus. This excited state is unfavourable and the compound nucleus will almost instantaneously de-excite (transmute) into a more stable configuration through the emission of a prompt particle and one or more characteristic prompt gamma photons. In most cases, this more stable configuration yields a radioactive nucleus. The newly formed radioactive nucleus now decays by the emission of both particles and one or more characteristic delayed gamma photons. This decay process occurs at a much slower rate than the initial de-excitation and is dependent on the unique half-life of the radioactive nucleus. These unique half-lives are dependent upon the particular radioactive species and can range from fractions of a second to several years. Once irradiated, the sample is left for a specific decay period, then placed into a detector, which will measure the nuclear decay according to either the emitted particles, or more commonly, the emitted gamma rays. [ 1 ] NAA can vary according to a number of experimental parameters. The kinetic energy of the neutrons used for irradiation is a major experimental parameter. The above description is of activation by slow neutrons; slow neutrons are fully moderated within the reactor and have KE < 0.5 eV. Medium-KE neutrons may also be used for activation; these neutrons have been only partially moderated, have KE of 0.5 eV to 0.5 MeV, and are termed epithermal neutrons. Activation with epithermal neutrons is known as Epithermal NAA (ENAA). High-KE neutrons are sometimes used for activation; these neutrons are unmoderated and consist of primary fission neutrons. High-KE or fast neutrons have a KE > 0.5 MeV, and activation with fast neutrons is termed Fast NAA (FNAA). Another major experimental parameter is whether nuclear decay products (gamma rays or particles) are measured during neutron irradiation ( prompt gamma , PGNAA), or at some time after irradiation (delayed gamma, DGNAA). PGNAA is generally performed by using a neutron stream tapped off the nuclear reactor via a beam port.
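The choice of decay ("cooling") period mentioned above can also be sketched quantitatively: given two co-activated species, one can solve for the waiting time after which a short-lived interfering activity has dropped below a chosen fraction of the analyte's activity. The half-lives, initial activities and target ratio below are illustrative assumptions, not values from the text.

```python
import math

def wait_time_for_ratio(hl_interferent_s, hl_analyte_s, a0_interferent, a0_analyte, target_ratio):
    """
    Return the cooling time after which
        A_interferent(t) / A_analyte(t) <= target_ratio,
    assuming simple exponential decay of both activities.
    Valid when the interferent decays faster than the analyte.
    """
    lam_i = math.log(2) / hl_interferent_s
    lam_a = math.log(2) / hl_analyte_s
    # Solve a0_i * exp(-lam_i * t) = target_ratio * a0_a * exp(-lam_a * t) for t.
    return math.log(a0_interferent / (a0_analyte * target_ratio)) / (lam_i - lam_a)

# Illustrative numbers only: a 2.3-minute interferent initially 100x stronger
# than a 15-hour analyte; wait until it contributes <1% of the analyte signal.
t = wait_time_for_ratio(2.3 * 60, 15 * 3600, 100.0, 1.0, 0.01)
print(f"Suggested decay period before counting: {t / 60:.1f} minutes")
```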
Neutron fluxes from beam ports are on the order of 10 6 times weaker than inside a reactor. This is somewhat compensated for by placing the detector very close to the sample, reducing the loss in sensitivity due to the low flux. PGNAA is generally applied to elements with extremely high neutron capture cross-sections ; elements which decay too rapidly to be measured by DGNAA; elements that produce only stable isotopes ; or elements with weak decay gamma ray intensities. PGNAA is characterised by short irradiation times and short decay times, often in the order of seconds and minutes. DGNAA is applicable to the vast majority of elements that form artificial radioisotopes. DG analyses are often performed over days, weeks or even months. This improves sensitivity for long-lived radionuclides as it allows short-lived radionuclides to decay, effectively eliminating interference. DGNAA is characterised by long irradiation times and long decay times, often in the order of hours, weeks or longer. A range of different sources can be used: Some reactors are used for the neutron irradiation of samples for radioisotope production for a range of purposes. The sample can be placed in an irradiation container which is then placed in the reactor; if epithermal neutrons are required for the irradiation then cadmium can be used to filter out the thermal neutrons. A relatively simple Farnsworth–Hirsch fusor can be used to generate neutrons for NAA experiments. The advantages of this kind of apparatus are that it is compact, often benchtop-sized, and that it can simply be turned off and on. A disadvantage is that this type of source will not produce the neutron flux that can be obtained using a reactor. For many workers in the field, a reactor is too expensive; instead, it is common to use a neutron source which uses a combination of an alpha emitter and beryllium. These sources tend to be much weaker than reactors. Sources that create pulses of neutrons have also been used for some activation work where the decay of the target isotope is very rapid, for instance in oil wells. [ 5 ] There are a number of detector types and configurations used in NAA. Most are designed to detect the emitted gamma radiation . The most common types of gamma detectors encountered in NAA are the gas ionisation type, scintillation type and the semiconductor type. Of these, the scintillation and semiconductor types are the most widely employed. Two detector configurations are utilised: the planar detector, used for PGNAA, and the well detector, used for DGNAA. The planar detector has a flat, large collection surface area and can be placed close to the sample. The well detector ‘surrounds’ the sample with a large collection surface area. Scintillation-type detectors use a radiation-sensitive crystal, most commonly thallium-doped sodium iodide (NaI(Tl)), which emits light when struck by gamma photons. These detectors have excellent sensitivity and stability, and a reasonable resolution. Semiconductor detectors utilise the semiconducting element germanium . The germanium is processed to form a p-i-n (positive-intrinsic-negative) diode , and when cooled to ~77 K by liquid nitrogen to reduce dark current and detector noise, produces a signal which is proportional to the photon energy of the incoming radiation. There are two types of germanium detector, the lithium-drifted germanium or Ge(Li) (pronounced ‘jelly’), and the high-purity germanium or HPGe.
The semiconducting element silicon may also be used, but germanium is preferred, as its higher atomic number makes it more efficient at stopping and detecting high energy gamma rays. Both Ge(Li) and HPGe detectors have excellent sensitivity and resolution, but Ge(Li) detectors are unstable at room temperature, with the lithium drifting into the intrinsic region, ruining the detector. The development of undrifted high purity germanium has overcome this problem. Particle detectors can also be used to detect the emission of alpha (α) and beta (β) particles, which often accompany the emission of a gamma photon, but they are less favourable, as these particles are only emitted from the surface of the sample and are often absorbed or attenuated by atmospheric gases, requiring expensive vacuum conditions to be detected effectively. Gamma rays, however, are not absorbed or attenuated by atmospheric gases, and can also escape from deep within the sample with minimal absorption. NAA can detect up to 74 elements depending upon the experimental procedure, with minimum detection limits ranging from 0.1 to 1×10 6 ng g −1 , depending on the element under investigation. Heavier elements have larger nuclei and therefore a larger neutron capture cross-section, and are more likely to be activated. Some nuclei can capture a number of neutrons and remain relatively stable, not undergoing transmutation or decay for many months or even years. Other nuclei decay instantaneously or form only stable isotopes and can only be identified by PGNAA. Neutron activation analysis has a wide variety of applications, including within the fields of archaeology , soil science , geology , forensics , and the semiconductor industry . Forensically, detailed neutron activation analysis of hairs, to determine whether they came from the same individual, was first used in the trial of John Norman Collins . [ 6 ] Archaeologists use NAA in order to determine the elements that comprise certain artifacts. This technique is used because it is nondestructive and it can relate an artifact to its source by its chemical signature. This method has proven to be very successful at determining trade routes, particularly for obsidian, because NAA can distinguish between the chemical compositions of different sources. In agriculture, the movement of fertilizers and pesticides is influenced by surface and subsurface flow as they infiltrate water supplies. In order to track the distribution of the fertilizers and pesticides, bromide ions in various forms are used as tracers that move freely with the flow of water while having minimal interaction with the soil. Neutron activation analysis is used to measure bromide so that extraction is not necessary for analysis. NAA is used in geology to aid in researching the processes that formed the rocks through the analysis of the rare-earth elements and trace elements. It also assists in locating ore deposits and tracking certain elements. Neutron activation analysis is also used to create standards in the semiconductor industry. Semiconductors require a high level of purity, with contamination significantly reducing the quality of the semiconductor. NAA is used to detect trace impurities and establish contamination standards, because it involves limited sample handling and offers high sensitivity. [ 7 ]
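As a small illustration of how delayed-gamma spectra are turned into element identifications, the sketch below matches measured peak energies against a tiny library of well-known activation-product gamma lines. The library entries are standard literature energies, but the library itself, the tolerance, and the "measured" peaks are illustrative; a real analysis would use a full nuclide database and proper peak fitting.

```python
# Characteristic delayed-gamma lines (keV) of a few common activation products.
GAMMA_LIBRARY = {
    "Na-24": [1368.6, 2754.0],
    "Al-28": [1778.9],
    "Au-198": [411.8],
    "Mn-56": [846.8],
}

def identify_peaks(measured_kev, tolerance_kev=2.0):
    """Return {peak_energy: [candidate nuclides]} for each measured peak."""
    matches = {}
    for peak in measured_kev:
        candidates = [
            nuclide
            for nuclide, lines in GAMMA_LIBRARY.items()
            if any(abs(peak - line) <= tolerance_kev for line in lines)
        ]
        matches[peak] = candidates
    return matches

# Illustrative peak energies picked off a spectrum:
print(identify_peaks([412.1, 846.5, 1368.2, 1779.5]))
```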
https://en.wikipedia.org/wiki/Neutron_activation_analysis
Neutron backscattering is one of several inelastic neutron scattering techniques. Backscattering from monochromator and analyzer crystals is used to achieve an energy resolution on the order of a microelectronvolt (μeV). Neutron backscattering experiments are performed to study atomic or molecular motion on a nanosecond time scale. Neutron backscattering was proposed by Heinz Maier-Leibnitz in 1966, [ 1 ] and realized by some of his students in a test setup at the research reactor FRM I in Garching bei München , Germany. [ 2 ] Following this successful demonstration of principle, permanent spectrometers were built at Forschungszentrum Jülich and at the Institut Laue-Langevin (ILL). Later instruments brought an extension of the accessible momentum transfer range (IN13 at ILL), the introduction of focussing optics (IN16 at ILL), and a further increase of intensity by a compact design with a phase-space transform chopper (HFBS at NIST , SPHERES at FRM II , IN16B at the Institut Laue-Langevin ). Operational backscattering spectrometers at reactors include IN10, IN13, and IN16B at the Institut Laue-Langevin , the High Flux Backscattering Spectrometer (HFBS) at the NIST Center for Neutron Research, [ 3 ] the SPHERES instrument of Forschungszentrum Jülich at FRM II [ 4 ] and EMU at ANSTO . Inverse geometry spectrometers at spallation sources include IRIS and OSIRIS at the ISIS neutron source at Rutherford-Appleton, BASIS at the Spallation Neutron Source , and MARS at the Paul Scherrer Institute . Historic instruments include the first backscattering spectrometer, a temporary setup at FRM I, and the backscattering spectrometer BSS (also called PI) at the DIDO reactor of the Forschungszentrum Jülich (decommissioned). [ 5 ]
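The connection stated above between μeV energy resolution and nanosecond dynamics can be sketched with the energy-time uncertainty relation τ ≈ ħ/ΔE; this is only an order-of-magnitude estimate, not the resolution function of any particular spectrometer.

```python
# Rough uncertainty-principle estimate of the slowest motions resolvable by a
# spectrometer with a given energy resolution: tau ~ hbar / delta_E.
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

def accessible_timescale_s(energy_resolution_ev: float) -> float:
    return HBAR_EV_S / energy_resolution_ev

# ~1 micro-eV resolution (typical for backscattering) maps onto ~nanosecond dynamics.
print(f"{accessible_timescale_s(1e-6) * 1e9:.2f} ns")
```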
https://en.wikipedia.org/wiki/Neutron_backscattering
Neutron capture is a nuclear reaction in which an atomic nucleus and one or more neutrons collide and merge to form a heavier nucleus. [ 1 ] Since neutrons have no electric charge, they can enter a nucleus more easily than positively charged protons , which are repelled electrostatically . [ 1 ] Neutron capture plays a significant role in the cosmic nucleosynthesis of heavy elements. In stars it can proceed in two ways: as a rapid process ( r-process ) or a slow process ( s-process ). [ 1 ] Nuclei of masses greater than 56 cannot be formed by exothermic thermonuclear reactions (i.e., by nuclear fusion ) but can be formed by neutron capture. [ 1 ] Neutron capture on protons yields a line at 2.223 MeV predicted [ 2 ] and commonly observed [ 3 ] in solar flares . At small neutron flux , as in a nuclear reactor , a single neutron is captured by a nucleus. For example, when natural gold ( 197 Au) is irradiated by neutrons (n), the isotope 198 Au is formed in a highly excited state, and quickly decays to the ground state of 198 Au by the emission of gamma rays (𝛾). In this process, the mass number increases by one. This is written as a formula in the form 197 Au + n → 198 Au + γ , or in short form 197 Au(n,γ) 198 Au . If thermal neutrons are used, the process is called thermal capture. The isotope 198 Au is a beta emitter that decays into the mercury isotope 198 Hg. In this process, the atomic number rises by one. The r-process happens inside stars if the neutron flux density is so high that the atomic nucleus has no time to decay via beta emission between neutron captures. The mass number therefore rises by a large amount while the atomic number (i.e., the element) stays the same. When further neutron capture is no longer possible, the highly unstable nuclei decay via many β − decays to beta-stable isotopes of higher-numbered elements. The absorption neutron cross section of an isotope of a chemical element is the effective cross-sectional area that an atom of that isotope presents to absorption and is a measure of the probability of neutron capture. It is usually measured in barns . Absorption cross section is often highly dependent on neutron energy . In general, the likelihood of absorption is proportional to the time the neutron is in the vicinity of the nucleus. The time spent in the vicinity of the nucleus is inversely proportional to the relative velocity between the neutron and nucleus. Other more specific issues modify this general principle. Two of the most specified measures are the cross section for thermal neutron absorption and the resonance integral, which considers the contribution of absorption peaks at certain neutron energies specific to a particular nuclide , usually above the thermal range, but encountered as neutron moderation slows the neutron from an original high energy. The thermal energy of the nucleus also has an effect; as temperatures rise, Doppler broadening increases the chance of catching a resonance peak. In particular, the increase in uranium-238 's ability to absorb neutrons at higher temperatures (and to do so without fissioning) is a negative feedback mechanism that helps keep nuclear reactors under control. Neutron capture is involved in the formation of isotopes of chemical elements. The energy of neutron capture thus intervenes [ clarification needed ] in the standard enthalpy of formation of isotopes. Neutron activation analysis can be used to remotely detect the chemical composition of materials. 
This is because different elements release different characteristic radiation when they absorb neutrons. This makes it useful in many fields related to mineral exploration and security. In engineering, the most important neutron absorber is 10 B , used as boron carbide in nuclear reactor control rods or as boric acid as a coolant water additive in pressurized water reactors . Other neutron absorbers used in nuclear reactors are xenon , cadmium , hafnium , gadolinium , cobalt , samarium , titanium , dysprosium , erbium , europium , molybdenum and ytterbium . [ 4 ] All of these occur in nature as mixtures of various isotopes, some of which are excellent neutron absorbers. They may occur in compounds such as molybdenum boride, hafnium diboride , titanium diboride , dysprosium titanate and gadolinium titanate . Hafnium absorbs neutrons avidly and it can be used in reactor control rods . However, it is found in the same ores as zirconium , which shares the same outer electron shell configuration and thus has similar chemical properties. Their nuclear properties are profoundly different: hafnium absorbs neutrons 600 times better than zirconium. The latter, being essentially transparent to neutrons, is prized for internal reactor parts, including the metallic cladding of the fuel rods . To use these elements in their respective applications, the zirconium must be separated from the naturally co-occurring hafnium. This can be accomplished economically with ion-exchange resins . [ 5 ]
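The contrast between hafnium and zirconium described above can be sketched with the simple slab-transmission relation I/I0 = exp(−N σ x). The thermal absorption cross-sections (roughly 104 b for Hf and 0.19 b for Zr), densities and molar masses below are approximate literature values, and the slab thickness is illustrative; the resulting ratio is broadly consistent with the several-hundred-fold difference quoted in the text.

```python
import math

BARN_CM2 = 1e-24
N_A = 6.022e23

def transmission(density_g_cm3, molar_mass, sigma_abs_barn, thickness_cm):
    """Thermal-neutron transmission through a slab: I/I0 = exp(-N * sigma * x)."""
    n_density = density_g_cm3 * N_A / molar_mass        # atoms per cm^3
    return math.exp(-n_density * sigma_abs_barn * BARN_CM2 * thickness_cm)

# Approximate literature values; 0.5 cm slabs of each metal.
t_hf = transmission(13.31, 178.49, 104.0, 0.5)   # hafnium: strongly absorbing
t_zr = transmission(6.52, 91.22, 0.19, 0.5)      # zirconium: nearly transparent
print(f"Hf 0.5 cm transmission: {t_hf:.1%}")
print(f"Zr 0.5 cm transmission: {t_zr:.1%}")
```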
https://en.wikipedia.org/wiki/Neutron_capture
Neutron capture nucleosynthesis describes two nucleosynthesis pathways: the r-process and the s-process , for rapid and slow neutron captures , respectively. R-process describes neutron capture in a region of high neutron flux , such as during supernova nucleosynthesis after core-collapse, and yields neutron-rich nuclides . S-process describes neutron capture that is slow relative to the rate of beta decay , as for stellar nucleosynthesis in some stars, and yields nuclei with stable nuclear shells . Each process is responsible for roughly half of the observed abundances of elements heavier than iron . The importance of neutron capture to the observed abundance of the chemical elements was first described in 1957 in the B 2 FH paper . [ 1 ]
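A toy way to express the distinction drawn above: a capture chain behaves "r-process-like" when the mean time between neutron captures is much shorter than the beta-decay lifetime, and "s-process-like" in the opposite limit. The capture times, half-life and margin below are purely illustrative, not astrophysical values.

```python
import math

def classify(mean_capture_time_s: float, beta_half_life_s: float, margin: float = 10.0) -> str:
    """Crude classification: compare the mean time between neutron captures
    with the beta-decay mean lifetime of the nucleus in the chain."""
    beta_lifetime = beta_half_life_s / math.log(2)   # mean lifetime = T_1/2 / ln 2
    if mean_capture_time_s * margin < beta_lifetime:
        return "r-process-like (captures outrun beta decay)"
    if mean_capture_time_s > beta_lifetime * margin:
        return "s-process-like (beta decay outruns captures)"
    return "intermediate"

# Purely illustrative numbers:
print(classify(1e-3, 60.0))   # milliseconds between captures vs a 1-minute half-life
print(classify(3e9, 60.0))    # ~a century between captures vs a 1-minute half-life
```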
https://en.wikipedia.org/wiki/Neutron_capture_nucleosynthesis
Neutron capture therapy ( NCT ) is a type of radiotherapy for treating locally invasive malignant tumors such as primary brain tumors , recurrent cancers of the head and neck region , and cutaneous and extracutaneous melanomas. It is a two-step process: first , the patient is injected with a tumor-localizing drug containing the stable isotope boron-10 ( 10 B), which has a high propensity to capture low energy "thermal" neutrons . The neutron cross section of 10 B (3,837 barns ) is about 1,000 times greater than those of the other elements, such as nitrogen, hydrogen, or oxygen, that occur in tissue. In the second step, the patient is irradiated with epithermal neutrons , the sources of which in the past have been nuclear reactors and now are accelerators that produce higher energy epithermal neutrons. After losing energy as they penetrate tissue, the resultant low energy "thermal" neutrons are captured by the 10 B atoms. The resulting capture and decay reaction yields high-energy alpha particles that kill the cancer cells that have taken up enough 10 B. All clinical experience with NCT to date is with boron-10; hence this method is known as boron neutron capture therapy ( BNCT ). [ 1 ] Use of another non-radioactive isotope, such as gadolinium , has been limited to experimental animal studies and has not been done clinically. BNCT has been evaluated as an alternative to conventional radiation therapy for malignant brain tumors such as glioblastomas , which presently are incurable, and more recently, locally advanced recurrent cancers of the head and neck region and, much less often, superficial melanomas mainly involving the skin and genital region. [ 1 ] [ 2 ] [ 3 ] James Chadwick discovered the neutron in 1932. Shortly thereafter, H. J. Taylor reported that boron-10 nuclei had a high propensity to capture low energy "thermal" neutrons. This reaction causes nuclear decay of the boron-10 nuclei into helium-4 nuclei (alpha particles) and lithium-7 ions. [ 4 ] In 1936, G.L. Locher, a scientist at the Franklin Institute in Philadelphia, Pennsylvania, recognized the therapeutic potential of this discovery and suggested that this specific type of neutron capture reaction could be used to treat cancer. [ 5 ] [ 1 ] In 1951, William Sweet, a neurosurgeon at the Massachusetts General Hospital, first suggested the possibility of using BNCT to treat the most malignant of all brain tumors, glioblastoma multiforme (GBM), using borax as the boron delivery agent. [ 6 ] A clinical trial subsequently was initiated by Lee Farr using a specially constructed nuclear reactor at the Brookhaven National Laboratory [ 7 ] in Long Island, New York, U.S.A. Another clinical trial was initiated in 1954 by Sweet at the Massachusetts General Hospital using the Research Reactor at the Massachusetts Institute of Technology (MIT) in Boston. [ 6 ] A number of research groups worldwide have continued the early ground-breaking clinical studies of Sweet and Farr, and subsequently the pioneering clinical studies of Hiroshi Hatanaka (畠中洋) in the 1960s, to treat patients with brain tumors. [ 8 ] Since then, clinical trials have been done in a number of countries including Japan, the United States, Sweden, Finland, the Czech Republic, Taiwan, and Argentina. After the nuclear accident at Fukushima (2011), the clinical program in Japan transitioned from a reactor neutron source to accelerators that produce high energy neutrons that become thermalized as they penetrate tissue.
[ citation needed ] Neutron capture therapy is a binary system that consists of two separate components to achieve its therapeutic effect. Each component in itself is non-tumoricidal, but when combined they can be highly lethal to cancer cells. BNCT is based on the nuclear capture and decay reactions that occur when non-radioactive boron-10 , which makes up approximately 20% of natural elemental boron, is irradiated with neutrons of the appropriate energy to yield excited boron-11 ( 11 B*). This promptly breaks apart to produce high-energy alpha particles ( 4 He nuclei) and high-energy lithium-7 ( 7 Li) nuclei. The nuclear reaction is 10 B + n th → [ 11 B*] → 4 He (α) + 7 Li, releasing roughly 2.3 MeV of kinetic energy shared between the two fragments. [ citation needed ] Both the alpha particles and the lithium nuclei produce closely spaced ionizations in the immediate vicinity of the reaction, with a range of 5–9 μm . This is approximately the diameter of the target cell, and thus the lethality of the capture reaction is limited to boron-containing cells. BNCT, therefore, can be regarded as both a biologically and a physically targeted type of radiation therapy. The success of BNCT is dependent upon the selective delivery of sufficient amounts of 10 B to the tumor with only small amounts localized in the surrounding normal tissues. [ 8 ] Thus, normal tissues, if they have not taken up sufficient amounts of boron-10, can be spared from the neutron capture and decay reactions. Normal tissue tolerance, however, is determined by the nuclear capture reactions that occur with normal tissue hydrogen and nitrogen. [ 8 ] A wide variety of boron delivery agents have been synthesized. [ 9 ] The first, which has mainly been used in Japan, is a polyhedral borane anion, sodium borocaptate or BSH ( Na 2 B 12 H 11 SH ), and the second is a dihydroxyboryl derivative of phenylalanine , called boronophenylalanine or BPA. The latter has been used in many clinical trials. Following administration of either BPA or BSH by intravenous infusion, the tumor site is irradiated with neutrons, the sources of which, until recently, have been specially designed nuclear reactors and are now neutron accelerators. Until 1994, low-energy (< 0.5 eV ) thermal neutron beams were used in Japan [ 10 ] and the United States, [ 6 ] [ 7 ] but since they have a limited depth of penetration in tissues, higher energy (> 0.5 eV and < 10 keV) epithermal neutron beams, which have a greater depth of penetration, were used in clinical trials in the United States, [ 11 ] [ 12 ] Europe, [ 13 ] [ 14 ] Japan, [ 15 ] [ 16 ] Argentina, Taiwan, and China until recently, when accelerators replaced the reactors. In theory BNCT is a highly selective type of radiation therapy that can target tumor cells without causing radiation damage to the adjacent normal cells and tissues. Doses up to 60–70 grays (Gy) can be delivered to the tumor cells in one or two applications compared to 6–7 weeks for conventional fractionated external beam photon irradiation. However, the effectiveness of BNCT is dependent upon a relatively homogeneous cellular distribution of 10 B within the tumor, and more specifically within the constituent tumor cells, and this is still one of the main unsolved problems that have limited its success. [ 1 ] The radiation doses to tumor and normal tissues in BNCT are due to energy deposition from three types of directly ionizing radiation that differ in their linear energy transfer (LET), which is the rate of energy loss along the path of an ionizing particle: [ citation needed ] 1.
Low-LET gamma rays , resulting primarily from the capture of thermal neutrons by normal tissue hydrogen atoms [ 1 H(n,γ) 2 H]; 2. High-LET protons , produced by the scattering of fast neutrons and from the capture of thermal neutrons by nitrogen atoms [ 14 N(n,p) 14 C]; and 3. High-LET, heavier charged alpha particles (stripped down helium [ 4 He] nuclei) and lithium -7 ions, released as products of the thermal neutron capture and decay reactions with 10 B [ 10 B(n,α) 7 Li]. Since both the tumor and surrounding normal tissues are present in the radiation field, even with an ideal epithermal neutron beam, there will be an unavoidable, non-specific background dose, consisting of both high- and low-LET radiation. However, a higher concentration of 10 B in the tumor will result in it getting a higher total dose than that of adjacent normal tissues, which is the basis for the therapeutic gain in BNCT. [ 17 ] The total radiation dose in Gy delivered to any tissue can be expressed in photon-equivalent units as the sum of each of the high-LET dose components multiplied by weighting factors (Gy w ), which depend on the increased radiobiological effectiveness of each of these components. [ citation needed ] Biological weighting factors have been used in all of the more recent clinical trials in patients with high-grade gliomas, using boronophenylalanine (BPA) in combination with an epithermal neutron beam. The 10 B(n,α) 7 Li part of the radiation dose to the scalp has been based on the measured boron concentration in the blood at the time of BNCT, assuming a blood: scalp boron concentration ratio of 1.5:1 and a compound biological effectiveness (CBE) factor for BPA in skin of 2.5. A relative biological effectiveness (RBE) or CBE factor of 3.2 has been used in all tissues for the high-LET components of the beam, such as alpha particles. The RBE factor is used to compare the biologic effectiveness of different types of ionizing radiation. The high-LET components include protons resulting from the capture reaction with normal tissue nitrogen, and recoil protons resulting from the collision of fast neutrons with hydrogen. [ 17 ] It must be emphasized that the tissue distribution of the boron delivery agent in humans should be similar to that in the experimental animal model in order to use the experimentally derived values for estimation of the radiation doses for clinical radiations. [ 17 ] [ 18 ] For more detailed information relating to computational dosimetry and treatment planning , interested readers are referred to a comprehensive review on this subject. [ 19 ] The development of boron delivery agents for BNCT began in the early 1960s and is an ongoing and difficult task. A number of boron-10 containing delivery agents have been synthesized for potential use in BNCT. [ 9 ] [ 20 ] [ 21 ] The most important requirements for a successful boron delivery agent are: However, as of 2021 no single boron delivery agent fulfills all of these criteria. With the development of new chemical synthetic techniques and increased knowledge of the biological and biochemical requirements needed for an effective agent and their modes of delivery, a wide variety of new boron agents has emerged (see examples in Table 1). However, only one of these compounds has ever been tested in large animals, and only boronophenylalanine (BPA) and sodium borocaptate (BSH), have been used clinically. [ 1 ] a The delivery agents are not listed in any order that indicates their potential usefulness for BNCT. 
None of these agents have been evaluated in any animals larger than mice and rats, except for boronated porphyrin (BOPP) that also has been evaluated in dogs. However, due to the severe toxicity of BOPP in canines, no further studies were carried out. b See Barth, R.F., Mi, P., and Yang, W., Boron delivery agents for neutron capture therapy of cancer, Cancer Communications, 38:35 ( doi : 10.1186/s40880-018-0299-7), 2018 for an updated review. c The abbreviations used in this table are defined as follows: BNCT, boron neutron capture therapy; DNA, deoxyribonucleic acid; EGF, epidermal growth factor; EGFR, epidermal growth factor receptor; MoAbs, monoclonal antibodies; VEGF, vascular endothelial growth factor. The major challenge in the development of boron delivery agents has been the requirement for selective tumor targeting in order to achieve boron concentrations (20-50 μg/g tumor) sufficient to produce therapeutic doses of radiation at the site of the tumor with minimal radiation delivered to normal tissues. The selective destruction of infliltrative tumor (glioma) cells in the presence of normal brain cells represents an even greater challenge compared to malignancies at other sites in the body. Malignant gliomas are highly infiltrative of normal brain, histologically diverse, heterogeneous in their genomic profile and therefore it is very difficult to kill all of them. [ 6 ] There also has been some interest in the possible use of gadolinium -157 ( 157 Gd) as a capture agent for NCT for the following reasons: [ 22 ] First , and foremost, has been its very high neutron capture cross section of 254,000 barns . Second , gadolinium compounds, such as Gd-DTPA (gadopentetate dimeglumine Magnevist), have been used routinely as contrast agents for magnetic resonance imaging (MRI) of brain tumors and have shown high uptake by brain tumor cells in tissue culture ( in vitro ). [ 23 ] Third , gamma rays and internal conversion and Auger electrons are products of the 157 Gd(n,γ) 158 Gd capture reaction ( 157 Gd + n th (0.025eV) → [ 158 Gd] → 158 Gd + γ + 7.94 MeV). Though the gamma rays have longer pathlengths, orders of magnitude greater depths of penetration compared with alpha particles, the other radiation products (internal conversion and Auger electrons ) have pathlengths of about one cell diameter and can directly damage DNA . Therefore, it would be highly advantageous for the production of DNA damage if the 157 Gd were localized within the cell nucleus. However, the possibility of incorporating gadolinium into biologically active molecules is very limited and only a small number of potential delivery agents for Gd NCT have been evaluated. [ 24 ] [ 25 ] Relatively few studies with Gd have been carried out in experimental animals compared to the large number with boron containing compounds (Table 1), which have been synthesized and evaluated in experimental animals ( in vivo ). Although in vitro activity has been demonstrated using the Gd-containing MRI contrast agent Magnevist as the Gd delivery agent, [ 26 ] there are very few studies demonstrating the efficacy of Gd NCT in experimental animal tumor models, [ 25 ] [ 27 ] and, as evidenced by a lack of citations in the literature, Gd NCT has not, as of 2019, been used clinically in humans. Until 2014, neutron sources for NCT were limited to nuclear reactors . [ 28 ] Reactor-derived neutrons are classified according to their energies as thermal (E n < 0.5 eV), epithermal (0.5 eV < E n < 10 keV), or fast (E n >10 keV). 
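As a rough sense of scale for the capture cross-sections quoted above (about 3,837 barns for 10 B and 254,000 barns for 157 Gd), the sketch below compares the thermal-capture rate per atom in the same flux and the relative atom concentration that would give an equal capture-rate density; the flux value is illustrative only.

```python
BARN_CM2 = 1e-24

def capture_rate_per_atom(sigma_barn: float, flux_n_cm2_s: float) -> float:
    """Thermal-neutron captures per second per target atom: R = sigma * phi."""
    return sigma_barn * BARN_CM2 * flux_n_cm2_s

SIGMA_B10 = 3837.0      # barns, value quoted in the text for boron-10
SIGMA_GD157 = 254000.0  # barns, value quoted in the text for gadolinium-157
flux = 1e9              # n cm^-2 s^-1, illustrative in-tissue thermal flux

print(f"Captures per 10B atom per second:   {capture_rate_per_atom(SIGMA_B10, flux):.3e}")
print(f"Captures per 157Gd atom per second: {capture_rate_per_atom(SIGMA_GD157, flux):.3e}")
# For the same capture-rate density, roughly 66x fewer Gd atoms would be needed:
print(f"Atom-concentration ratio n(Gd)/n(B) for equal capture rate: {SIGMA_B10 / SIGMA_GD157:.3f}")
```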
Thermal neutrons are the most important for BNCT since they usually initiate the 10 B(n,α) 7 Li capture reaction. However, because they have a limited depth of penetration, epithermal neutrons, which lose energy and fall into the thermal range as they penetrate tissues, are now preferred for clinical therapy, other than for skin tumors such as melanoma. [ 29 ] A number of nuclear reactors with very good neutron beam quality have been developed and used clinically. These include: [ citation needed ] As of May 2021, only the reactors in Argentina, China, and Taiwan are still being used clinically. It is anticipated that, beginning some time in 2022, clinical studies in Finland will utilize an accelerator neutron source designed and fabricated in the United States by Neutron Therapeutics, Danvers, Massachusetts. It was not until the 1950s that the first clinical trials were initiated by Farr at the Brookhaven National Laboratory (BNL) in New York [ 7 ] and by Sweet and Brownell at the Massachusetts General Hospital (MGH) using the Massachusetts Institute of Technology (MIT) nuclear reactor (MITR) [ 32 ] and several different low molecular weight boron compounds as the boron delivery agent. [ 33 ] However, the results of these studies were disappointing, and no further clinical trials were carried out in the United States until the 1990s. Following a two-year Fulbright fellowship in Sweet's laboratory at the MGH, clinical studies were initiated by Hiroshi Hatanaka in Japan in 1967. He used a low-energy thermal neutron beam, which had low tissue penetrating properties, and sodium borocaptate (BSH) as the boron delivery agent, which had been evaluated as a boron delivery agent by Albert Soloway at the MGH. [ 34 ] In Hatanaka's procedure, [ 35 ] as much as possible of the tumor was surgically resected ("debulking"), and at some time thereafter, BSH was administered by a slow infusion, usually intra-arterially, but later intravenously. Twelve to 14 hours later, BNCT was carried out at one or another of several different nuclear reactors using low-energy thermal neutron beams. The poor tissue-penetrating properties of the thermal neutron beams necessitated reflecting the skin and raising a bone flap in order to directly irradiate the exposed brain, a procedure first used by Sweet and his collaborators. Approximately 200+ patients were treated by Hatanaka, and subsequently by his associate, Nakagawa. [ 10 ] Due to the heterogeneity of the patient population, in terms of the microscopic diagnosis of the tumor and its grade , size, and the ability of the patients to carry out normal daily activities (Karnofsky performance status ), it was not possible to come up with definitive conclusions about therapeutic efficacy. However, the survival data were no worse than those obtained by standard therapy at the time, and there were several patients who were long-term survivors, and most probably they were cured of their brain tumors. [ 10 ] BNCT of patients with brain tumors was resumed in the United States in the mid-1990s by Chanana, Diaz, and Coderre [ 11 ] and their co-workers at the Brookhaven National Laboratory using the Brookhaven Medical Research Reactor (BMRR) and at Harvard/Massachusetts Institute of Technology (MIT) using the MIT Research Reactor (MITR) . [ 12 ] For the first time, BPA was used as the boron delivery agent, and patients were irradiated with a collimated beam of higher energy epithermal neutrons, which had greater tissue-penetrating properties than thermal neutrons. 
A research group headed up by Zamenhof at the Beth Israel Deaconess Medical Center/Harvard Medical School and MIT was the first to use an epithermal neutron beam for clinical trials. Initially patients with cutaneous melanomas were treated and this was expanded to include patients with brain tumors, specifically melanoma metastatic to the brain and primary glioblastomas (GBMs). Included in the research team were Otto Harling at MIT and the Radiation Oncologist Paul Busse at the Beth Israel Deaconess Medical Center in Boston. A total of 22 patients were treated by the Harvard-MIT research group. Five patients with cutaneous melanomas were also treated using an epithermal neutron beam at the MIT research reactor (MITR-II) and subsequently patients with brain tumors were treated using a redesigned beam at the MIT reactor that possessed far superior characteristics to the original MITR-II beam and BPA as the capture agent. The clinical outcome of the cases treated at Harvard-MIT has been summarized by Busse. [ 12 ] Although the treatment was well tolerated, there were no significant differences in the mean survival times (MSTs)of patients that had received BNCT compared to those who received conventional external beam X-irradiation. [ 12 ] Shin-ichi Miyatake and Shinji Kawabata at Osaka Medical College in Japan [ 15 ] [ 16 ] have carried out extensive clinical studies employing BPA (500 mg/kg) either alone or in combination with BSH (100 mg/kg), infused intravenously (i.v.) over 2 h, followed by neutron irradiation at Kyoto University Research Reactor Institute (KURRI) on patients with newly diagnosed and recurrent glioblastomas. The Mean Survival Time (MST) of 10 patients with recurrent high grade gliomas in the first of their trials was 15.6 months, with one long-term survivor (>5 years). [ 16 ] Based on experimental animal data, [ 36 ] which showed that BNCT in combination with X-irradiation produced enhanced survival compared to BNCT alone, in another study, Miyatake and Kawabata combined BNCT, as described above, with an X-ray boost. [ 15 ] A total dose of 20 to 30 Gy was administered, divided into 2 Gy daily fractions. The MST of this group of patients (with newly diagnosed glioblastomas) was 23.5 months and no significant toxicity was observed, other than hair loss (alopecia). However, a significant subset of these patients, a high proportion of which had small cell variant glioblastomas, developed cerebrospinal fluid dissemination of their tumors. [ 37 ] In another Japanese trial with patients with newly diagnosed glioblastomas, carried out by Yamamoto et al., BPA and BSH were infused over 1 h, followed by BNCT at the Japan Research Reactor (JRR)-4 reactor. [ 38 ] Patients subsequently received an X-ray boost after completion of BNCT. The overall median survival time (MeST) was 27.1 months, and the 1 year and 2-year survival rates were 87.5 and 62.5%, respectively. Based on the reports of Miyatake, Kawabata, and Yamamoto, combining BNCT with an X-ray boost can produce a significant therapeutic gain. However, further studies are needed to optimize this combined therapy alone or in combination with other approaches including chemo- and immunotherapy, and to evaluate it using a larger patient population. [ 39 ] Miyatake and his co-workers also have treated a cohort of 44 patients with recurrent high grade meningiomas (HGM) that were refractory to all other therapeutic approaches. 
[ 40 ] The clinical regimen consisted of intravenous administration of boronophenylalanine two hours before neutron irradiation at the Kyoto University Research Reactor Institute in Kumatori, Japan. Effectiveness was determined using radiographic evidence of tumor shrinkage, overall survival (OS) after initial diagnosis, OS after BNCT, and radiographic patterns associated with treatment failure. The median OS after BNCT was 29.6 months and 98.4 months after diagnosis. Better responses were seen in patients with lower grade tumors. In 35 of 36 patients, there was tumor shrinkage, and the median progression-free survival (PFS) was 13.7 months. There was good local control of the patients' tumors, as evidenced by the fact that only 22.2% of them experienced local recurrence of their tumors. From these results, it was concluded that BNCT was effective in locally controlling tumor growth, shrinking tumors, and improving survival with acceptable safety in patients with therapeutically refractory HGMs. The technological and physical aspects of the Finnish BNCT program have been described in considerable detail by Savolainen et al. [ 44 ] A team of clinicians led by Heikki Joensuu and Leena Kankaanranta and nuclear engineers led by Iro Auterinen and Hanna Koivunoro at the Helsinki University Central Hospital and VTT Technical Research Center of Finland have treated approximately 200+ patients with recurrent malignant gliomas ( glioblastomas ) and head and neck cancer who had undergone standard therapy, recurred, and subsequently received BNCT at the time of their recurrence using BPA as the boron delivery agent. [ 13 ] [ 14 ] The median time to progression in patients with gliomas was 3 months, and the overall MeST was 7 months. It is difficult to compare these results with other reported results in patients with recurrent malignant gliomas, but they are a starting point for future studies using BNCT as salvage therapy in patients with recurrent tumors. Due to a variety of reasons, including financial, [ 45 ] no further studies have been carried out at this facility, which has been decommissioned. However, a new facility for BNCT treatment has been installed using an accelerator designed and fabricated by Neutron Therapeutics. [ 46 ] This accelerator was specifically designed to be used in a hospital, and the BNCT treatment and clinical studies will be carried out there after dosimetric studies have been completed in 2021. Both Finnish and foreign patients are expected to be treated at the facility. [ 47 ] [ 48 ] [ 49 ] To conclude this section on treating brain tumors with BNCT using reactor neutron sources, a clinical trial that was carried out by Stenstam, Sköld, Capala and their co-workers in Studsvik, Sweden, using an epithermal neutron beam produced by the Studsvik nuclear reactor, which had greater tissue penetration properties than the thermal beams originally used in the United States and Japan, will be briefly summarized. This study differed significantly from all previous clinical trials in that the total amount of BPA administered was increased (900 mg/kg), and it was infused i.v. over 6 hours. This was based on experimental animal studies in glioma bearing rats demonstrating enhanced uptake of BPA by infiltrating tumor cells following a 6-hour infusion. [ 34 ] [ 41 ] [ 42 ] [ 50 ] The longer infusion time of the BPA was well tolerated by the 30 patients who were enrolled in this study. 
All were treated with 2 fields, and the average whole brain dose was 3.2–6.1 Gy (weighted), and the minimum dose to the tumor ranged from 15.4 to 54.3 Gy (w). There has been some disagreement among the Swedish investigators regarding the evaluation of the results. Based on incomplete survival data, the MeST was reported as 14.2 months and the time to tumor progression was 5.8 months. [ 41 ] However, more careful examination [ 42 ] of the complete survival data revealed that the MeST was 17.7 months compared to 15.5 months that has been reported for patients who received standard therapy of surgery, followed by radiotherapy (RT) and the drug temozolomide (TMZ). [ 51 ] Furthermore, the frequency of adverse events was lower after BNCT (14%) than after radiation therapy (RT) alone (21%) and both of these were lower than those seen following RT in combination with TMZ. If this improved survival data, obtained using the higher dose of BPA and a 6-hour infusion time, can be confirmed by others, preferably in a randomized clinical trial , it could represent a significant step forward in BNCT of brain tumors, especially if combined with a photon boost. The single most important clinical advance over the past 15 years [ 52 ] has been the application of BNCT to treat patients with recurrent tumors of the head and neck region who had failed all other therapy. These studies were first initiated by Kato et al. in Japan. [ 52 ] [ 53 ] and subsequently followed by several other Japanese groups and by Kankaanranta, Joensuu, Auterinen, Koivunoro and their co-workers in Finland. [ 14 ] All of these studies employed BPA as the boron delivery agent, usually alone but occasionally in combination with BSH. A very heterogeneous group of patients with a variety of histopathologic types of tumors have been treated, the largest number of which had recurrent squamous cell carcinomas. Kato et al. have reported on a series of 26 patients with far-advanced cancer for whom there were no further treatment options. [ 52 ] Either BPA + BSH or BPA alone were administered by a 1 or 2 h i.v. infusion, and this was followed by BNCT using an epithermal beam. In this series, there were complete regressions in 12 cases, 10 partial regressions, and progression in 3 cases. The MST was 13.6 months, and the 6-year survival was 24%. Significant treatment related complications ("adverse" events) included transient mucositis, alopecia and, rarely, brain necrosis and osteomyelitis. Kankaanranta et al. have reported their results in a prospective Phase I/II study of 30 patients with inoperable, locally recurrent squamous cell carcinomas of the head and neck region. [ 14 ] Patients received either two or, in a few instances, one BNCT treatment using BPA (400 mg/kg), administered i.v. over 2 hours, followed by neutron irradiation. Of 29 evaluated patients, there were 13 complete and 9 partial remissions, with an overall response rate of 76%. The most common adverse event was oral mucositis, oral pain, and fatigue. Based on the clinical results, it was concluded that BNCT was effective for the treatment of inoperable, previously irradiated patients with head and neck cancer. Some responses were durable but progression was common, usually at the site of the previously recurrent tumor. As previously indicated in the section on neutron sources, all clinical studies have ended in Finland, for variety of reasons including economic difficulties of the two companies directly involved, VTT and Boneca. 
However, clinical studies using an accelerator neutron source designed and fabricated by Neutron Therapeutics and installed at the Helsinki University Hospital should be fully functional by 2022. [ 46 ] Finally, a group in Taiwan , led by Ling-Wei Wang and his co-workers at the Taipei Veterans General Hospital, have treated 17 patients with locally recurrent head and neck cancers at the Tsing Hua Open-pool Reactor (THOR) of the National Tsing Hua University . [ 54 ] Two-year overall survival was 47% and two-year loco-regional control was 28%. Further studies are in progress to further optimize their treatment regimen. Other extracranial tumors that have been treated with BNCT include malignant melanomas . The original studies were carried out in Japan by the late Yutaka Mishima and his clinical team in the Department of Dermatology at Kobe University [ 55 ] using locally injected BPA and a thermal neutron beam. It is important to point out that it was Mishima who first used BPA as a boron delivery agent, and this approach subsequently was extended to other types of tumors based on the experimental animal studies of Coderre et al. at the Brookhaven National Laboratory. [ 56 ] Local control was achieved in almost all patients, and some were cured of their melanomas. Patients with melanoma of the head and neck region, vulva, and extramammary Paget's disease of the genital region have been treated by Hiratsuka et al. with promising clinical results. [ 57 ] The first clinical trial of BNCT in Argentina for the treatment of melanomas was performed in October 2003 [ 58 ] and since then several patients with cutaneous melanomas have been treated as part of a Phase II clinical trial at the RA-6 nuclear reactor in Bariloche. The neutron beam has a mixed thermal-hyperthermal neutron spectrum that can be used to treat superficial tumors. [ 58 ] The In-Hospital Neutron Irradiator (IHNI) in Beijing has been used to treat a small number of patients with cutaneous melanomas with a complete response of the primary lesion and no evidence of late radiation injury during a 24+-month follow-up period. [ 59 ] Two patients with colon cancer, which had spread to the liver, have been treated by Zonta and his co-workers at the University of Pavia in Italy. [ 60 ] The first was treated in 2001 and the second in mid-2003. The patients received an i.v. infusion of BPA, followed by removal of the liver (hepatectomy), which was irradiated outside of the body (extracorporeal BNCT) and then re-transplanted into the patient. The first patient did remarkably well and survived for over 4 years after treatment, but the second died within a month of cardiac complications. [ 61 ] Clearly, this is a very challenging approach for the treatment of hepatic metastases, and it is unlikely that it will ever be widely used. Nevertheless, the good clinical results in the first patient established proof of principle . Finally, Yanagie and his colleagues at Meiji Pharmaceutical University in Japan have treated several patients with recurrent rectal cancer using BNCT. Although no long-term results have been reported, there was evidence of short-term clinical responses. [ 62 ] Accelerators now are the primary source of epithermal neutrons for clinical BNCT. The first papers relating to their possible use were published in the 1980s, and, as summarized by Blue and Yanch, [ 63 ] this topic became an active area of research in the early 2000s. 
However, it was the Fukushima nuclear disaster in Japan in 2011 that gave impetus to their development for clinical use. Accelerators also can be used to produce epithermal neutrons. Today several accelerator-based neutron sources (ABNS) are commercially available or under development. Most existing or planned systems use either the lithium-7 reaction, 7 Li(p,n) 7 Be , or the beryllium-9 reaction, 9 Be(p,n) 9 B, to generate neutrons, though other nuclear reactions also have been considered. [ 64 ] The lithium-7 reaction requires a proton accelerator with energies between 1.9 and 3.0 MeV, while the beryllium-9 reaction typically uses accelerators with energies between 5 and 30 MeV. Aside from the lower proton energy that the lithium-7 reaction requires, its main benefit is the lower energy of the neutrons produced. This in turn allows the use of smaller moderators, "cleaner" neutron beams, and reduced neutron activation. Benefits of the beryllium-9 reaction include simplified target design and disposal, long target lifetime, and lower required proton beam current. Since the proton beams for BNCT are quite powerful (~20-100 kW), the neutron generating target must incorporate cooling systems capable of removing the heat safely and reliably to protect the target from damage. In the case of lithium-7, this requirement is especially important due to the low melting point and chemical volatility of the target material. Liquid jets, micro-channels and rotating targets have been employed to solve this problem. Several researchers have proposed the use of liquid lithium-7 targets in which the target material doubles as the coolant. [ 65 ] [ 66 ] In the case of beryllium-9, "thin" targets, in which the protons come to rest and deposit much of their energy in the cooling fluid, can be employed. Target degradation due to beam exposure ("blistering") is another problem to be solved, either by using layers of materials resistant to blistering or by spreading the protons over a large target area. Since the nuclear reactions yield neutrons with energies ranging from < 100 keV to tens of MeV, a Beam Shaping Assembly (BSA) [ 67 ] must be used to moderate, filter, reflect and collimate the neutron beam to achieve the desired epithermal energy range, neutron beam size and direction. BSAs are typically composed of a range of materials with desirable nuclear properties for each function. A well-designed BSA should maximize neutron yield per proton while minimizing fast neutron, thermal neutron and gamma contamination. It should also produce a sharply delimited and generally forward-directed beam, enabling flexible positioning of the patient relative to the aperture. [ 68 ] One key challenge for an ABNS is the duration of treatment time: depending on the neutron beam intensity, treatments can take up to an hour or more. Therefore, it is desirable to reduce the treatment time both for patient comfort during immobilization and to increase the number of patients that could be treated in a 24-hour period. Increasing the neutron beam intensity for the same proton current by adjusting the BSA is often achieved at the cost of reduced beam quality (higher levels of unwanted fast neutrons or gamma rays in the beam or poor beam collimation). Therefore, increasing the proton current delivered by ABNS BNCT systems remains a key goal of technology development programs. The table below summarizes the existing or planned ABNS installations for clinical use (updated November 2024).
Product | Reaction | Accelerator type
NeuCure | 9 Be(p,n) 9 B | Cyclotron
(Fukushima, Japan) | 7 Li(p,n) 7 Be | RFQ
NeuPex | 7 Li(p,n) 7 Be | Tandem electrostatic
(unnamed) | 9 Be(p,n) 9 B | LINAC
iBNCT [ 73 ] | 9 Be(p,n) 9 B | LINAC
nuBeam | 7 Li(p,n) 7 Be | Single-ended electrostatic
D-BNCT (Kanagawa, Japan) | 7 Li(p,n) 7 Be | LINAC
(unnamed) | 7 Li(p,n) 7 Be | LINAC
Alphabeam | 7 Li(p,n) 7 Be | Tandem electrostatic
Treatment of Recurrent Malignant Gliomas
The single greatest advance in moving BNCT forward clinically has been the introduction of cyclotron-based neutron sources (c-BNS) in Japan. Shin-ichi Miyatake and Shinji Kawabata have led the way with the treatment of patients with recurrent glioblastomas (GBMs). [ 75 ] [ 76 ] In their Phase II clinical trial, they used the Sumitomo Heavy Industries accelerator at the Osaka Medical College, Kansai BNCT Medical Center to treat a total of 24 patients. [ 75 ] These patients ranged in age from 20 to 75 years, and all previously had received standard treatment consisting of surgery followed by chemotherapy with temozolomide (TMZ) and conventional radiation therapy. They were candidates for treatment with BNCT because their tumors had recurred and were progressing in size. They received an intravenous infusion of a proprietary formulation of 10 B-enriched boronophenylalanine ("Borofalan," StellaPharma Corporation, Osaka, Japan) prior to neutron irradiation. The primary endpoint of this study was the 1-year survival rate after BNCT, which was 79.2%, and the median overall survival was 18.9 months. Based on these results, it was concluded that c-BNS BNCT was safe and resulted in increased survival of patients with recurrent gliomas. Although there was an increased risk of brain edema due to re-irradiation, this was easily controlled. [ 75 ] As a result of this trial, the Sumitomo accelerator was approved by the Japanese regulatory authority having jurisdiction over medical devices, and further studies are being carried out with patients who have recurrent, high-grade (malignant) meningiomas. However, further studies for the treatment of patients with GBMs have been put on hold pending additional analysis of the results.
Treatment of Recurrent or Locally Advanced Cancers of the Head and Neck
Katsumi Hirose and his co-workers at the Southern Tohoku BNCT Research Center in Koriyama, Japan, recently have reported on their results after treating 21 patients with recurrent tumors of the head and neck region. [ 77 ] All of these patients had received surgery, chemotherapy, and conventional radiation therapy. Eight of them had recurrent squamous cell carcinomas (R-SCC), and 13 had either recurrent (R) or locally advanced (LA) non-squamous cell carcinomas (nSCC). The overall response rate was 71%, and the complete response and partial response rates were 50% and 25%, respectively, for patients with R-SCC and 80% and 62%, respectively, for those with R or LA SCC. The overall 2-year survival rates for patients with R-SCC or R/LA nSCC were 58% and 100%, respectively. The treatment was well tolerated, and adverse events were those usually associated with conventional radiation treatment of these tumors. These patients had received a proprietary formulation of 10 B-enriched boronophenylalanine (Borofalan), which was administered intravenously. Although the manufacturer of the accelerator was not identified, it presumably was the one manufactured by Sumitomo Heavy Industries, Ltd., which was indicated in the Acknowledgements of their report.
[ 77 ] Based on this Phase II clinical trial, the authors suggested that BNCT using Borofalan and c-BENS was a promising treatment for recurrent head and neck cancers, although further studies would be required to firmly establish this. Clinical BNCT first was used to treat highly malignant brain tumors and subsequently for melanomas of the skin that were difficult to treat by surgery. Later, it was used as a type of "salvage" therapy for patients with recurrent tumors of the head and neck region. The clinical results were sufficiently promising to lead to the development of accelerator neutron sources, which will be used almost exclusively in the future. [ 46 ] Challenges for the future clinical success of BNCT that need to be met include the following: [ 78 ] [ 79 ] [ 1 ] [ 2 ] [ 80 ] [ 81 ]
https://en.wikipedia.org/wiki/Neutron_capture_therapy_of_cancer
In nuclear physics, the concept of a neutron cross section is used to express the likelihood of interaction between an incident neutron and a target nucleus. The neutron cross section σ can be defined as the area in cm 2 for which the number of neutron–nucleus reactions taking place is equal to the product of the number of incident neutrons that would pass through the area and the number of target nuclei. [ 1 ] [ page needed ] In conjunction with the neutron flux, it enables the calculation of the reaction rate, for example to derive the thermal power of a nuclear power plant. The standard unit for measuring the cross section is the barn, which is equal to 10 −28 m 2 or 10 −24 cm 2 . The larger the neutron cross section, the more likely a neutron will react with the nucleus. An isotope (or nuclide) can be classified according to its neutron cross section and how it reacts to an incident neutron. Nuclides that tend to absorb a neutron and either decay or keep the neutron in their nucleus are neutron absorbers and have a capture cross section for that reaction. Isotopes that undergo fission are fissionable fuels and have a corresponding fission cross section. The remaining isotopes simply scatter the neutron and have a scatter cross section. Some isotopes, like uranium-238, have nonzero cross sections for all three. Isotopes which have a large scatter cross section and a low mass are good neutron moderators. Nuclides which have a large absorption cross section are neutron poisons if they are neither fissile nor undergo decay. A poison that is purposely inserted into a nuclear reactor to control its reactivity in the long term and to improve its shutdown margin is called a burnable poison. The neutron cross section, and therefore the probability of a neutron–nucleus interaction, depends primarily on the target nuclide, the type of reaction, and the energy of the incident neutron, and, to a lesser extent, on the relative angle between the incident neutron and the target nuclide and on the target temperature. The neutron cross section is defined for a given type of target particle. For example, the capture cross section of deuterium 2 H is much smaller than that of common hydrogen 1 H . [ 2 ] This is the reason why some reactors use heavy water (in which most of the hydrogen is deuterium) instead of ordinary light water as moderator: fewer neutrons are lost by capture inside the medium, hence enabling the use of natural uranium instead of enriched uranium. This is the principle of a CANDU reactor. The likelihood of interaction between an incident neutron and a target nuclide, independent of the type of reaction, is expressed with the help of the total cross section σ T . However, it may be useful to know whether the incoming particle bounces off the target (and therefore continues travelling after the interaction) or disappears after the reaction. For that reason, the scattering and absorption cross sections σ S and σ A are defined, and the total cross section is simply the sum of the two partial cross sections: [ 3 ] σ T = σ S + σ A . If the neutron is absorbed when approaching the nuclide, the atomic nucleus moves up on the table of isotopes by one position. For instance, 235 U becomes 236 U*, with the * indicating that the nucleus is highly energized. This energy has to be released, and the release can take place through any of several mechanisms. The scattering cross-section can be further subdivided into coherent scattering and incoherent scattering; incoherent scattering is caused by the spin dependence of the scattering cross-section and, for a natural sample, by the presence of different isotopes of the same element in the sample.
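As a small illustration of the relation σ T = σ S + σ A and of the barn unit discussed above, the following Python sketch sums two partial cross sections and converts the result to cm². It is not part of the source article, and the numerical values are placeholders rather than evaluated nuclear data.

```python
# Minimal sketch: combining partial neutron cross sections and converting barns.
BARN_TO_CM2 = 1.0e-24  # 1 barn = 1e-24 cm^2

def total_cross_section(sigma_scatter_b, sigma_absorb_b):
    """Total cross section as the sum of the partial cross sections (barns)."""
    return sigma_scatter_b + sigma_absorb_b

# Illustrative thermal-neutron values (order of magnitude only, not reference data).
sigma_s = 20.0   # barns, scattering
sigma_a = 0.3    # barns, absorption
sigma_t = total_cross_section(sigma_s, sigma_a)
print(f"sigma_T = {sigma_t} b = {sigma_t * BARN_TO_CM2:.2e} cm^2")
```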
Because neutrons interact with the nuclear potential, the scattering cross-section varies for different isotopes of the element in question. A very prominent example is hydrogen and its isotope deuterium. The total cross-section for hydrogen is over 10 times that of deuterium, mostly due to the large incoherent scattering length of hydrogen. Some metals are rather transparent to neutrons, aluminum and zirconium being the two best examples of this. For a given target and reaction, the cross section is strongly dependent on the neutron speed. In the extreme case, the cross section can be, at low energies, either zero (the energy at which the cross section becomes significant is called the threshold energy) or much larger than at high energies. Therefore, a cross section should be defined either at a given energy or should be averaged over an energy range (or group). As an example, the fission cross section of uranium-235 is low at high neutron energies but becomes much higher at low energies. Such physical constraints explain why most operational nuclear reactors use a neutron moderator to reduce the energy of the neutron and thus increase the probability of fission, which is essential to produce energy and sustain the chain reaction. A simple estimation of the energy dependence of any kind of cross section is provided by the Ramsauer model, [ 4 ] which is based on the idea that the effective size of a neutron is proportional to the breadth of the probability density function of where the neutron is likely to be, which itself is proportional to the neutron's thermal de Broglie wavelength. Taking λ as the effective radius of the neutron, we can estimate the area of the circle σ in which neutrons hit the nuclei of effective radius R as σ = π ( R + λ ) 2 . While the assumptions of this model are naive, it explains at least qualitatively the typical measured energy dependence of the neutron absorption cross section. For neutrons of wavelength much larger than the typical radius of atomic nuclei (1–10 fm, E = 10–1000 keV), R can be neglected. For these low-energy neutrons (such as thermal neutrons) the cross section σ ( E ) is inversely proportional to the neutron velocity. This explains the advantage of using a neutron moderator in fission nuclear reactors. On the other hand, for very high energy neutrons (over 1 MeV), λ can be neglected, and the neutron cross section is approximately constant, determined just by the cross section of atomic nuclei. However, this simple model does not take into account so-called neutron resonances, which strongly modify the neutron cross section in the energy range of 1 eV–10 keV, nor the threshold energy of some nuclear reactions. Cross sections are usually measured at 20 °C. To account for the dependence on the temperature of the medium (viz. the target), the following formula is used: [ 3 ] σ = σ 0 √( T 0 / T ), where σ is the cross section at temperature T , and σ 0 the cross section at temperature T 0 ( T and T 0 in kelvins). The energy is defined at the most likely energy and velocity of the neutron. The neutron population consists of a Maxwellian distribution, and hence the mean energy and velocity will be higher. Consequently, a Maxwellian correction term of √π/2 also has to be included when calculating the cross-section. The Doppler broadening of neutron resonances is a very important phenomenon and improves nuclear reactor stability.
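The two energy and temperature relations above can be made concrete with a short Python sketch. It is illustrative only: the nuclear radius of 5 fm is a representative value, the Ramsauer estimate is qualitative (it greatly overestimates thermal cross sections, which are really governed by resonances), and the 1/v correction assumes a pure 1/v absorber.

```python
# Sketch of the Ramsauer estimate sigma ~ pi*(R + lambda)^2 and the 1/v
# temperature correction sigma = sigma0*sqrt(T0/T). Illustrative constants only.
import math

H = 6.62607015e-34        # Planck constant, J*s
M_N = 1.67492749804e-27   # neutron mass, kg
EV = 1.602176634e-19      # J per eV

def de_broglie_wavelength(energy_ev):
    """Neutron de Broglie wavelength in metres for a given kinetic energy."""
    velocity = math.sqrt(2.0 * energy_ev * EV / M_N)
    return H / (M_N * velocity)

def ramsauer_sigma(energy_ev, nuclear_radius_m=5.0e-15):
    """Qualitative Ramsauer-type estimate, returned in barns (1 b = 1e-28 m^2)."""
    lam = de_broglie_wavelength(energy_ev)
    return math.pi * (nuclear_radius_m + lam) ** 2 / 1.0e-28

def one_over_v_correction(sigma0_barn, T_kelvin, T0_kelvin=293.15):
    """1/v scaling of a thermal cross section with medium temperature."""
    return sigma0_barn * math.sqrt(T0_kelvin / T_kelvin)

print(de_broglie_wavelength(0.025))          # ~1.8e-10 m, far larger than any nucleus
print(ramsauer_sigma(1.0e6), ramsauer_sigma(1.0e7))   # ~36 b and ~6 b: shrinks with energy
print(one_over_v_correction(100.0, 600.0))   # ~70 b at 600 K for a 100 b room-temperature value
```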
The prompt temperature coefficient of most thermal reactors is negative, owing to the nuclear Doppler effect. Nuclei are located in atoms which are themselves in continual motion owing to their thermal energy (temperature). As a result of these thermal motions, neutrons impinging on a target appear to the nuclei in the target to have a continuous spread in energy. This, in turn, affects the observed shape of a resonance: the resonance becomes shorter and wider than when the nuclei are at rest. Although the shape of resonances changes with temperature, the total area under the resonance remains essentially constant. But this does not imply constant neutron absorption. Despite the constant area under the resonance, the resonance integral, which determines the absorption, increases with increasing target temperature. This decreases the multiplication coefficient k (negative reactivity is inserted). Imagine a spherical target and a beam of particles "flying" at speed v towards it. We want to know how many particles impact it during a time interval d t . To do so, the particles have to be inside a cylinder of volume V whose base is the geometrical cross section of the target perpendicular to the beam (surface σ ) and whose height is the length travelled by the particles during d t (length v d t ): V = σ v d t . Noting n the number of particles per unit volume, there are n V particles in the volume V , which will, per definition of V , undergo a reaction. Noting r the reaction rate onto one target, this gives r d t = n V = n σ v d t , hence r = n σ v . It follows directly from the definition of the neutron flux [ 3 ] Φ = n v that r = σ Φ . Assuming that there is not one but N targets per unit volume, the reaction rate R per unit volume is R = N r = N σ Φ . Knowing that the typical nuclear radius r is of the order of 10 −12 cm, the expected nuclear cross section is of the order of π r 2 or roughly 10 −24 cm 2 (thus justifying the definition of the barn). However, if measured experimentally ( σ = R / ( Φ N ) ), the experimental cross sections vary enormously. As an example, for slow neutrons absorbed by the (n, γ) reaction the cross section in some cases ( xenon-135 ) is as much as 2,650,000 barns, while the cross sections for transmutations by gamma-ray absorption are in the neighborhood of 0.001 barn ( § Typical cross sections has more examples). The so-called nuclear cross section is consequently a purely conceptual quantity representing how big the nucleus should be to be consistent with this simple mechanical model. Cross sections depend strongly on the incoming particle speed. In the case of a beam with multiple particle speeds, the reaction rate R is integrated over the whole range of energy: R = N ∫ σ ( E ) Φ ( E ) d E , where σ ( E ) is the continuous cross section, Φ ( E ) the differential flux and N the target atom density. In order to obtain a formulation equivalent to the monoenergetic case, an average cross section is defined: σ̄ = ∫ σ ( E ) Φ ( E ) d E / Φ , where Φ = ∫ Φ ( E ) d E is the integral flux. Using the definition of the integral flux Φ and the average cross section σ̄ , the same formulation as before is found: R = N σ̄ Φ . Up to now, the cross section referred to in this article corresponds to the microscopic cross section σ . However, it is possible to define the macroscopic cross section [ 3 ] Σ , which corresponds to the total "equivalent area" of all target particles per unit volume: Σ = N σ , where N is the atomic density of the target.
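The reaction-rate relations just derived, R = N σ Φ for a monoenergetic flux and the flux-weighted average cross section for a spectrum, can be written as a short Python sketch. The numbers are placeholders chosen only to show the orders of magnitude involved, not reactor or evaluated nuclear data.

```python
# Sketch of R = N * sigma * Phi and of a flux-weighted average cross section.

def reaction_rate(number_density_cm3, sigma_barn, flux_cm2_s):
    """Reactions per cm^3 per second: R = N * sigma * Phi."""
    return number_density_cm3 * sigma_barn * 1.0e-24 * flux_cm2_s

def average_cross_section(energies_ev, sigma_barn, flux):
    """sigma_bar = int(sigma*Phi dE) / int(Phi dE), via the trapezoid rule."""
    num = den = 0.0
    for i in range(len(energies_ev) - 1):
        de = energies_ev[i + 1] - energies_ev[i]
        num += 0.5 * (sigma_barn[i] * flux[i] + sigma_barn[i + 1] * flux[i + 1]) * de
        den += 0.5 * (flux[i] + flux[i + 1]) * de
    return num / den

N = 1.0e22    # target nuclei per cm^3 (placeholder)
phi = 1.0e13  # n cm^-2 s^-1 (typical order of magnitude for a power reactor core)
print(reaction_rate(N, 500.0, phi))   # -> 5e13 reactions per cm^3 per second

E = [0.01 * k for k in range(1, 101)]      # toy energy grid, eV
sig = [10.0 / (e ** 0.5) for e in E]       # toy 1/v-like cross section, barns
flx = [1.0 for _ in E]                     # toy flat flux spectrum
print(average_cross_section(E, sig, flx))  # ~18 barns for this toy spectrum
```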
Therefore, since the cross section can be expressed in cm 2 and the density in cm −3 , the macroscopic cross section is usually expressed in cm −1 . Using the equation derived above, the reaction rate R can be derived using only the neutron flux Φ and the macroscopic cross section Σ : R = Σ Φ . The mean free path λ of a random particle is the average length between two interactions. The total length L that unperturbed particles travel during a time interval d t in a volume d V is simply the product of the length l covered by each particle during this time with the number of particles N in this volume: L = l N . Noting v the speed of the particles and n the number of particles per unit volume, we have l = v d t and N = n d V . It follows that L = n v d t d V . Using the definition of the neutron flux [ 3 ] Φ = n v , it follows that L = Φ d t d V . This average length L is however valid only for unperturbed particles. To account for the interactions, L is divided by the total number of reactions occurring in d V during d t , namely R d t d V , to obtain the average length between each collision λ : λ = L / ( R d t d V ) = Φ / R . From § Microscopic versus macroscopic cross section, R = Σ Φ . It follows that λ = 1 / Σ , where λ is the mean free path and Σ is the macroscopic cross section. Because 8 Li and 12 Be form natural stopping points on the table of isotopes for hydrogen fusion, it is believed that all of the higher elements are formed in very hot stars where higher orders of fusion predominate. A star like the Sun produces energy by the fusion of simple 1 H into 4 He through a series of reactions. It is believed that when the inner core exhausts its 1 H fuel, the Sun will contract, slightly increasing its core temperature until 4 He can fuse and become the main fuel supply. Pure 4 He fusion leads to 8 Be , which decays back to 2 4 He ; therefore the 4 He must fuse with isotopes either more or less massive than itself to result in an energy-producing reaction. When 4 He fuses with 2 H or 3 H , it forms the stable isotopes 6 Li and 7 Li respectively. The higher-order isotopes between 8 Li and 12 C are synthesized by similar reactions between hydrogen, helium, and lithium isotopes. Cross sections of importance in a nuclear reactor were taken from the JEFF-3.1.1 library using JANIS software. [ 5 ]
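The macroscopic cross section and mean free path relations above, Σ = N σ and λ = 1/Σ, lend themselves to a quick numerical sketch. The material properties below (density 10 g/cm³, molar mass 238 g/mol, 5-barn microscopic cross section) are placeholder values chosen for illustration, not data for a specific nuclide.

```python
# Sketch of Sigma = N * sigma (cm^-1) and lambda = 1 / Sigma (cm).
AVOGADRO = 6.02214076e23

def number_density(density_g_cm3, molar_mass_g_mol):
    """Atoms per cm^3 from bulk density and molar mass."""
    return density_g_cm3 * AVOGADRO / molar_mass_g_mol

def macroscopic_cross_section(n_per_cm3, sigma_barn):
    """Macroscopic cross section in cm^-1."""
    return n_per_cm3 * sigma_barn * 1.0e-24

N = number_density(10.0, 238.0)                # placeholder material
Sigma = macroscopic_cross_section(N, 5.0)      # placeholder 5-barn cross section
print(f"N = {N:.2e} cm^-3, Sigma = {Sigma:.3f} cm^-1, mean free path = {1.0/Sigma:.1f} cm")
```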
https://en.wikipedia.org/wiki/Neutron_cross_section
The neutron electric dipole moment ( nEDM ), denoted d n , is a measure for the distribution of positive and negative charge inside the neutron . A nonzero electric dipole moment can only exist if the centers of the negative and positive charge distribution inside the particle do not coincide. So far, no neutron EDM has been found. The current best measured limit for d n is (0.0 ± 1.1) × 10 −26 e ⋅cm . [ 1 ] A permanent electric dipole moment of a fundamental particle violates both parity (P) and time reversal symmetry (T). These violations can be understood by examining the neutron's magnetic dipole moment and hypothetical electric dipole moment. Under time reversal, the magnetic dipole moment changes its direction, whereas the electric dipole moment stays unchanged. Under parity, the electric dipole moment changes its direction but not the magnetic dipole moment. As the resulting system under P and T is not symmetric with respect to the initial system, these symmetries are violated in the case of the existence of an EDM. Having also CPT symmetry , the combined symmetry CP is violated as well. As it is depicted above, in order to generate a nonzero nEDM one needs processes that violate CP symmetry . CP violation has been observed in weak interactions and is included in the Standard Model of particle physics via the CP-violating phase in the CKM matrix. However, the amount of CP violation is very small and therefore also the contribution to the nEDM: | d n | ~ 10 −31 e ⋅cm . [ 2 ] From the asymmetry between matter and antimatter in the universe, one suspects that there must be a sizeable amount of CP-violation . Measuring a neutron electric dipole moment at a much higher level than predicted by the Standard Model would therefore directly confirm this suspicion and improve our understanding of CP-violating processes. As the neutron is built up of quarks , it is also susceptible to CP violation stemming from strong interactions . Quantum chromodynamics – the theoretical description of the strong force – naturally includes a term that breaks CP-symmetry. The strength of this term is characterized by the angle θ . The current limit on the nEDM constrains this angle to be less than 10 −10 radians . This fine-tuning of the angle θ , which is naturally expected to be of order 1, is the strong CP problem . Supersymmetric extensions to the Standard Model, such as the Minimal Supersymmetric Standard Model , generally lead to a large CP-violation. Typical predictions for the neutron EDM arising from the theory range between 10 −25 e ⋅cm and 10 −28 e ⋅cm . [ 3 ] [ 4 ] As in the case of the strong interaction , the limit on the neutron EDM is already constraining the CP violating phases. The fine-tuning is, however, not as severe yet. In order to extract the neutron EDM, one measures the Larmor precession of the neutron spin in the presence of parallel and antiparallel magnetic and electric fields. The precession frequency for each of the two cases is given by the addition or subtraction of the frequencies stemming from the precession of the magnetic moment around the magnetic field and the precession of the electric dipole moment around the electric field . From the difference of those two frequencies one readily obtains a measure of the neutron EDM: The biggest challenge of the experiment (and at the same time the source of the biggest systematic false effects) is to ensure that the magnetic field does not change during these two measurements. 
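The frequency comparison described in the last paragraph can be written out explicitly. The relations below are a standard textbook reconstruction of the Ramsey-type measurement, not quoted from the article: B is the magnetic field, E the electric field, μ n the neutron magnetic moment and d n the electric dipole moment.

```latex
% Precession frequencies with the electric field parallel / antiparallel to B:
h\nu_{\parallel}     = 2\,|\mu_n|\,B + 2\,d_n E
h\nu_{\mathrm{anti}} = 2\,|\mu_n|\,B - 2\,d_n E
% hence the dipole moment follows from the measured frequency difference:
d_n = \frac{h\,\left(\nu_{\parallel} - \nu_{\mathrm{anti}}\right)}{4E}
```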
The first experiments searching for the electric dipole moment of the neutron used beams of thermal (and later cold ) neutrons to conduct the measurement. It started with the experiment by James Smith , Purcell , and Ramsey in 1951 (and published in 1957) at ORNL's Graphite Reactor (as the three researchers were from Harvard University , this experiment is called ORNL/Harvard or something similar, see figure in this section), obtaining a limit of | d n | < 5 × 10 −20 e ⋅cm . [ 5 ] [ 6 ] Beams of neutrons were used until 1977 for nEDM experiments. At this point, systematic effects related to the high velocities of the neutrons in the beam became insurmountable. The final limit obtained with a neutron beam amounts to | d n | < 3 × 10 −24 e ⋅cm . [ 7 ] After that, experiments with ultracold neutrons (UCN) took over. It started in 1980 with an experiment at the Leningrad Nuclear Physics Institute [ ru ] (LNPI) obtaining a limit of | d n | < 1.6 × 10 −24 e ⋅cm . [ 8 ] This experiment and especially the experiment starting in 1984 at the Institut Laue-Langevin (ILL) pushed the limit down by another two orders of magnitude yielding the best upper limit in 2006, revised in 2015. During these 70 years of experiments, six orders of magnitude have been covered, thereby putting stringent constraints on theoretical models. [ 9 ] The latest best limit of | d n | < 1.8 × 10 −26 e ⋅cm has been published 2020 by the nEDM collaboration at Paul Scherrer Institute (PSI). [ 1 ] Currently, there are at least six experiments aiming at improving the current limit (or measuring for the first time) on the neutron EDM with a sensitivity down to 10 −28 e ⋅cm over the next 10 years, thereby covering the range of prediction coming from supersymmetric extensions to the Standard Model. The Cryogenic neutron EDM experiment or CryoEDM was under development at the Institut Laue-Langevin but its activities were stopped in 2013/2014. [ 22 ]
https://en.wikipedia.org/wiki/Neutron_electric_dipole_moment
Neutron embrittlement , sometimes more broadly radiation embrittlement , is the embrittlement of various materials due to the action of neutrons . This is primarily seen in nuclear reactors , where the release of high-energy neutrons causes the long-term degradation of the reactor materials. The embrittlement is caused by the microscopic movement of atoms that are hit by the neutrons; this same action also gives rise to neutron-induced swelling causing materials to grow in size, and the Wigner effect causing energy buildup in certain materials that can lead to sudden releases of energy . Neutron embrittlement mechanisms include: Neutron irradiation embrittlement limits the service life of reactor-pressure vessels (RPV) in nuclear power plants due to the degradation of reactor materials. In order to perform at high efficiency and safely contain coolant water at temperatures around 290°C and pressures of ~7 MPa (for boiling water reactors ) to 14 MPa (for pressurized water reactors ), the RPV must be heavy-section steel. Due to regulations, RPV failure probabilities must be very low. To achieve sufficient safety, the design of the reactor assumes large cracks and extreme loading conditions. Under such conditions, a probable failure mode is rapid, catastrophic fracture if the vessel steel is brittle. Tough RPV base metals that are typically used are A302B, A533B plates, or A508 forgings; these are quenched and tempered, low-alloy steels with primarily tempered bainitic microstructures. Over the past few decades, RPV embrittlement has been addressed by the use of tougher steels with lower trace impurity contents, the decrease of neutron flux that the vessel is subject to, and the elimination of beltline welds. However, embrittlement remains an issue for older reactors. [ 2 ] Pressurized water reactors are more susceptible to embrittlement than boiling water reactors. This is due to PWRs sustaining more neutron impacts. To counteract this, many PWRs have a specific core design that reduces the number of neutrons hitting the vessel wall. Moreover, PWR designs must be especially mindful of embrittlement because of pressurized thermal shock, an accident scenario that occurs when cold water enters a pressurized reactor vessel, introducing large thermal stress . This thermal stress may cause fracture if the reactor vessel is sufficiently brittle. [ 3 ] This nuclear physics or atomic physics –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Neutron_embrittlement
Neutron emission is a mode of radioactive decay in which one or more neutrons are ejected from a nucleus . [ 1 ] It occurs in the most neutron-rich/proton-deficient nuclides , and also from excited states of other nuclides as in photoneutron emission and beta-delayed neutron emission. As only a neutron is lost by this process the number of protons remains unchanged, and an atom does not become an atom of a different element, but a different isotope of the same element. [ 2 ] Neutrons are also produced in the spontaneous and induced fission of certain heavy nuclides. As a consequence of the Pauli exclusion principle , nuclei with an excess of protons or neutrons have a higher average energy per nucleon. Nuclei with a sufficient excess of neutrons have a greater energy than the combination of a free neutron and a nucleus with one less neutron, and therefore can decay by neutron emission. Nuclei which can decay by this process are described as lying beyond the neutron drip line . Two examples of isotopes that emit neutrons are beryllium-13 (decaying to beryllium-12 with a mean life 2.7 × 10 −21 s ) and helium-5 ( helium-4 , 7 × 10 −22 s ). [ 3 ] In tables of nuclear decay modes, neutron emission is commonly denoted by the abbreviation n . Some neutron-rich isotopes decay by the emission of two or more neutrons. For example, hydrogen-5 and helium-10 decay by the emission of two neutrons, hydrogen-6 by the emission of 3 or 4 neutrons, and hydrogen-7 by emission of 4 neutrons. Some nuclides can be induced to eject a neutron by gamma radiation . One such nuclide is 9 Be ; its photodisintegration is significant in nuclear astrophysics, pertaining to the abundance of beryllium and the consequences of the instability of 8 Be . This also makes this isotope useful as a neutron source in nuclear reactors. [ 4 ] Another nuclide, 181 Ta , is also known to be readily capable of photodisintegration; this process is thought to be responsible for the creation of 180m Ta , the only primordial nuclear isomer and the rarest primordial nuclide . [ 5 ] Neutron emission usually happens from nuclei that are in an excited state, such as the excited 17 O* produced from the beta decay of 17 N . The neutron emission process itself is controlled by the nuclear force and therefore is extremely fast, sometimes referred to as "nearly instantaneous". This process allows unstable atoms to become more stable. The ejection of the neutron may be as a product of the movement of many nucleons, but it is ultimately mediated by the repulsive action of the nuclear force that exists at extremely short-range distances between nucleons. Most neutron emission outside prompt neutron production associated with fission (either induced or spontaneous), is from neutron-heavy isotopes produced as fission products . These neutrons are sometimes emitted with a delay, giving them the term delayed neutrons , but the actual delay in their production is a delay waiting for the beta decay of fission products to produce the excited-state nuclear precursors that immediately undergo prompt neutron emission. Thus, the delay in neutron emission is not from the neutron-production process, but rather its precursor beta decay, which is controlled by the weak force, and thus requires a far longer time. The beta decay half lives for the precursors to delayed neutron-emitter radioisotopes, are typically fractions of a second to tens of seconds. 
Nevertheless, the delayed neutrons emitted by neutron-rich fission products aid the control of nuclear reactors by making reactivity change far more slowly than it would if it were governed by prompt neutrons alone. About 0.65% of the neutrons in a nuclear chain reaction are released in a delayed way through this mechanism of neutron emission, and it is this fraction that allows a nuclear reactor to be controlled on human reaction time-scales without proceeding to a prompt critical state and a runaway meltdown. Neutron emission that occurs at the instant of fission itself is termed " prompt neutron " production, and it is best known to occur simultaneously with induced nuclear fission. Induced fission happens only when a nucleus is bombarded with neutrons, gamma rays, or other carriers of energy. Many heavy isotopes, most notably californium-252, also emit prompt neutrons among the products of a similar spontaneous radioactive decay process, spontaneous fission. Spontaneous fission happens when a nucleus splits into two (occasionally three) smaller nuclei and generally one or more neutrons.
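A rough numerical illustration of why that 0.65% delayed fraction matters, using textbook point-kinetics approximations rather than anything from the article: for a small reactivity insertion, the power e-folding time computed with the prompt-neutron generation time alone is a fraction of a second, while the one-delayed-group estimate that folds in the slow precursor decays stretches it to roughly a minute. The parameter values below are typical textbook numbers, not data for a particular reactor.

```python
# Toy point-kinetics comparison of reactor periods with and without delayed neutrons.
BETA = 0.0065              # delayed-neutron fraction (~0.65% for U-235 fission)
LAMBDA_EFF = 0.08          # effective precursor decay constant, 1/s (one-group approx.)
PROMPT_LIFETIME = 1.0e-4   # prompt-neutron generation time, s (thermal reactor, assumed)

rho = 0.001                # small positive reactivity, well below prompt critical

period_prompt_only = PROMPT_LIFETIME / rho               # ~0.1 s e-folding time
period_with_delayed = (BETA - rho) / (LAMBDA_EFF * rho)  # ~70 s e-folding time

print(f"prompt-only period ~ {period_prompt_only:.2f} s")
print(f"period including delayed neutrons ~ {period_with_delayed:.0f} s")
```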
https://en.wikipedia.org/wiki/Neutron_emission
The neutron flux is a scalar quantity used in nuclear physics and nuclear reactor physics. It is the total distance travelled by all free neutrons per unit time and volume. [ 1 ] Equivalently, it can be defined as the number of neutrons travelling through a small sphere of radius R in a time interval, divided by a maximal cross section of the sphere (the great disk area, π R 2 ) and by the duration of the time interval. [ 2 ] : 82-83 The dimension of neutron flux is L −2 T −1 and the usual unit is cm −2 s −1 (reciprocal square centimetre times reciprocal second). The neutron fluence is defined as the neutron flux integrated over a certain time period, so its dimension is L −2 and its usual unit is cm −2 (reciprocal square centimetre). An older term used instead of cm −2 was "n.v.t." (neutrons, velocity, time). [ 3 ] Neutron flux in asymptotic giant branch stars and in supernovae is responsible for most of the natural nucleosynthesis producing elements heavier than iron. In stars there is a relatively low neutron flux on the order of 10 5 to 10 11 cm −2 s −1 , resulting in nucleosynthesis by the s-process (slow neutron-capture process). By contrast, after a core-collapse supernova, there is an extremely high neutron flux, on the order of 10 32 cm −2 s −1 , [ 4 ] resulting in nucleosynthesis by the r-process (rapid neutron-capture process). Earth atmospheric neutron flux, apparently from thunderstorms, can reach levels of 3·10 −2 to 9·10 +1 cm −2 s −1 . [ 5 ] [ 6 ] However, recent results [ 7 ] (considered invalid by the original investigators [ 8 ] ) obtained with unshielded scintillation neutron detectors show a decrease in the neutron flux during thunderstorms. Recent research appears to support lightning generating 10 13 –10 15 neutrons per discharge via photonuclear processes. [ 9 ] Artificial neutron flux refers to neutron flux which is man-made, either as a byproduct of weapons or nuclear energy production or for a specific application such as from a research reactor or by spallation. A flow of neutrons is often used to initiate the fission of unstable large nuclei. The additional neutron(s) may cause the nucleus to become unstable, causing it to decay (split) to form more stable products. This effect is essential in fission reactors and nuclear weapons. Within a nuclear fission reactor, the neutron flux is the primary quantity measured to control the reaction inside. The flux shape is the term applied to the density or relative strength of the flux as it moves around the reactor. Typically the strongest neutron flux occurs in the middle of the reactor core, becoming lower toward the edges. The higher the neutron flux, the greater the chance of a nuclear reaction occurring, as there are more neutrons going through an area per unit time. A reactor vessel of a typical nuclear power plant ( PWR ) accumulates, over 40 years (32 full reactor years) of operation, a neutron fluence of approximately 6.5×10 19 cm −2 ( E > 1 MeV ). [ 10 ] Neutron flux causes reactor vessels to suffer neutron embrittlement, and it is a major problem for thermonuclear fusion devices such as ITER and other magnetic-confinement D-T reactors, in which fast (originally 14.06 MeV) neutrons damage equipment, resulting in short equipment lifetimes, high costs, and large volumes of radioactive waste.
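Since fluence is simply flux integrated over time, the vessel fluence quoted above implies an average fast flux at the vessel wall. The quick check below is not from the article; it only divides the quoted fluence by the quoted operating time.

```python
# Average fast flux implied by the quoted RPV fluence of 6.5e19 cm^-2 over
# 32 full-power reactor years (values taken from the text above).
SECONDS_PER_YEAR = 3.156e7

fluence = 6.5e19            # cm^-2, E > 1 MeV
full_power_years = 32
avg_fast_flux = fluence / (full_power_years * SECONDS_PER_YEAR)
print(f"average fast flux at the vessel ~ {avg_fast_flux:.1e} n cm^-2 s^-1")
# ~6e10 n/cm^2/s, well below typical in-core flux levels of a power reactor
```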
https://en.wikipedia.org/wiki/Neutron_flux
Neutron generators are neutron source devices which contain compact linear particle accelerators and that produce neutrons by fusing isotopes of hydrogen together. The fusion reactions take place in these devices by accelerating either deuterium , tritium , or a mixture of these two isotopes into a metal hydride target which also contains deuterium, tritium or a mixture of these isotopes . Fusion of deuterium atoms (D + D) results in the formation of a helium-3 ion and a neutron with a kinetic energy of approximately 2.5 MeV . Fusion of a deuterium and a tritium atom (D + T) results in the formation of a helium-4 ion and a neutron with a kinetic energy of approximately 14.1 MeV. Neutron generators have applications in medicine, security, and materials analysis. [ 1 ] The basic concept was first developed by Ernest Rutherford 's team in the Cavendish Laboratory in the early 1930s. Using a linear accelerator driven by a Cockcroft–Walton generator , Mark Oliphant led an experiment that fired deuterium ions into a deuterium-infused metal foil and noticed that a small number of these particles gave off alpha particles . This was the first demonstration of nuclear fusion, as well as the first discovery of Helium-3 and tritium, created in these reactions. The introduction of new power sources has continually shrunk the size of these machines, from Oliphant's that filled the corner of the lab, to modern machines that are highly portable. Thousands of such small, relatively inexpensive systems have been built since the 1960s. While neutron generators do produce fusion reactions, the number of accelerated ions that cause these reactions is very low. It can be easily demonstrated that the energy released by these reactions is many times lower than the energy needed to accelerate the ions, so there is no possibility of these machines being used to produce net fusion power . A related concept, colliding beam fusion , attempts to address this issue by using two accelerators firing toward one another. Small neutron generators using the deuterium (D, hydrogen-2, 2 H) tritium (T, hydrogen-3, 3 H) fusion reactions are the most common accelerator based (as opposed to radioactive isotopes) neutron sources. In these systems, neutrons are produced by creating ions of deuterium, tritium, or deuterium and tritium and accelerating these into a hydride target loaded with deuterium, or deuterium and tritium. The DT reaction is used more than the DD reaction because the yield of the DT reaction is 50–100 times higher than that of the DD reaction. D + T → n + 4 He E n = 14.1 MeV D + D → n + 3 He E n = 2.5 MeV Neutrons produced by DD and DT reactions are emitted somewhat anisotropically from the target, slightly biased in the forward (in the axis of the ion beam) direction. The anisotropy of the neutron emission from DD and DT reactions arises from the fact the reactions are isotropic in the center of momentum coordinate system (COM) but this isotropy is lost in the transformation from the COM coordinate system to the laboratory frame of reference . In both frames of reference, the He nuclei recoil in the opposite direction to the emitted neutron consistent with the law of conservation of momentum . The gas pressure in the ion source region of the neutron tubes generally ranges between 0.1 and 0.01 mm Hg . 
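The neutron energies quoted for the DD and DT reactions follow directly from two-body kinematics: at low bombarding energy the centre of mass is nearly at rest, so momentum conservation gives the neutron the fraction m_X / (m_n + m_X) of the reaction Q-value, where m_X is the mass of the recoiling helium nucleus. The sketch below is an illustrative check, not part of the article.

```python
# Two-body kinematics check of the quoted DD and DT neutron energies.
def neutron_energy_from_q(q_mev, m_neutron_u, m_partner_u):
    """Neutron kinetic energy when the centre of mass is (approximately) at rest."""
    return q_mev * m_partner_u / (m_neutron_u + m_partner_u)

# D + T -> n + 4He, Q ~ 17.6 MeV; D + D -> n + 3He, Q ~ 3.27 MeV
print(neutron_energy_from_q(17.6, 1.0087, 4.0026))   # ~14.1 MeV
print(neutron_energy_from_q(3.27, 1.0087, 3.0160))   # ~2.45 MeV
```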
The mean free path of electrons must be shorter than the discharge space to achieve ionization (lower limit for pressure) while the pressure must be kept low enough to avoid formation of discharges at the high extraction voltages applied between the electrodes. The pressure in the accelerating region, however, has to be much lower, as the mean free path of electrons must be longer to prevent formation of a discharge between the high voltage electrodes. [ 2 ] The ion accelerator usually consists of several electrodes with cylindrical symmetry, acting as an einzel lens . The ion beam can thus be focused to a small point at the target. The accelerators typically require power supplies of 100–500 kV. They usually have several stages, with voltage between the stages not exceeding 200 kV to prevent field emission . [ 2 ] In comparison with radionuclide neutron sources, neutron tubes can produce much higher neutron fluxes and consistent (monochromatic) neutron energy spectra can be obtained. The neutron production rate can also be controlled. [ 2 ] The central part of a neutron generator is the particle accelerator itself, sometimes called a neutron tube. Neutron tubes have several components including an ion source, ion optic elements, and a beam target; all of these are enclosed within a vacuum-tight enclosure. High voltage insulation between the ion optical elements of the tube is provided by glass and/or ceramic insulators. The neutron tube is, in turn, enclosed in a metal housing, the accelerator head, which is filled with a dielectric medium to insulate the high voltage elements of the tube from the operating area. The accelerator and ion source high voltages are provided by external power supplies. The control console allows the operator to adjust the operating parameters of the neutron tube. The power supplies and control equipment are normally located within 3–10 metres (10–30 ft) of the accelerator head in laboratory instruments, but may be several kilometers away in well logging instruments. In comparison with their predecessors, sealed neutron tubes do not require vacuum pumps and gas sources for operation. They are therefore more mobile and compact, while also durable and reliable. For example, sealed neutron tubes have replaced radioactive modulated neutron initiators , in supplying a pulse of neutrons to the imploding core of modern nuclear weapons . Examples of neutron tube ideas date as far back as the 1930s, pre-nuclear weapons era, by German scientists filing a 1938 German patent (March 1938, patent #261,156) and obtaining a United States Patent (July 1941, USP #2,251,190); examples of present state of the art are given by developments such as the Neutristor , [ 3 ] a mostly solid state device, resembling a computer chip, invented at Sandia National Laboratories in Albuquerque NM. [ 4 ] Typical sealed designs are used in a pulsed mode [ 5 ] and can be operated at different output levels, depending on the life from the ion source and loaded targets. [ 6 ] A good ion source should provide a strong ion beam without consuming much of the gas. For hydrogen isotopes, production of atomic ions is favored over molecular ions, as atomic ions have higher neutron yield on collision. The ions generated in the ion source are then extracted by an electric field into the accelerator region, and accelerated towards the target. The gas consumption is chiefly caused by the pressure difference between the ion generating and ion accelerating spaces that has to be maintained. 
Ion currents of 10 mA at gas consumptions of 40 cm 3 /hour are achievable. [ 2 ] For a sealed neutron tube, the ideal ion source should use low gas pressure, give high ion current with large proportion of atomic ions, have low gas clean-up, use low power, have high reliability and high lifetime, its construction has to be simple and robust and its maintenance requirements have to be low. [ 2 ] Gas can be efficiently stored in a replenisher, an electrically heated coil of zirconium wire. Its temperature determines the rate of absorption/desorption of hydrogen by the metal, which regulates the pressure in the enclosure. The Penning source is a low gas pressure, cold cathode ion source which utilizes crossed electric and magnetic fields. The ion source anode is at a positive potential, either dc or pulsed, with respect to the source cathode. The ion source voltage is normally between 2 and 7 kilovolts. A magnetic field, oriented parallel to the source axis, is produced by a permanent magnet . A plasma is formed along the axis of the anode which traps electrons which, in turn, ionize gas in the source. The ions are extracted through the exit cathode. Under normal operation, the ion species produced by the Penning source are over 90% molecular ions. This disadvantage is however compensated for by the other advantages of the system. One of the cathodes is a cup made of soft iron , enclosing most of the discharge space. The bottom of the cup has a hole through which most of the generated ions are ejected by the magnetic field into the acceleration space. The soft iron shields the acceleration space from the magnetic field, to prevent a breakdown. [ 2 ] Ions emerging from the exit cathode are accelerated through the potential difference between the exit cathode and the accelerator electrode. The schematic indicates that the exit cathode is at ground potential and the target is at high (negative) potential. This is the case in many sealed tube neutron generators. However, in cases when it is desired to deliver the maximum flux to a sample, it is desirable to operate the neutron tube with the target grounded and the source floating at high (positive) potential. The accelerator voltage is normally between 80 and 180 kilovolts. The accelerating electrode has the shape of a long hollow cylinder. The ion beam has a slightly diverging angle (about 0.1 radian ). The electrode shape and distance from target can be chosen so the entire target surface is bombarded with ions. Acceleration voltages of up to 200 kV are achievable. The ions pass through the accelerating electrode and strike the target. When ions strike the target, 2–3 electrons per ion are produced by secondary emission. In order to prevent these secondary electrons from being accelerated back into the ion source, the accelerator electrode is biased negative with respect to the target. This voltage, called the suppressor voltage, must be at least 500 volts and may be as high as a few kilovolts. Loss of suppressor voltage will result in damage, possibly catastrophic, to the neutron tube. Some neutron tubes incorporate an intermediate electrode, called the focus or extractor electrode, to control the size of the beam spot on the target. The gas pressure in the source is regulated by heating or cooling the gas reservoir element. Ions can be created by electrons formed in high-frequency electromagnetic field. The discharge is formed in a tube located between electrodes, or inside a coil . Over 90% proportion of atomic ions is achievable. 
[ 2 ] The targets used in neutron generators are thin films of metal such as titanium , scandium , or zirconium which are deposited onto a silver , copper or molybdenum substrate. Titanium, scandium, and zirconium form stable chemical compounds called metal hydrides when combined with hydrogen or its isotopes. These metal hydrides are made up of two hydrogen ( deuterium or tritium ) atoms per metal atom and allow the target to have extremely high densities of hydrogen. This is important to maximize the neutron yield of the neutron tube. The gas reservoir element also uses metal hydrides, e.g. uranium hydride , as the active material. Titanium is preferred to zirconium as it can withstand higher temperatures (200 °C), and gives higher neutron yield as it captures deuterons better than zirconium. The maximum temperature allowed for the target, above which hydrogen isotopes undergo desorption and escape the material, limits the ion current per surface unit of the target; slightly divergent beams are therefore used. A 1 microampere ion beam accelerated at 200 kV to a titanium-tritium target can generate up to 10 8 neutrons per second. The neutron yield is mostly determined by the accelerating voltage and the ion current level. [ 2 ] An example of a tritium target in use is a 0.2 mm thick silver disc with a 1 micrometer layer of titanium deposited on its surface; the titanium is then saturated with tritium. [ 2 ] Metals with sufficiently low hydrogen diffusion can be turned into deuterium targets by bombardment of deuterons until the metal is saturated. Gold targets under such condition show four times higher efficiency than titanium. Even better results can be achieved with targets made of a thin film of a high-absorption high-diffusivity metal (e.g. titanium) on a substrate with low hydrogen diffusivity (e.g. silver), as the hydrogen is then concentrated on the top layer and can not diffuse away into the bulk of the material. Using a deuterium-tritium gas mixture, self-replenishing D-T targets can be made. The neutron yield of such targets is lower than of tritium-saturated targets in deuteron beams, but their advantage is much longer lifetime and constant level of neutron production. Self-replenishing targets are also tolerant to high-temperature bake-out of the tubes, as their saturation with hydrogen isotopes is performed after the bakeout and tube sealing. [ 2 ] One approach for generating the high voltage fields needed to accelerate ions in a neutron tube is to use a pyroelectric crystal . In April 2005 researchers at UCLA demonstrated the use of a thermally cycled pyroelectric crystal to generate high electric fields in a neutron generator application. In February 2006 researchers at Rensselaer Polytechnic Institute demonstrated the use of two oppositely poled crystals for this application. Using these low-tech power supplies it is possible to generate a sufficiently high electric field gradient across an accelerating gap to accelerate deuterium ions into a deuterated target to produce the D + D fusion reaction. These devices are similar in their operating principle to conventional sealed-tube neutron generators which typically use Cockcroft–Walton type high voltage power supplies. The novelty of this approach is in the simplicity of the high voltage source. 
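The quoted figure of roughly 10^8 neutrons per second from a 1 microampere beam can be turned into a neutron yield per incident ion. The back-of-the-envelope check below uses only the numbers stated in the text plus the elementary charge; it is illustrative, not from the article.

```python
# Neutrons produced per incident deuteron for the quoted 1 uA, ~1e8 n/s example.
E_CHARGE = 1.602176634e-19   # C

beam_current = 1.0e-6        # A
neutron_rate = 1.0e8         # n/s, figure quoted in the text
ions_per_second = beam_current / E_CHARGE
print(f"ions/s ~ {ions_per_second:.1e}")
print(f"neutrons per incident ion ~ {neutron_rate / ions_per_second:.1e}")
# roughly 1.6e-5 neutrons per deuteron: most ions simply deposit heat in the target
```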
Unfortunately, the relatively low accelerating current that pyroelectric crystals can generate, together with the modest pulsing frequencies that can be achieved (a few cycles per minute), limits their near-term application in comparison with today's commercial products (see below). Also see pyroelectric fusion. [ 7 ] In addition to the conventional neutron generator design described above, several other approaches exist to use electrical systems for producing neutrons. Another type of innovative neutron generator is the inertial electrostatic confinement fusion device. This neutron generator avoids using a solid target, which would be sputter-eroded and cause metallization of insulating surfaces. Depletion of the reactant gas within a solid target is also avoided, so far greater operational lifetime is achieved. Originally called a fusor, it was invented by Philo Farnsworth. Neutron generators find application in the semiconductor production industry. They also have use cases in the enrichment of depleted uranium, the acceleration of breeder reactors, and the activation and excitation of experimental thorium reactors. In materials analysis, neutron activation analysis is used to determine the concentration of different elements in mixed materials such as minerals or ores.
https://en.wikipedia.org/wiki/Neutron_generator
Neutron irradiation damage refers to material changes caused by high neutron flux , typically in a nuclear reactor after many years. Graphite may shrink and then swell. [ 1 ] This nuclear physics or atomic physics –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Neutron_irradiation_damage
A neutron moisture meter is a moisture meter utilizing neutron scattering . The meters are most frequently used to measure the water content in soil or rock. The technique is non-destructive, and is sensitive to moisture in the bulk of the target material, not just at the surface. Water, due to its hydrogen content, is an effective neutron moderator , slowing high-energy neutrons. With a source of high-energy neutrons and a detector sensitive to low-energy neutrons ( thermal neutrons ), the detection rate will be governed by the water content of the soil between the source and the detector. The neutron source typically contains a small amount of a radionuclide . Sources may emit neutrons during spontaneous fission , as with californium ; alternatively, an alpha emitter may be mixed with a light element for a nuclear reaction yielding excess neutrons, as with americium in a beryllium matrix.
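In practice, gauges of this kind are usually read out as a count ratio (counts relative to a standard count taken in the instrument's shield) and mapped to volumetric water content with an empirically fitted calibration. The sketch below is purely hypothetical: the linear form and the coefficients a and b are placeholders standing in for a site-specific calibration, not values from the article or from any instrument manual.

```python
# Hypothetical calibration sketch for a neutron moisture gauge.
def volumetric_water_content(counts, standard_count, a=0.28, b=-0.02):
    """Map a thermal-neutron count ratio to water content; a and b are placeholders."""
    count_ratio = counts / standard_count
    return a * count_ratio + b

print(volumetric_water_content(counts=12_500, standard_count=25_000))  # ~0.12 m^3/m^3
```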
https://en.wikipedia.org/wiki/Neutron_moisture_gauge
The neutron number (symbol N ) is the number of neutrons in a nuclide . Atomic number (proton number) plus neutron number equals mass number : Z + N = A . The difference between the neutron number and the atomic number is known as the neutron excess: D = N − Z = A − 2 Z . Neutron number is not written explicitly in nuclide symbol notation, but can be inferred as it is the difference between the two left-hand numbers (atomic number and mass). Nuclides that have the same neutron number but different proton numbers are called isotones . This word was formed by replacing the p in isotope with n for neutron. Nuclides that have the same mass number are called isobars . Nuclides that have the same neutron excess are called isodiaphers . [ 1 ] Chemical properties are primarily determined by proton number, which determines which chemical element the nuclide is a member of; neutron number has only a slight influence . Neutron number is primarily of interest for nuclear properties. For example, actinides with odd neutron number are usually fissile ( fissionable with slow neutrons ) while actinides with even neutron number are usually not fissile (but are fissionable with fast neutrons ). Only 58 stable nuclides have an odd neutron number, compared to 194 with an even neutron number. No odd-neutron-number isotope is the most naturally abundant isotope in its element, except for beryllium-9 (which is the only stable beryllium isotope), nitrogen-14 , and platinum -195. No stable nuclides have a neutron number of 19, 21, 35, 39, 45, 61, 89, 115, 123, or ≥ 127. There are 6 stable nuclides and one radioactive primordial nuclide with neutron number 82 (82 is the neutron number with the most stable nuclides, since it is a magic number ): barium-138 , lanthanum-139 , cerium-140 , praseodymium-141 , neodymium-142 , and samarium-144 , as well as the radioactive primordial nuclide xenon-136 , which decays by a very slow double beta process. Except 20, 50 and 82 (all these three numbers are magic numbers), all other neutron numbers have at most 4 stable nuclides (in the case of 20, there are 5 stable nuclides 36 S, 37 Cl, 38 Ar, 39 K, and 40 Ca, and in the case for 50, there are 5 stable nuclides: 86 Kr, 88 Sr, 89 Y, 90 Zr, and 92 Mo, and 1 radioactive primordial nuclide, 87 Rb). Most odd neutron numbers have at most one stable nuclide (exceptions are 1 ( 2 H and 3 He), 5 ( 9 Be and 10 B), 7 ( 13 C and 14 N), 55 ( 97 Mo and 99 Ru) and 107 ( 179 Hf and 180m Ta)). However, some even neutron numbers also have only one stable nuclide; these numbers are 0 ( 1 H), 2 ( 4 He), 4 ( 7 Li), 84 ( 142 Ce), 86 ( 146 Nd) and 126 ( 208 Pb), the case of 84 is special, since 142 Ce is theoretically unstable to double beta decay , and the nuclides with 84 neutrons which are theoretically stable to both beta decay and double beta decay are 144 Nd and 146 Sm, but both nuclides are observed to alpha decay . [ 2 ] (In theory, no stable nuclides have neutron number 19, 21, 35, 39, 45, 61, 71, 83–91, 95, 96, and ≥ 99) Besides, no nuclides with neutron number 19, 21, 35, 39, 45, 61, 71, 89, 115, 123, 147, ... are stable to beta decay (see Beta-decay stable isobars ). Only two stable nuclides have fewer neutrons than protons: hydrogen-1 and helium-3 . Hydrogen-1 has the smallest neutron number, 0. This nuclear chemistry –related article is a stub . You can help Wikipedia by expanding it .
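The bookkeeping defined at the start of this passage, N = A − Z and neutron excess D = N − Z = A − 2Z, is easy to check numerically; the short sketch below (not part of the article) applies it to a few nuclides mentioned in the text.

```python
# N = A - Z and neutron excess D = A - 2Z for a few nuclides from the text.
def neutron_number(z, a):
    return a - z

def neutron_excess(z, a):
    return a - 2 * z

for name, z, a in [("helium-3", 2, 3), ("beryllium-9", 4, 9),
                   ("xenon-136", 54, 136), ("lead-208", 82, 208)]:
    print(name, "N =", neutron_number(z, a), "excess =", neutron_excess(z, a))
# helium-3 has N < Z, xenon-136 has N = 82 and lead-208 has N = 126,
# consistent with the statements above
```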
https://en.wikipedia.org/wiki/Neutron_number
A neutron research facility is most commonly a big laboratory operating a large-scale neutron source that provides thermal neutrons to a suite of research instruments. The neutron source usually is a research reactor or a spallation source. In some cases, a smaller facility will provide high energy neutrons (e.g. 2.5 MeV or 14 MeV fusion neutrons ) using existing neutron generator technologies. The following list is intended to be exhaustive and to cover active facilities as well as those that are shut down.
https://en.wikipedia.org/wiki/Neutron_research_facility
Neutron scattering, the irregular dispersal of free neutrons by matter, can refer either to the naturally occurring physical process itself or to the man-made experimental techniques that use the natural process for investigating materials. The natural/physical phenomenon is of elemental importance in nuclear engineering and the nuclear sciences. Regarding the experimental technique, understanding and manipulating neutron scattering is fundamental to the applications used in crystallography, physics, physical chemistry, biophysics, and materials research. Neutron scattering is practiced at research reactors and spallation neutron sources that provide neutron radiation of varying intensities. Neutron diffraction (elastic scattering) techniques are used for analyzing structures, whereas inelastic neutron scattering is used in studying atomic vibrations and other excitations. "Fast neutrons" (see neutron temperature) have a kinetic energy above 1 MeV. They can be scattered by condensed matter (nuclei having kinetic energies far below 1 eV) as a valid experimental approximation of an elastic collision with a particle at rest. With each collision, the fast neutron transfers a significant part of its kinetic energy to the scattering nucleus, the more so the lighter the nucleus. And with each collision, the "fast" neutron is slowed until it reaches thermal equilibrium with the material in which it is scattered. Neutron moderators are used to produce thermal neutrons, which have kinetic energies below 1 eV (T < 500 K). [ 1 ] Thermal neutrons are used to maintain a nuclear chain reaction in a nuclear reactor, and as a research tool in neutron scattering experiments and other applications of neutron science (see below). The remainder of this article concentrates on the scattering of thermal neutrons. Because neutrons are electrically neutral, they penetrate more deeply into matter than electrically charged particles of comparable kinetic energy, and thus are valuable as probes of bulk properties. Neutrons interact with atomic nuclei and with magnetic fields from unpaired electrons, causing pronounced interference and energy transfer effects in neutron scattering experiments. Unlike an x-ray photon with a similar wavelength, which interacts with the electron cloud surrounding the nucleus, neutrons interact primarily with the nucleus itself, as described by Fermi's pseudopotential. Neutron scattering and absorption cross sections vary widely from isotope to isotope. Neutron scattering can be incoherent or coherent, also depending on isotope. Among all isotopes, hydrogen has the highest scattering cross section. Important elements like carbon and oxygen are quite visible in neutron scattering; this is in marked contrast to X-ray scattering, where cross sections systematically increase with atomic number. Thus neutrons can be used to analyze materials with low atomic numbers, including proteins and surfactants. This can be done at synchrotron sources, but very high intensities are needed, which may cause the structures to change. The nucleus provides a very short-range, isotropic potential that varies randomly from isotope to isotope, which makes it possible to tune the (scattering) contrast to suit the experiment. Scattering almost always presents both elastic and inelastic components. The fraction of elastic scattering is determined by the Debye–Waller factor or the Mössbauer–Lamb factor.
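The statement that lighter nuclei take up more of a fast neutron's energy per collision follows from standard elastic-collision kinematics. The sketch below (not from the article) evaluates the usual slowing-down parameter: after one elastic, isotropic collision with a nucleus of mass number A, the neutron energy lies between αE and E, with α = ((A − 1)/(A + 1))².

```python
# Maximum fractional energy loss per elastic collision, 1 - alpha, for a few nuclei.
def alpha(mass_number):
    return ((mass_number - 1) / (mass_number + 1)) ** 2

for label, A in [("hydrogen-1", 1), ("deuterium", 2), ("carbon-12", 12), ("uranium-238", 238)]:
    max_loss = 1.0 - alpha(A)
    print(f"{label:12s} A={A:3d}  max fractional energy loss per collision = {max_loss:.3f}")
# hydrogen can take essentially all of the neutron's energy in a single head-on
# collision; heavy nuclei remove only a small fraction per collision
```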
Depending on the research question, most measurements concentrate on either elastic or inelastic scattering. Achieving a precise velocity, i.e. a precise energy and de Broglie wavelength, of a neutron beam is important. Such single-energy beams are termed 'monochromatic', and monochromaticity is achieved either with a crystal monochromator or with a time-of-flight (TOF) spectrometer. In the time-of-flight technique, neutrons are sent through a sequence of two rotating slits such that only neutrons of a particular velocity are selected. Spallation sources have been developed that can create a rapid pulse of neutrons. The pulse contains neutrons of many different velocities or de Broglie wavelengths, but the separate velocities of the scattered neutrons can be determined afterwards by measuring the time of flight of the neutrons between the sample and the neutron detector. The neutron has a net electric charge of zero, but has a significant magnetic moment, although only about 0.1% of that of the electron. Nevertheless, it is large enough to scatter from local magnetic fields inside condensed matter, providing a weakly interacting and hence penetrating probe of ordered magnetic structures and electron spin fluctuations. [ 2 ] Inelastic neutron scattering is an experimental technique commonly used in condensed matter research to study atomic and molecular motion as well as magnetic and crystal field excitations. [ 3 ] [ 4 ] It distinguishes itself from other neutron scattering techniques by resolving the change in kinetic energy that occurs when the collision between neutrons and the sample is an inelastic one. Results are generally communicated as the dynamic structure factor (also called the inelastic scattering law) S ( Q , ω ), sometimes also as the dynamic susceptibility χ″( Q , ω ), where the scattering vector Q is the difference between the incoming and outgoing wave vectors, and ℏ ω is the energy change experienced by the sample (negative that of the scattered neutron). When results are plotted as a function of ω , they can often be interpreted in the same way as spectra obtained by conventional spectroscopic techniques; in this sense, inelastic neutron scattering can be seen as a special form of spectroscopy. Inelastic scattering experiments normally require a monochromatization of the incident or outgoing beam and an energy analysis of the scattered neutrons. This can be done either through time-of-flight techniques ( neutron time-of-flight scattering ) or through Bragg reflection from single crystals ( neutron triple-axis spectroscopy , neutron backscattering ). Monochromatization is not needed in echo techniques ( neutron spin echo , neutron resonance spin echo ), which use the quantum mechanical phase of the neutrons in addition to their amplitudes. [ citation needed ] The first neutron diffraction experiments were performed in the 1930s. [ 1 ] However, it was not until around 1945, with the advent of nuclear reactors, that high neutron fluxes became possible, leading to the possibility of in-depth structure investigations. The first neutron-scattering instruments were installed in beam tubes at multi-purpose research reactors. In the 1960s, high-flux reactors were built that were optimized for beam-tube experiments.
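The time-of-flight idea mentioned above is just kinematics: a measured flight time over a known path length gives the velocity, and from that the kinetic energy and de Broglie wavelength. The sketch below is illustrative, with an arbitrary 2-metre flight path and 1-millisecond flight time rather than parameters of any real instrument.

```python
# Assign velocity, energy and wavelength to a neutron from its time of flight.
M_N = 1.67492749804e-27   # neutron mass, kg
H = 6.62607015e-34        # Planck constant, J*s
EV = 1.602176634e-19      # J per eV

def tof_to_neutron(flight_path_m, flight_time_s):
    v = flight_path_m / flight_time_s
    energy_ev = 0.5 * M_N * v**2 / EV   # kinetic energy in eV
    wavelength = H / (M_N * v)          # de Broglie wavelength in m
    return v, energy_ev, wavelength

v, e_ev, lam = tof_to_neutron(2.0, 1.0e-3)   # example: 2 m path, 1 ms flight time
print(f"v = {v:.0f} m/s, E = {e_ev*1e3:.1f} meV, lambda = {lam*1e10:.2f} angstrom")
```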
The development culminated in the high-flux reactor of the Institut Laue-Langevin (in operation since 1972) that achieved the highest neutron flux to this date. Besides a few high-flux sources, there were some twenty medium-flux reactor sources at universities and other research institutes. Starting in the 1980s, many of these medium-flux sources were shut down, and research concentrated at a few world-leading high-flux sources. Today, most neutron scattering experiments are performed by research scientists who apply for beamtime at neutron sources through a formal proposal procedure. Because of the low count rates involved in neutron scattering experiments, relatively long periods of beam time (on the order of days) are usually required for usable data sets. Proposals are assessed for feasibility and scientific interest. [ 5 ]
https://en.wikipedia.org/wiki/Neutron_scattering
Neutron spectroscopy is a spectroscopic method of measuring atomic and magnetic motions by measuring the kinetic energy of emitted neutrons . The measured neutrons may be emitted directly (for example, by nuclear reactions ), or they may scatter off cold matter before reaching the detector. Inelastic neutron scattering observes the change in the energy and wavevector of the neutron as it scatters from a sample. [ 1 ] This can be used to probe a wide variety of different physical phenomena such as the motions of atoms (diffusional or hopping), the rotational modes of molecules, sound modes and molecular vibrations , recoil in quantum fluids , magnetic and quantum excitations, or even electronic transitions. [ 2 ] Since its discovery, neutron spectroscopy has become useful in medicine, as it has been applied to radiation protection and radiation therapy . [ 3 ] It is also used in nuclear fusion experiments, where the neutron spectrum can be used to infer the plasma temperature, density, and composition, in addition to the total fusion power. [ 4 ] Neutron spectroscopy is routinely conducted with a wide range of neutron energies, from as low as a few hundredths of an electronvolt [ 5 ] to as high as tens of megaelectronvolts. [ 4 ] Much current research focuses on expanding these capabilities to higher energies. In 2001, US researchers were able to measure neutrons with energies up to 100 gigaelectronvolts. [ 6 ] There are three different types of scattering interactions that allow for the probing of a variety of properties using neutrons: nuclear scattering (coherent scattering), spin-dependent nuclear scattering ( incoherent scattering ), and magnetic dipole interactions between the neutron and the dipolar field of unpaired electrons . In most cases, coherent scattering and incoherent scattering are used to investigate molecular properties. With these scattering interactions, it is possible to probe diffusive motions in liquid water, such as translational and rotational motions, since the energies associated with these motions are on the order of about 1 meV. Neutron spectroscopy can also be used to probe inter- and intramolecular vibrational modes, as the energies associated with such transfers are around 400–500 meV, which is still within the range of energies accessible to this method. [ 7 ] The first type of interaction, nuclear scattering, occurs when neutrons interact with nuclei through the very short-range nuclear force. The wavelength, λ, is on the order of a few angstroms (Å). Because a thermal neutron cannot “see” the internal structure of a nucleus, the scattering is considered to be isotropic . This interaction is thus characterized by a scattering length b, which is on the same order as the size of a nucleus (10⁻¹⁵ m). Therefore, nuclear scattering allows for the probing of density correlations of nucleons in the nucleus. [ 7 ] [ 8 ] The second type of interaction is spin-dependent nuclear scattering, in which the neutron–nucleus interaction depends on the total spin (spin of the neutron, ½, and spin of the nucleus, I) formed during the scattering event. The two possible states thus become I + ½ and I − ½. This spin dependence results in incoherent scattering, which allows for the probing of single-particle motion as well as the study of the ordering of nuclear spins at ultra-low temperatures. [ 7 ] [ 8 ] The third type of interaction is between the magnetic dipole moment of the neutron and the dipolar field from unpaired electrons. 
This allows the total spin of the unpaired electrons and neutron to be probed. The magnetic scattering length from one electron is b_m = γr_0 = 1.348 fm, which is on the same order of magnitude as the nuclear scattering length. Because of the dipole–dipole character of the interaction, the scattering is considered to be anisotropic. [ 7 ]
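Because the neutron wavelengths quoted above set which length scales can be probed, the standard conversion between neutron kinetic energy and de Broglie wavelength, λ = h / sqrt(2 m_n E), is worth having at hand. A small illustrative sketch follows; the sample energies are assumptions chosen only to span the cold-to-thermal range mentioned in the text.

# Convert neutron kinetic energy (meV) to de Broglie wavelength (angstrom).
import math

H = 6.62607015e-34          # Planck constant, J*s
M_N = 1.67492749804e-27     # neutron mass, kg
MEV_TO_J = 1.602176634e-22  # J per meV

def wavelength_angstrom(energy_mev):
    e_joule = energy_mev * MEV_TO_J
    return H / math.sqrt(2.0 * M_N * e_joule) * 1e10

for e in (1.0, 25.0, 100.0):   # roughly cold, thermal, and hot neutrons (illustrative)
    print(f"E = {e:6.1f} meV  ->  lambda = {wavelength_angstrom(e):.2f} angstrom")

For 25 meV this gives about 1.8 angstrom, consistent with the statement that thermal-neutron wavelengths are on the order of a few angstroms.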
https://en.wikipedia.org/wiki/Neutron_spectroscopy
Neutron transport (also known as neutronics ) is the study of the motions and interactions of neutrons with materials. Nuclear scientists and engineers often need to know where neutrons are in an apparatus, in what direction they are going, and how quickly they are moving. It is commonly used to determine the behavior of nuclear reactor cores and experimental or industrial neutron beams . Neutron transport is a type of radiative transport . Neutron transport has roots in the Boltzmann equation , which was used in the 1800s to study the kinetic theory of gases. It did not receive large-scale development until the invention of chain-reacting nuclear reactors in the 1940s. As neutron distributions came under detailed scrutiny, elegant approximations and analytic solutions were found in simple geometries. However, as computational power has increased, numerical approaches to neutron transport have become prevalent. Today, with massively parallel computers, neutron transport is still under very active development in academia and research institutions throughout the world. It remains a computationally challenging problem since it depends on time and the three dimensions of space, and the energy variable spans several orders of magnitude (from fractions of meV to several MeV). Modern solutions use either discrete ordinates or Monte Carlo methods, or even a hybrid of both. The neutron transport equation is a balance statement that conserves neutrons. Each term represents a gain or a loss of a neutron, and the balance, in essence, states that neutrons gained equal neutrons lost. It is formulated as follows. [ 1 ] The transport equation can be applied to a given part of phase space (time t, energy E, location r, and direction of travel Ω̂). The first term represents the time rate of change of neutrons in the system. The second term describes the movement of neutrons into or out of the volume of space of interest. The third term accounts for all neutrons that have a collision in that phase space. The first term on the right-hand side is the production of neutrons in this phase space due to fission, while the second term on the right-hand side is the production of neutrons in this phase space due to delayed neutron precursors (i.e., unstable nuclei which undergo neutron decay). The third term on the right-hand side is in-scattering: neutrons that enter this region of phase space as a result of scattering interactions elsewhere. The fourth term on the right is a generic source. The equation is usually solved to find φ(r, E), since that allows for the calculation of reaction rates, which are of primary interest in shielding and dosimetry studies. Several basic types of neutron transport problems exist, depending on the type of problem being solved. A fixed-source calculation involves imposing a known neutron source on a medium and determining the resulting neutron distribution throughout the problem. This type of problem is particularly useful for shielding calculations, where a designer would like to minimize the neutron dose outside of a shield while using the least amount of shielding material. For instance, a spent nuclear fuel cask requires shielding calculations to determine how much concrete and steel is needed to safely protect the truck driver who is shipping it. Fission is the process through which a nucleus splits into (typically two) smaller nuclei. 
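The displayed equation did not survive extraction; as a sketch, the term-by-term balance described above corresponds to the standard textbook form of the time-dependent transport equation (the symbol choices below are the usual ones and are an assumption here, not necessarily those of the cited source):

\frac{1}{v}\frac{\partial \psi}{\partial t}
+ \hat{\Omega}\cdot\nabla\psi
+ \Sigma_t(\mathbf{r},E,t)\,\psi(\mathbf{r},E,\hat{\Omega},t)
= \frac{\chi_p(E)}{4\pi}\int_0^{\infty}\nu_p(E')\,\Sigma_f(\mathbf{r},E',t)\,\phi(\mathbf{r},E',t)\,\mathrm{d}E'
+ \sum_i \frac{\chi_{d,i}(E)}{4\pi}\,\lambda_i\,C_i(\mathbf{r},t)
+ \int_{4\pi}\int_0^{\infty}\Sigma_s(\mathbf{r},E'\to E,\hat{\Omega}'\to\hat{\Omega},t)\,\psi(\mathbf{r},E',\hat{\Omega}',t)\,\mathrm{d}E'\,\mathrm{d}\Omega'
+ s(\mathbf{r},E,\hat{\Omega},t)

Here ψ is the angular neutron flux, φ the scalar flux, Σ_t, Σ_f and Σ_s the total, fission and scattering macroscopic cross sections, χ_p and χ_{d,i} the prompt and delayed fission spectra, λ_i and C_i the decay constants and concentrations of the delayed-neutron precursors, and s the generic source.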
If fission is occurring, it is often of interest to know the asymptotic behavior of the system. A reactor is called “critical” if the chain reaction is self-sustaining and time-independent. If the system is not in equilibrium the asymptotic neutron distribution, or the fundamental mode, will grow or decay exponentially over time. Criticality calculations are used to analyze steady-state multiplying media (multiplying media can undergo fission), such as a critical nuclear reactor. The loss terms (absorption, out-scattering, and leakage) and the source terms (in-scatter and fission) are proportional to the neutron flux, contrasting with fixed-source problems where the source is independent of the flux. In these calculations, the presumption of time invariance requires that neutron production exactly equals neutron loss. Since this criticality can only be achieved by very fine manipulations of the geometry (typically via control rods in a reactor), it is unlikely that the modeled geometry will be truly critical. To allow some flexibility in the way models are set up, these problems are formulated as eigenvalue problems, where one parameter is artificially modified until criticality is reached. The most common formulations are the time-absorption and the multiplication eigenvalues, also known as the alpha and k eigenvalues. The alpha and k are the tunable quantities. K-eigenvalue problems are the most common in nuclear reactor analysis. The number of neutrons produced per fission is multiplicatively modified by the dominant eigenvalue. The resulting value of this eigenvalue reflects the time dependence of the neutron density in a multiplying medium. In the case of a nuclear reactor , neutron flux and power density are proportional, hence during reactor start-up k eff > 1, during reactor operation k eff = 1 and k eff < 1 at reactor shutdown. Both fixed-source and criticality calculations can be solved using deterministic methods or stochastic methods . In deterministic methods the transport equation (or an approximation of it, such as diffusion theory ) is solved as a differential equation. In stochastic methods such as Monte Carlo discrete particle histories are tracked and averaged in a random walk directed by measured interaction probabilities. Deterministic methods usually involve multi-group approaches while Monte Carlo can work with multi-group and continuous energy cross-section libraries. Multi-group calculations are usually iterative, because the group constants are calculated using flux-energy profiles, which are determined as the result of the neutron transport calculation. To numerically solve the transport equation using algebraic equations on a computer, the spatial, angular, energy, and time variables must be discretized .
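As an illustration of the k-eigenvalue formulation described above, the sketch below runs a power iteration on a two-group, infinite-medium balance, dividing the fission source by the current estimate of k until production balances loss. The cross-section numbers are invented, purely illustrative values and do not represent any real reactor.

# Power iteration for the k-eigenvalue of a two-group, infinite-medium problem.
import numpy as np

sig_a    = np.array([0.010, 0.080])   # absorption cross sections, fast/thermal (1/cm, invented)
sig_s12  = 0.020                      # downscatter fast -> thermal (1/cm, invented)
nu_sig_f = np.array([0.005, 0.135])   # nu * fission cross sections (1/cm, invented)

# Loss operator M (absorption + removal by downscatter) and fission production operator F.
M = np.array([[sig_a[0] + sig_s12, 0.0],
              [-sig_s12,           sig_a[1]]])
F = np.array([[nu_sig_f[0], nu_sig_f[1]],
              [0.0,         0.0]])           # all fission neutrons are born in the fast group

phi, k = np.ones(2), 1.0
for _ in range(200):
    source  = F @ phi                          # fission production with the current flux
    phi_new = np.linalg.solve(M, source / k)   # solve loss = production / k
    k_new   = k * (F @ phi_new).sum() / source.sum()
    converged = abs(k_new - k) < 1e-10
    phi, k = phi_new, k_new
    if converged:
        break

print(f"k_inf = {k:.5f}, fast/thermal flux ratio = {phi[0] / phi[1]:.2f}")

For these made-up numbers the iteration converges to k_inf of about 1.29, illustrating how the eigenvalue reflects whether the modeled medium would multiply neutrons.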
https://en.wikipedia.org/wiki/Neutron_transport
Neutronium (or neutrium , [ 1 ] neutrite, [ 2 ] or element zero ) is a hypothetical substance made purely of neutrons . The word was coined by scientist Andreas von Antropoff in 1926 (before the 1932 discovery of the neutron ) for the hypothetical "element of atomic number zero" (with no protons in its nucleus) that he placed at the head of the periodic table (denoted by -). [ 3 ] [ 4 ] However, the meaning of the term has changed over time , and from the last half of the 20th century onward it has been also used to refer to extremely dense substances resembling the neutron-degenerate matter theorized to exist in the cores of neutron stars . Neutronium is used in popular physics literature [ 1 ] [ 2 ] to refer to the material present in the cores of neutron stars (stars which are too massive to be supported by electron degeneracy pressure and which collapse into a denser phase of matter). In scientific literature the term "neutron-degenerate matter" [ 5 ] or simply neutron matter is used for this material. [ 6 ] The term "neutronium" was coined in 1926 by Andreas von Antropoff for a conjectured form of matter made up of neutrons with no protons or electrons , which he placed as the chemical element of atomic number zero at the head of his new version of the periodic table . [ 3 ] It was subsequently placed in the middle of several spiral representations of the periodic system for classifying the chemical elements, such as those of Charles Janet (1928), Edgar Emerson (1944), [ 7 ] [ 8 ] and John D. Clark (1950). The term is not used in the scientific literature either for a condensed form of matter, or as an element, and theoretical analysis expects no bound forms of neutrons without protons. [ 9 ] The dineutron, containing two neutrons, is not a stable bound particle, but an extremely short-lived resonance state produced by nuclear reactions in the decay of beryllium-16. Evidence reported in 2012 for the resonance [ 10 ] [ 11 ] was disputed, [ 12 ] but new work reportedly clears up the issues. [ 13 ] The dineutron hypothesis had been used in theoretical studies of the structure of exotic nuclei . For example 11 Li is modeled as a dineutron bound to a 9 Li core. [ 14 ] [ 15 ] A system made up of only two neutrons is not bound, though the attraction between them is very nearly enough to make them so. [ 16 ] This has some consequences on nucleosynthesis and the abundance of the chemical elements . [ 14 ] [ 17 ] A trineutron state consisting of three bound neutrons has not been detected, and is not expected to be bound. [ 18 ] A tetraneutron is a hypothetical particle consisting of four bound neutrons. Reports of its existence have not been replicated. [ 19 ] [ 20 ] Calculations indicate that the hypothetical pentaneutron state, consisting of a cluster of five neutrons, would not be bound. [ 21 ]
https://en.wikipedia.org/wiki/Neutronium
The neutron–proton ratio ( N/Z ratio or nuclear ratio ) of an atomic nucleus is the ratio of its number of neutrons to its number of protons . Among stable nuclei and naturally occurring nuclei, this ratio generally increases with increasing atomic number. [ 1 ] This is because electrical repulsive forces between protons scale with distance differently than strong nuclear force attractions. In particular, the strong nuclear force acts only between nearby nucleons, while electrical repulsion acts between all pairs of protons; in large nuclei most pairs of protons are too far apart for the strong force to bind them but still repel each other electrically, and thus the proton density in stable larger nuclei must be lower than in stable smaller nuclei, where more pairs of protons have appreciable short-range nuclear force attractions. For many elements with atomic number Z small enough to occupy only the first three nuclear shells , that is up to that of calcium ( Z = 20), there exists a stable isotope with an N / Z ratio of one. The exceptions are beryllium ( N / Z = 1.25) and every element with odd atomic number between 9 and 19 inclusive (though in those cases N = Z + 1 always allows for stability). Hydrogen-1 ( N / Z ratio = 0) and helium-3 ( N / Z ratio = 0.5) are the only stable isotopes with a neutron–proton ratio under one. Uranium-238 has the highest N / Z ratio of any primordial nuclide at 1.587, [ 2 ] while mercury-204 has the highest N / Z ratio of any known stable isotope at 1.55. Radioactive decay generally proceeds so as to change the N / Z ratio toward greater stability. If the N / Z ratio is greater than 1, alpha decay increases the N / Z ratio, and hence provides a common pathway towards stability for decays involving large nuclei with too few neutrons. Positron emission and electron capture also increase the ratio, while beta decay decreases the ratio. Nuclear waste exists mainly because nuclear fuel has a higher stable N / Z ratio than its fission products . For stable nuclei, the neutron–proton ratio is such that the binding energy is at or close to a local maximum. From the liquid drop model, this binding energy is approximated by the empirical Bethe–Weizsäcker formula. Given a value of A and ignoring the contributions of nucleon spin pairing (i.e. ignoring the ±δ(A, Z) term), the binding energy is a quadratic expression in Z that is maximized when the neutron–proton ratio is N/Z ≈ 1 + (a_C / 2a_A) A^(2/3).
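As a quick numerical check of the expression above, the short sketch below evaluates N/Z ≈ 1 + (a_C / 2a_A) A^(2/3) using one commonly quoted set of semi-empirical mass formula coefficients (a_C ≈ 0.711 MeV, a_A ≈ 23.7 MeV); coefficient values vary between fits and are an assumption here, not taken from the text.

# Optimal neutron-proton ratio from the liquid-drop (Bethe-Weizsacker) model.
A_C = 0.711   # Coulomb coefficient, MeV (one common fit; illustrative)
A_A = 23.7    # asymmetry coefficient, MeV (same caveat)

def optimal_nz_ratio(mass_number):
    """Most stable N/Z predicted by maximizing the liquid-drop binding energy at fixed A."""
    return 1.0 + (A_C / (2.0 * A_A)) * mass_number ** (2.0 / 3.0)

for a in (16, 56, 120, 208, 238):
    print(f"A = {a:3d}: predicted N/Z = {optimal_nz_ratio(a):.2f}")

For A = 238 this gives roughly 1.58, close to the uranium-238 value of 1.587 quoted above, and for A = 208 it gives about 1.53, close to lead-208.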
https://en.wikipedia.org/wiki/Neutron–proton_ratio
Neutrophils are a type of phagocytic white blood cell and part of innate immunity . More specifically, they form the most abundant type of granulocytes and make up 40% to 70% of all white blood cells in humans. [ 1 ] Their functions vary in different animals. [ 2 ] They are also known as neutrocytes, heterophils or polymorphonuclear leukocytes. They are formed from stem cells in the bone marrow and differentiated into subpopulations of neutrophil-killers and neutrophil-cagers. They are short-lived (between 5 and 135 hours, see § Life span ) and highly mobile, as they can enter parts of tissue where other cells/molecules cannot. Neutrophils may be subdivided into segmented neutrophils and banded neutrophils (or bands ). They form part of the polymorphonuclear cells family (PMNs) together with basophils and eosinophils . [ 3 ] [ 4 ] [ 5 ] The name neutrophil derives from staining characteristics on hematoxylin and eosin ( H&E ) histological or cytological preparations. Whereas basophilic white blood cells stain dark blue and eosinophilic white blood cells stain bright red, neutrophils stain a neutral pink. Normally, neutrophils contain a nucleus divided into 2–5 lobes. [ 6 ] Neutrophils are a type of phagocyte and are normally found in the bloodstream . During the beginning ( acute ) phase of inflammation , particularly as a result of bacterial infection , environmental exposure, [ 7 ] and some cancers, [ 8 ] [ 9 ] neutrophils are one of the first responders of inflammatory cells to migrate toward the site of inflammation. They migrate through the blood vessels and then through interstitial space, following chemical signals such as interleukin-8 (IL-8), C5a , fMLP , leukotriene B4 , and hydrogen peroxide (H 2 O 2 ) [ 10 ] in a process called chemotaxis . They are the predominant cells in pus , accounting for its whitish/yellowish appearance. [ 11 ] Neutrophils are recruited to the site of injury within minutes following trauma and are the hallmark of acute inflammation. [ 12 ] They not only play a central role in combating infection but also contribute to pain in the acute period by releasing pro-inflammatory cytokines and other mediators that sensitize nociceptors , leading to heightened pain perception. [ 13 ] However, due to some pathogens being indigestible, they may not be able to resolve certain infections without the assistance of other types of immune cells. When adhered to a surface, neutrophil granulocytes have an average diameter of 12–15 micrometers (μm) in peripheral blood smears . In suspension, human neutrophils have an average diameter of 8.85 μm. [ 14 ] With the eosinophil and the basophil , they form the class of polymorphonuclear cells , named for the nucleus ' multilobulated shape (as compared to lymphocytes and monocytes , the other types of white cells). The nucleus has a characteristic lobed appearance, the separate lobes connected by chromatin . The nucleolus disappears as the neutrophil matures, which is something that happens in only a few other types of nucleated cells. [ 15 ] : 168 Up to 17% of female human neutrophil nuclei have a drumstick-shaped appendage which contains the inactivated X chromosome . [ 16 ] In the cytoplasm, the Golgi apparatus is small, mitochondria and ribosomes are sparse, and the rough endoplasmic reticulum is absent. [ 15 ] : 170 The cytoplasm also contains about 200 granules, of which a third are azurophilic . [ 15 ] : 170 Neutrophils will show increasing segmentation (many segments of the nucleus) as they mature. 
A normal neutrophil should have 3–5 segments. Hypersegmentation is not normal but occurs in some disorders, most notably vitamin B12 deficiency . This is noted in a manual review of the blood smear and is positive when most or all of the neutrophils have 5 or more segments. Neutrophils are the most abundant white blood cells in the human body (approximately 10¹¹ are produced daily); they account for approximately 50–70% of all white blood cells (leukocytes). The stated normal range for human blood counts varies between laboratories, but a neutrophil count of 2.5–7.5 × 10⁹/L is a standard normal range. People of African and Middle Eastern descent may have lower counts, which are still normal. [ 17 ] A report may divide neutrophils into segmented neutrophils and bands . When circulating in the bloodstream and inactivated, neutrophils are spherical. Once activated, they change shape and become more amorphous or amoeba -like and can extend pseudopods as they hunt for antigens . [ 18 ] The capacity of neutrophils to engulf bacteria is reduced after the ingestion of simple sugars such as glucose, fructose and sucrose, as well as honey and orange juice, whereas the ingestion of starches has no effect. Fasting, on the other hand, strengthens the neutrophils' phagocytic capacity to engulf bacteria. It was concluded that the function, and not the number, of phagocytes engulfing bacteria is altered by the ingestion of sugars. [ 19 ] In 2007, researchers at the Whitehead Institute of Biomedical Research found that, given a selection of sugars on microbial surfaces, neutrophils reacted to some types of sugars preferentially. The neutrophils preferentially engulfed and killed beta-1,6-glucan targets compared to beta-1,3-glucan targets. [ 20 ] [ 21 ] The average lifespan of inactivated human neutrophils in the circulation has been reported by different approaches to be between 5 and 135 hours. [ 22 ] [ 23 ] Upon activation, they marginate (position themselves adjacent to the blood vessel endothelium) and undergo selectin -dependent capture followed by integrin -dependent adhesion in most cases, after which they migrate into tissues, where they survive for 1–2 days. [ 24 ] Neutrophils have also been demonstrated to be released into the blood from a splenic reserve following myocardial infarction . [ 25 ] The distribution ratio of neutrophils in bone marrow, blood and connective tissue is 28:1:25. Neutrophils are much more numerous than the longer-lived monocyte / macrophage phagocytes. A pathogen (disease-causing microorganism or virus) is likely to first encounter a neutrophil. Some experts hypothesize that the short lifetime of neutrophils is an evolutionary adaptation. The short lifetime of neutrophils minimizes propagation of those pathogens that parasitize phagocytes (e.g. Leishmania [ 26 ] ), because the more time such parasites spend outside a host cell , the more likely they are to be destroyed by some component of the body's defenses. Also, because neutrophil antimicrobial products can also damage host tissues , their short life limits damage to the host during inflammation . [ 24 ] After they have phagocytosed pathogens, neutrophils are removed by macrophages; PECAM-1 and phosphatidylserine on the cell surface are involved in this process. Neutrophils undergo a process called chemotaxis via amoeboid movement , which allows them to migrate toward sites of infection or inflammation. 
Cell surface receptors allow neutrophils to detect chemical gradients of molecules such as interleukin-8 (IL-8), interferon gamma (IFN-γ), C3a, C5a , and leukotriene B4 , which these cells use to direct the path of their migration. [ citation needed ] Neutrophils have a variety of specific receptors, including ones for complement , cytokines like interleukins and IFN-γ, chemokines , lectins , and other proteins. They also express receptors to detect and adhere to endothelium and Fc receptors for opsonin . [ 27 ] In leukocytes responding to a chemoattractant , the cellular polarity is regulated by activities of small Ras or Rho guanosine triphosphatases (Ras or Rho GTPases ) and the phosphoinositide 3-kinases ( PI3Ks ). In neutrophils, lipid products of PI3Ks regulate activation of Rac1, hematopoietic Rac2, and RhoG GTPases of the Rho family and are required for cell motility . Ras-GTPases and Rac-GTPases regulate cytoskeletal dynamics and facilitate neutrophils adhesion, migration, and spreading. [ 28 ] [ 29 ] [ 30 ] They accumulate asymmetrically to the plasma membrane at the leading edge of polarized cells. Spatially regulating Rho GTPases and organizing the leading edge of the cell, PI3Ks and their lipid products could play pivotal roles in establishing leukocyte polarity, as compass molecules that tell the cell where to crawl. [ citation needed ] It has been shown in mice that in certain conditions neutrophils have a specific type of migration behaviour referred to as neutrophil swarming during which they migrate in a highly coordinated manner and accumulate and cluster to sites of inflammation. [ 31 ] Being highly motile , neutrophils quickly congregate at a focus of infection , attracted by cytokines expressed by activated endothelium , mast cells , and macrophages . Neutrophils express [ 32 ] and release cytokines, which in turn amplify inflammatory reactions by several other cell types. [ citation needed ] In addition to recruiting and activating other cells of the immune system, neutrophils play a key role in the front-line defense against invading pathogens, and contain a broad range of proteins. [ 33 ] Neutrophils have three methods for directly attacking microorganisms: phagocytosis (ingestion), degranulation (release of soluble anti-microbials), and generation of neutrophil extracellular traps (NETs). [ 34 ] Neutrophils are phagocytes , capable of ingesting microorganisms or particles. For targets to be recognized, they must be coated in opsonins – a process known as antibody opsonization . [ 18 ] They can internalize and kill many microbes , each phagocytic event resulting in the formation of a phagosome into which reactive oxygen species and hydrolytic enzymes are secreted. The consumption of oxygen during the generation of reactive oxygen species has been termed the " respiratory burst ", although unrelated to respiration or energy production. [ citation needed ] The respiratory burst involves the activation of the enzyme NADPH oxidase , which produces large quantities of superoxide , a reactive oxygen species. Superoxide decays spontaneously or is broken down via enzymes known as superoxide dismutases (Cu/ZnSOD and MnSOD), to hydrogen peroxide, which is then converted to hypochlorous acid (HClO), by the green heme enzyme myeloperoxidase . It is thought that the bactericidal properties of HClO are enough to kill bacteria phagocytosed by the neutrophil, but this may instead be a step necessary for the activation of proteases. 
[ 35 ] Though neutrophils can kill many microbes, the interaction of neutrophils with microbes and molecules produced by microbes often alters neutrophil turnover. The ability of microbes to alter the fate of neutrophils is highly varied, can be microbe-specific, and ranges from prolonging the neutrophil lifespan to causing rapid neutrophil lysis after phagocytosis. Chlamydia pneumoniae and Neisseria gonorrhoeae have been reported to delay neutrophil apoptosis . [ 36 ] [ 37 ] [ 38 ] Thus, some bacteria – and those that are predominantly intracellular pathogens – can extend the neutrophil lifespan by disrupting the normal process of spontaneous apoptosis and/or PICD (phagocytosis-induced cell death). On the other end of the spectrum, some pathogens such as Streptococcus pyogenes are capable of altering neutrophil fate after phagocytosis by promoting rapid cell lysis and/or accelerating apoptosis to the point of secondary necrosis. [ 39 ] [ 40 ] Neutrophils also release an assortment of proteins in three types of granules by a process called degranulation . The contents of these granules have antimicrobial properties, and help combat infection. Glitter cells are polymorphonuclear leukocyte neutrophils with granules. [ 41 ] Degranulation is postulated to occur in a hierarchical manner, with the sequential release of secretory vesicles, tertiary granules, specific granules, and azurophilic granules in response to increasing intracellular calcium concentrations. [ 42 ] The release of neutrophils by degranulation occurs through exocytosis , regulated by exocytotic machinery including SNARE proteins, RAC2 , RAB27 , and others. [ citation needed ] In 2004, Brinkmann and colleagues described a striking observation that activation of neutrophils causes the release of web-like structures of DNA; this represents a third mechanism for killing bacteria. [ 44 ] These neutrophil extracellular traps (NETs) comprise a web of fibers composed of chromatin and serine proteases [ 45 ] that trap and kill extracellular microbes. It is suggested that NETs provide a high local concentration of antimicrobial components and bind, disarm, and kill microbes independent of phagocytic uptake. In addition to their possible antimicrobial properties, NETs may serve as a physical barrier that prevents further spread of pathogens. Trapping of bacteria may be a particularly important role for NETs in sepsis , where NETs are formed within blood vessels. [ 46 ] Finally, NET formation has been demonstrated to augment macrophage bactericidal activity during infection. [ 47 ] [ 48 ] Recently, NETs have been shown to play a role in inflammatory diseases, as NETs could be detected in preeclampsia , a pregnancy-related inflammatory disorder in which neutrophils are known to be activated. [ 49 ] Neutrophil NET formation may also impact cardiovascular disease , as NETs may influence thrombus formation in coronary arteries . [ 50 ] [ 51 ] NETs are now known to exhibit pro- thrombotic effects both in vitro [ 52 ] and in vivo . [ 53 ] [ 54 ] More recently, in 2020 NETs were implicated in the formation of blood clots in cases of severe COVID-19 . [ 55 ] TANs can exhibit an elevated extracellular acidification rate when there is an increase in glycolysis levels. [ 56 ] When there is a metabolic shift in TANs this can lead to tumor progression in certain areas of the body, such as the lungs. 
Tumor-associated neutrophils (TANs) support the growth and progression of tumors, unlike normal neutrophils, which inhibit tumor progression through the phagocytosis of tumor cells. Utilizing a mouse model, researchers identified that both Glut1 expression and glucose metabolism increased in TANs within mice with lung adenocarcinoma. [ 56 ] A study showed that lung tumor cells can remotely activate osteoblasts, and these osteoblasts can worsen tumors in two ways. First, they can induce the formation of SiglecF high -expressing neutrophils, which in turn promote lung tumor growth and progression. Second, the osteoblasts can promote bone growth, forming a favorable environment for tumor cells to grow and form bone metastases. [ 57 ] Low neutrophil counts are termed neutropenia . This can be congenital (developed at or before birth) or it can develop later, as in the case of aplastic anemia or some kinds of leukemia . It can also be a side-effect of medication , most prominently chemotherapy . Neutropenia makes an individual highly susceptible to infections. It can also be the result of colonization by intracellular neutrophilic parasites. In alpha 1-antitrypsin deficiency , the important neutrophil elastase is not adequately inhibited by alpha 1-antitrypsin , leading to excessive tissue damage in the presence of inflammation – most prominently emphysema . Negative effects of elastase have also been shown in cases when the neutrophils are excessively activated (in otherwise healthy individuals) and release the enzyme into the extracellular space. Unregulated activity of neutrophil elastase can lead to disruption of the pulmonary barrier, with symptoms corresponding to acute lung injury . [ 58 ] The enzyme also influences the activity of macrophages by cleaving their toll-like receptors (TLRs) and downregulating cytokine expression by inhibiting nuclear translocation of NF-κB . [ 59 ] In familial Mediterranean fever (FMF), a mutation in the pyrin (or marenostrin ) gene, which is expressed mainly in neutrophil granulocytes, leads to a constitutively active acute-phase response and causes attacks of fever , arthralgia , peritonitis , and – eventually – amyloidosis . [ 60 ] Hyperglycemia can lead to neutrophil dysfunction; dysfunction of the neutrophil myeloperoxidase pathway as well as reduced degranulation are associated with hyperglycemia. [ 61 ] The absolute neutrophil count (ANC) is also used in diagnosis and prognosis. ANC is the gold standard for determining severity of neutropenia, and thus neutropenic fever. An ANC below 1500 cells/mm³ is considered neutropenia, and below 500 cells/mm³ is considered severe. [ 62 ] There is also new research tying ANC to myocardial infarction as an aid in early diagnosis. [ 63 ] [ 64 ] Neutrophils promote ventricular tachycardia in acute myocardial infarction. [ 65 ] At autopsy , the presence of neutrophils in the heart or brain is one of the first signs of infarction, and is useful in the timing and diagnosis of myocardial infarction and stroke . As with other phagocytes, neutrophils may be evaded or even infected by pathogens. [ 68 ] Some bacterial pathogens have evolved various mechanisms, such as virulence molecules, to avoid being killed by neutrophils. These molecules collectively may alter or disrupt neutrophil recruitment, apoptosis or bactericidal activity. [ 68 ] Neutrophils can also serve as host cells for various parasites that infect them while avoiding phagocytosis. Five sets of neutrophil antigens (HNA-1 to HNA-5) are recognized. 
The three HNA-1 antigens (a–c) are located on the low-affinity Fc-γ receptor IIIb ( FCGR3B : CD16b ). The single known HNA-2a antigen is located on CD177 . The HNA-3 antigen system has two antigens (3a and 3b), which are located on the seventh exon of the CTL2 gene ( SLC44A2 ). The HNA-4 and HNA-5 antigen systems each have two known antigens (a and b) and are located in the β2 integrin . HNA-4 is located on the αM chain ( CD11b ) and HNA-5 is located on the αL integrin unit ( CD11a ). [ 70 ] Two functionally unequal subpopulations of neutrophils have been identified on the basis of different levels of reactive oxygen metabolite generation, membrane permeability, enzyme-system activity, and the ability to be inactivated. The cells of one subpopulation, with high membrane permeability (neutrophil-killers), intensively generate reactive oxygen metabolites and are inactivated as a consequence of interaction with the substrate, whereas cells of the other subpopulation (neutrophil-cagers) produce reactive oxygen species less intensively, do not adhere to the substrate, and preserve their activity. [ 71 ] [ 72 ] [ 73 ] [ 74 ] [ 75 ] Additional studies have shown that lung tumors can be infiltrated by various populations of neutrophils. [ 76 ] Neutrophils display highly directional amoeboid motility in infected footpads and phalanges, as shown by intravital imaging performed in the footpads of LysM-eGFP mice 20 minutes after infection with Listeria monocytogenes . [ 77 ]
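The absolute neutrophil count discussed above is simple arithmetic: ANC = total white blood cell count × (percentage of segmented neutrophils + percentage of bands) / 100, usually reported in cells/mm³ (equivalently cells/µL). The sketch below applies the thresholds quoted in the text; the example input values and the upper bound used for the "normal" label are invented for illustration.

# Absolute neutrophil count (ANC) and a simple severity classification (illustrative).
def absolute_neutrophil_count(wbc_per_mm3, pct_segmented, pct_bands):
    """ANC = WBC x (segmented % + band %) / 100, in cells/mm^3."""
    return wbc_per_mm3 * (pct_segmented + pct_bands) / 100.0

def classify(anc):
    if anc < 500:
        return "severe neutropenia"
    if anc < 1500:
        return "neutropenia"
    return "within a commonly cited normal range" if anc <= 7500 else "elevated"

# Invented example: WBC 2,200/mm^3 with 40% segmented neutrophils and 5% bands.
anc = absolute_neutrophil_count(2200, 40, 5)
print(f"ANC = {anc:.0f} cells/mm^3 -> {classify(anc)}")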
https://en.wikipedia.org/wiki/Neutrophil
Neutrophil extracellular traps ( NETs ) are networks of extracellular fibers, primarily composed of DNA from neutrophils , which bind pathogens . [ 2 ] Neutrophils are the immune system's first line of defense against infection and have conventionally been thought to kill invading pathogens through two strategies: engulfment of microbes and secretion of anti-microbials. In 2004, a novel third function was identified: formation of NETs. NETs allow neutrophils to kill extracellular pathogens while minimizing damage to the host cells. [ 3 ] Upon in vitro activation with the pharmacological agent phorbol myristate acetate (PMA), Interleukin 8 (IL-8) or lipopolysaccharide (LPS), neutrophils release granule proteins and chromatin to form an extracellular fibril matrix known as NET through an active process. [ 2 ] High-resolution scanning electron microscopy has shown that NETs consist of stretches of DNA and globular protein domains with diameters of 15–17 nm and 25 nm, respectively. These aggregate into larger threads with a diameter of 50 nm. [ 2 ] However, under flow conditions, NETs can form much larger structures, reaching hundreds of nanometers in length and width. [ 4 ] Analysis by immunofluorescence corroborated that NETs contain proteins from azurophilic granules (neutrophil elastase, cathepsin G and myeloperoxidase ), specific granules ( lactoferrin ), tertiary granules ( gelatinase ), and the cytoplasm; however, CD63 , actin , tubulin and various other cytoplasmatic proteins are not present in NETs. [ 2 ] [ 5 ] NETs disarm pathogens with antimicrobial proteins such as neutrophil elastase , cathepsin G and histones that have a high affinity for DNA. [ 6 ] NETs provide for a high local concentration of antimicrobial components and bind, disarm, and kill microbes extracellularly independent of phagocytic uptake. In addition to their antimicrobial properties, NETs may serve as a physical barrier that prevents further spread of the pathogens. Furthermore, delivering the granule proteins into NETs may keep potentially injurious proteins like proteases from diffusing away and inducing damage in tissue adjacent to the site of inflammation . NET formation has also been shown to augment macrophage bactericidal activity in response to multiple bacterial pathogens. [ 7 ] [ 8 ] More recently, it has also been shown that not only bacteria but also pathogenic fungi such as Candida albicans induce neutrophils to form NETs that capture and kill C. albicans hyphal as well as yeast-form cells. [ 9 ] NETs have also been documented in association with Plasmodium falciparum infections in children. [ 10 ] While it was originally proposed that NETs would be formed in tissues at a site of bacterial/yeast infection, NETs have also been shown to form within blood vessels during sepsis (specifically in the lung capillaries and liver sinusoids ). Intra-vascular NET formation is tightly controlled and is regulated by platelets , which sense severe infection via platelet TLR4 and then bind to and activate neutrophils to form NETs. Platelet-induced NET formation occurs very rapidly (in minutes) and may or may not result in death of the neutrophils. [ 11 ] NETs formed in blood vessels can catch circulating bacteria as they pass through the vessels. Trapping of bacteria under flow has been imaged directly in flow chambers in vitro and intravital microscopy demonstrated that bacterial trapping occurs in the liver sinusoids and lung capillaries (sites where platelets bind neutrophils). 
[ 4 ] NET activation and release, or NETosis, is a dynamic process that can come in two forms, suicidal and vital NETosis. Overall, many of the key components of the process are similar for both types of NETosis, however, there are key differences in stimuli, timing, and ultimate result. [ 12 ] The full NETosis activation pathway is still under investigation but a few key proteins have been identified and slowly a full picture of the pathway is emerging. The process is thought to begin with NADPH oxidase activation of protein-arginine deiminase 4 ( PAD4 ) via reactive oxygen species (ROS) intermediaries. PAD4 is responsible for the citrullination of histones in the neutrophil, resulting in decondensation of chromatin. [ 12 ] A NADPH oxidase–independent form of NETosis, relying solely on mitochondrial -derived ROS, has also been described. [ 13 ] Azurophilic granule proteins such as myeloperoxidase (MPO) and neutrophil elastase (NE) then enter the nucleus and further the decondensation process, resulting in the rupture of the nuclear envelope. The uncondensed chromatin enters the cytoplasm where additional granule and cytoplasmic proteins are added to the early-stage NET. The result of the process then depends on which NETosis pathway is activated. [ 12 ] Suicidal NETosis was first described in a 2007 study that noted that the release of NETs resulted in neutrophil death through a different pathway than apoptosis or necrosis . [ 14 ] In suicidal NETosis, the intracellular NET formation is followed by the rupture of the plasma membrane , releasing it into the extracellular space. This NETosis pathway can be initiated through activation of toll-like receptors (TLRs), Fc receptors , and complement receptors with various ligands such as antibodies , PMA, and so on. [ 12 ] [ 15 ] The current understanding is that upon activation of these receptors, downstream signaling results in the release of calcium from the endoplasmic reticulum . This intracellular influx of calcium in turn activates NADPH oxidase, resulting in activation of the NETosis pathway as described above. [ 15 ] Of note, suicidal NETosis can take hours, even with high levels of PMA stimulation, while vital NETosis can be completed in a matter of minutes. [ 12 ] Vital NETosis can be stimulated by bacterial lipopolysaccharide (LPS), other "bacterial products, TLR4-activated platelets, or complement proteins in tandem with TLR2 ligands." [ 12 ] Vital NETosis is made possible through the blebbing of the nucleus, resulting in a DNA-filled vesicle that is exocytosed and leaves the plasma membrane intact. [ 12 ] Its rapid formation and release does not result in neutrophil death. It has been noted that neutrophils can continue to phagocytose and kill microbes after vital NETosis, highlighting the neutrophil's anti-microbial versatility. [ 15 ] The formation of NETs is regulated by the lipoxygenase pathway – during certain forms of activation (including contact with bacteria) neutrophil 5-lipoxygenase forms 5-HETE-phospholipids that inhibit NET formation. [ 16 ] Evidence from laboratory experiments suggests that NETs are cleaned away by macrophages that phagocytose and degrade them. [ 17 ] NETs might also have a deleterious effect on the host, because the extracellular exposure of histone complexes could play a role during the development of autoimmune diseases like systemic lupus erythematosus (SLE). 
[ 18 ] NETs could also play a role in inflammatory diseases, as NETs could be identified in preeclampsia , a pregnancy-related inflammatory disorder in which neutrophils are known to be activated. [ 19 ] NETs have also been reported in the colon mucosa of patients with the inflammatory bowel disease ulcerative colitis . [ 20 ] NETs have also been associated with the production of IgG antinuclear double stranded DNA antibodies in children infected with P. falciparum malaria . [ 10 ] NETs have also been found in cancer patients. [ 21 ] Significantly higher levels of NETs have been detected in cancer patients compared to healthy controls, and have been associated with poor prognosis and clinical outcome. [ 22 ] Preclinical research suggests that NETs are jointly responsible for cancer-related pathologies like thrombosis, organ failure and metastasis formation. [ 23 ] NETs can cause peripheral organ failure or organ dysfunction in cancer patients by obstructing vasculature, causing an inflammatory response, and by releasing cytotoxic components with a direct damaging effect on the tissue. [ 24 ] NETs have been described as potential promoters of metastasis in cancer. They may enhance metastatic spread through various mechanisms. [ 25 ] Research has shown that NETs can form in response to infections and surgical stress , which may contribute to metastasis . For instance, A study utilizing the cecal ligation and puncture (CLP) model demonstrated that CLP-induced NETs enhanced the trapping of circulating tumor cells and increased metastasis to the liver. [ 26 ] Specifically, when Lewis lung carcinoma cells (LLC-H59) were injected via the intrasplenic route 24 hours after CLP, the mice exhibited a higher number of metastases compared to sham-operated controls. Intravital imaging revealed that NETs colocalized with tumor cells in the liver and lung microvasculature, promoting tumor cell arrest in these areas. [ 26 ] NETs can also be induced by cancer cells in the absence of infection or surgical intervention. [ 25 ] In a mouse model of breast cancer, it was found that metastatic cancer cells were more effective at inducing NET formation compared to less aggressive cells. [ 27 ] Additionally, higher levels of NETs were detected in metastatic lesions of breast cancer patients, particularly in those with triple-negative breast cancer , which is known for its aggressive progression. [ 27 ] NETs have been shown to contribute to the pathogenesis of HIV / SIV . NETs are capable of capturing HIV virions and destroying them. [ 28 ] There is an increase in NET production throughout the course of HIV/SIV, which is reduced by ART . In addition, NETs are able to capture and kill various immune cell groups such as CD4+ and CD8+ T cells , B cells , and monocytes . This effect is seen not only with neutrophils in the blood, but also in various tissues such as the gut, lung, liver, and blood vessels. NETs possibly contribute to the hypercoagulable state in HIV by trapping platelets , and expressing tissue factor . [ 29 ] NETs also have a role in thrombosis and have been associated with stroke. [ 30 ] [ 31 ] [ 32 ] These observations suggest that NETs might play an important role in the pathogenesis of infectious, inflammatory and thrombotic disorders. [ 33 ] [ 34 ] [ 35 ] Due to the charged and 'sticky' nature of NETs, they may become a problem in cystic fibrosis sufferers, by increasing sputum viscosity. Treatments have focused on breaking down DNA within sputum, which is largely composed of host NET DNA. 
A small study published in the journal JAMA Cardiology suggested that NETs played a major role in COVID-19 patients who developed ST-elevation myocardial infarctions . [ 36 ]
https://en.wikipedia.org/wiki/Neutrophil_extracellular_traps
Neutrophil swarming is a type of coordinated neutrophil movement that occurs in response to acute tissue inflammation or infection. [ 1 ] The term comes from the swarming behavior of insects, which the behavior of neutrophils responding to an infection resembles. These processes have mostly been studied in mouse tissues, and studies of mouse ear tissue have proved very effective for observing neutrophil movement. Neutrophil swarms typically form at surface layers of tissue, so the thin nature of the mouse ear makes it a good model for studying the process. [ 2 ] Additionally, zebrafish larvae have been used for the study of neutrophil movement, mainly because of their translucence during the first few days of development. With transgenic lines that fluorescently label zebrafish neutrophils, the cells can be tracked by epifluorescence or confocal microscopy during the course of an inflammatory response. [ 3 ] Through this method, specific subpopulations of neutrophils can be tracked, and their origin and fate during the induction and resolution of inflammation can be observed. Another advantage of using zebrafish to study neutrophil swarming is that adaptive immunity in this organism does not develop until around 4 weeks of age. This allows the study of neutrophil movement and other host immune responses independent of adaptive immune responses. [ 4 ] Originally, neutrophils were seen as a homogeneous (single-type) population, but more recent discoveries have shown that this is not the case. Instead, mature neutrophils form a heterogeneous mixture of subpopulations that differ in cytokine production, expression of toll-like receptors (TLRs), activation of macrophages in immunological responses, effects on host resistance, and, lastly, in vitro angiogenesis and tumorigenesis. [ 1 ] Neutrophils have two different forms of communication: homotypic and heterotypic. Homotypic communication, between one neutrophil and another, is involved in signaling when the body is fighting infection and inflammation. To carry out this type of communication, the neutrophils must cross the vascular endothelium and the basal membrane to pass into the interstitial space. They are assisted by chemoattractant gradients as well as signal relays between neutrophils. In addition to communicating with each other, neutrophils must also communicate with the other leukocytes (white blood cells) directly involved in immunological functions in the body. This is the heterotypic form of communication (neutrophil to leukocyte). Some of the functions of heterotypic communication include regulating when effector molecules are distributed, conducting immune responses, and leaving lasting effects on cells even after the communicating cells have been removed. This type of communication can also be referred to as cross-talk. [ 1 ] A study of the lymph nodes of mice that were infected by injection of parasites into their earflaps revealed two types of neutrophil swarming: transient and persistent swarms. Transient swarms are characterized by groups of 10–150 neutrophils forming multiple small cell clusters within 10–40 minutes that quickly disperse. Once dispersed, the neutrophils join other nearby swarm centers, which often leads to competition as the neutrophil groups compete to recruit neutrophils. 
Persistent swarms showed clusters of more than 300 neutrophils, with recruitment lasting more than 40 minutes. These persistent swarms are characterized by constant neutrophil recruitment with large cell clusters that are stable and longer-lived than transient swarms (lasting a few hours). For both the transient and persistent swarms, the formed neutrophil clusters appeared to compete with each other, with the larger clusters attracting neutrophils from the smaller clusters. The study also revealed two distinct phases in swarm formation. The first phase occurs when a small number of “pioneer” neutrophils respond to an initial signal and form small clusters; this is followed by the second phase, in which a large-scale migration of neutrophils leads to the growth of multiple cell clusters. In terms of migration, neutrophils undergo chemotactic migration into and out of a swarm center, either accumulating (moving towards it) or moving away. Individual neutrophils can also move from one swarm to another when swarms are in competition. Notably, the two swarm types can work together in the same disrupted tissue to restore an inflamed tissue to its original composition. [ 5 ] The exact size or duration of swarms depends on the specific inflammatory conditions as well as the tissue type of the infection location. Several factors that influence the swarm phenotype are: the size of the initial tissue damage, the presence of pathogens, the induction of secondary cell death, and the number of recruited neutrophils. [ 6 ] A study that compared large-scale damage of sterile mouse tissue by a needle prick with small injuries by a laser beam showed that the needle prick provoked a larger and longer swarm response. After the needle injury, hundreds to thousands of neutrophils were recruited, forming stable cell clusters that sometimes persisted for days. [ 7 ] In comparison, the neutrophil swarms resulting from the laser-induced injury recruited only around 50–330 neutrophils, which persisted for a few hours. The presence of pathogens can also increase the size of neutrophil swarms, not necessarily because of their presence as a foreign body, but because of the additional cell death that they can cause at infection sites. When cells are lysed at an infection site, they release an assortment of signaling factors that augment the recruitment of neutrophils to the site. Additionally, neutrophil death during a swarm releases more signaling factors that recruit more neutrophils, so the initial number of neutrophils recruited plays a role in how large the propagation effect is during swarming. [ 6 ] The neutrophil swarming process is categorized into 5 phases: swarm initiation, swarm amplification, additional swarm amplification through intercellular signaling, swarm aggregation and tissue remodeling, and recruitment of myeloid cells and swarm resolution. The first stage of neutrophil swarming involves the “pioneer” neutrophils responding to an infection or inflammation site. The neutrophils close to the injury switch from random motility to chemotactic movement within a period of 5–15 minutes and swarm towards the infection site. [ 8 ] In the second stage, the pioneer neutrophils attract a second wave of neutrophils that come from more distant regions of the tissue. The method of movement to the region of injury depends on the tissue environment the neutrophils are moving through. 
Neutrophil swarming in extravascular spaces such as the connective tissue in the skin involves movement without the assistance of integrin proteins and neutrophil attraction by a gradient of chemoattractants. Neutrophils will be guided by the forces generated by the actomyosin cytoskeleton through the path of least resistance to the site of infection. [ 9 ] However, for intravascular tissue environments, neutrophil movement is dependent on integrins and chemoattractant signals on the luminal surface of endothelial cells. In this process, distant neutrophils will be recruited by an inflammatory signal and perform integrin-mediated crawling along the vascular walls to reach the neutrophil swarming sites. [ 10 ] In the third stage, swarming neutrophils can amplify their recruitment in a feed forward manner through intercellular communication by leukotriene B4 ( LTB4 ). The propagation of neutrophil recruitment leads to multiple, dense neutrophil cell clusters at the site of inflammation. A 2013 study showed that neutrophils lacking the high affinity receptor for LTB4 (LTB4R1) decreased the recruitment of neutrophils at later stages of swarming. In addition, proximal cells to the inflammation site showed chemotaxis similar to the control cells while distant cells were poorly attracted. This finding suggests that the proximal neutrophils that are recruited early on are not affected by the lack of LTB4R1, but distant neutrophils that are required for the propagation of neutrophil swarming are not able to be recruited to the swarming site. These results present LTB4 as a key signaling molecule for a prolonged neutrophil swarm response and recruitment of neutrophils from distant areas of the tissue. [ 11 ] After stages 1–3, neutrophils slow down in the cell clusters and begin to form aggregates. In this fourth stage, the neutrophil aggregates will aid in rearranging the surrounding extracellular tissue area and create a collagen-free zone at the inflammation center eventually resulting in a wound seal which isolates the site from the rest of the tissue. The exact mechanisms of this are unknown but it is believed that neutrophil proteases from the cell clusters play a role in clearing out the surrounding tissue environment. [ 8 ] These neutrophil aggregates become stable as opposed to the constant movement in stages 1–3 by development of high chemoattractant concentrations within the clusters that promote local neutrophil interactions within the cluster. Additionally, neutrophils are switched to an adhesive mode of migration within clusters which further stabilize the aggregates and can prevent neutrophils from leaving the cluster. This switch is believed to be caused by additional secretions of LTB4 and other chemoattractants within the neutrophil aggregates. [ 11 ] In stage 5, the swarming response terminates and the clusters dissolve with the resolution of inflammation. Little is known about the mechanisms of this stage but the process may be regulated by neutrophils or external factors from the tissue environment. In a laser-induced skin injury model, neutrophil aggregation typically stopped after 40–60 minutes which occurs at the same time as the appearance of secondary myeloid cell swarms. Knock-in mice studies have shown that the myeloid cells move slower than neutrophils and assemble around the neutrophil aggregates during this stage. 
These myeloid cells may disrupt the propagation signals of neutrophil chemoattractants or create competing attractants in the tissue space, so that neutrophil aggregation becomes weaker. [ 8 ] [ 11 ] When discussing neutrophil swarming, it is important to address the other factors in the surrounding environment that can influence the migration of these neutrophils, whether in groups or individually. Neutrophils exert a major influence when an inflammatory problem is occurring, as they shape the autocrine and paracrine signaling involved in the clustering and recruitment of the neutrophils themselves. Neutrophil swarming is influenced by three main external factors: the type of tissue involved, nearby tissue-specific cells, and chemoattractants (chemical substances whose increasing concentration draws cells towards them). One of the external factors that affects how communication occurs is the tissue context, as each tissue provides specific signals that can influence the size and persistence of the neutrophil swarms. Two such contexts are extravascular swarming and intravascular swarming. Extravascular swarming relies on integrin-independent interstitial movement as well as soluble directional cues such as LTB4 that attract neutrophils; it occurs in fibrillar tissue (e.g. skin) and cell-rich tissue (e.g. lymph node), while intravascular swarming is intrasinusoidal, an example being the liver. [ 5 ] Two triggers of neutrophil swarming are PAMPs (pathogen-associated molecular patterns) and DAMPs (damage-associated molecular patterns). One notable attribute of neutrophil swarming is that it is a conserved protective mechanism that responds when tissues undergo disruption. This can occur in many different tissues of the body, including the ears, liver, lung, and skin. Neutrophil swarming can also participate in pathogen containment, keeping foreign substances localized and easier to clear from the body later on. [ 12 ] Neutrophil swarming begins with an injury, fungi, or bacteria. As discussed above, PAMPs and DAMPs trigger the initial swarming. The chemoattractants LTB4 and CXCL2 then propagate the signal, setting off a cascade of intracellular reactions to the disruption and foreign substances. This leads to swarm aggregation, in which neutrophils congregate around the bacteria or other substances and confine them in one dense mass. Calcium, complement, ATP, connexin 43, and integrins also contribute by amplifying the chemoattractant signals and driving swarm aggregation forward, whereas NADPH oxidase 2 (NOX2) is a negative regulator of the chemoattractants that can keep these events from proceeding. The preceding events show how neutrophil swarming begins, while the following steps explain how the process ends once the body no longer needs it, that is, once the bacteria, fungi, or injury has resolved. The most crucial step in terminating neutrophil swarming is when G-protein-coupled receptor kinase 2 (GRK2) phosphorylates and thereby desensitizes the G-protein-coupled receptors (GPCRs). 
Three other molecules, lipoxin A4, resolvin E3, and ω-OH-LTB4, assist GRK2 in stopping the process fully. [ 12 ] Calcium is another regulator of signaling and one of the positive regulators of chemoattractants such as LTB4 and CXCL2. The cell obtains calcium either from the intracellular endoplasmic reticulum (ER) or from the extracellular matrix. For calcium sequestered in the endoplasmic reticulum, the cell uses a process called store-operated calcium entry (SOCE), in which signaling cascades induced via receptors stimulate the release of calcium out of the ER. Bringing calcium in from outside the cell is a more complex process that uses calcium release-activated calcium (CRAC) channels, which contain members of the ORAI family. Before this can occur, however, stromal interaction molecule (STIM) proteins must detect the calcium level, allowing the ER to sense the change; the STIM proteins then change shape and allow the CRAC channels to gate between the intracellular ER and the extracellular space so that calcium can pass into the cell and drive calcium-dependent downstream mechanisms. During swarming, neutrophils notably exhibit sustained calcium activity in the center of the swarm and produce calcium waves. [ 13 ] [ 14 ] Chemokines and cytokines are another part of regulation. Two chemoattractants work cooperatively in neutrophil swarming, CXCL2 and LTB4. Experiments showed that CXCL2 does assist, making a noticeable impact in driving swarming, but more than this one molecule is involved: when inhibition of CXCR1 was combined with inhibition of BLT1 and BLT2, there was a decrease in chemoattractant-induced movement (measured as the chemotactic index). In summary, the chemokine CXCL8, a ligand of CXCR1 and CXCR2, together with LTB4 positively promotes swarming in neutrophils. [ 1 ] To properly understand neutrophil swarming, one must also understand the basic structure and function of neutrophils. They are leukocytes (white blood cells), the most abundant white blood cell in the body, and are known for their role in the immune system. [ 15 ] Figure 2 shows the three main ways in which a neutrophil can attack and eliminate a foreign antigen or bacterium. The top left illustrates degranulation, a process in which the neutrophil releases the contents of its granules into the surrounding environment where the bacteria lie; these contents work to destroy or break down the bacteria. The second way, on the upper right, is phagocytosis. Here the bacterium is brought into the neutrophil by the plasma membrane engulfing it and pulling it inside to create a vacuole. The engulfed bacterium is first enclosed in a phagosome as the vacuole forms, which then becomes a phagolysosome containing the broken-down bacterium together with contents from the neutrophil; these contents include enzymes that degrade the bacterium, aided by the low pH of the internal environment. Lastly, the bottom part of the image shows NETosis. The bacteria targeted here are much larger than in the other cases and require this process to be combatted. It involves the creation of neutrophil extracellular traps (NETs), composed of DNA wrapped around histones and proteins such as myeloperoxidase and elastase. 
These extended DNA strands, along with the associated proteins, envelop the bacteria and break them down. All three of these processes show how neutrophils target and destroy foreign substances, their main job in the body. [ 16 ]
https://en.wikipedia.org/wiki/Neutrophil_swarming
The neutrophil to lymphocyte ratio (NLR) reflects systemic inflammation, which plays an important role in the process of treating ischemic strokes. [ 1 ] In medicine the neutrophil to lymphocyte ratio (NLR) is used to indicate inflammation in the body. It is calculated by dividing the number of neutrophils by the number of lymphocytes , usually from a peripheral blood sample , [ 2 ] but sometimes also from cells that infiltrate tissue, such as a tumor . [ 3 ] Recently the lymphocyte to monocyte ratio (LMR) has also been studied as a marker of inflammation in conditions including tuberculosis and various cancers. The NLR is associated with stroke severity, unfavorable functional outcomes and mortality in AIS. [ 1 ] Higher NLR contributes to predicting mortality rates ( prognostic marker ) in patients undergoing certain medical procedures, such as angiography or cardiac revascularization . [ 2 ] Increased NLR is associated with poor prognosis of various cancers, [ 4 ] such as esophageal , [ 5 ] liver , [ 6 ] ovarian , [ 7 ] pancreatic , [ 8 ] prostate [ 9 ] and stomach cancer . [ 10 ] NLR can be used as a prognostic marker for COVID-19 given the significant difference in NLR between those who died and those who recovered from COVID-19. [ 11 ] A 2017 study found that the normal NLR range for healthy adults is between 0.78 and 3.53. [ 12 ] The neutrophil to lymphocyte ratio was first demonstrated to be a useful parameter after a correlation between the ratio and reactions of the immune response was noted. A 2001 study by Zahorec at the Department of Anaesthesiology and Intensive Care Medicine, St. Elizabeth Cancer Institute in Bratislava suggested routine use of the ratio as a stress marker in clinical ICU practice at intervals of 6–12 and 24 hours. [ 13 ] The first study to demonstrate that pre-therapeutic NLR can be used as a predictor of chemotherapy sensitivity in thoracic esophageal cancer was published by Hiroshi Sato, Yasuhiro Tsubosa, and Tatsuyuki Kawano in 2012 in the World Journal of Surgery . [ 14 ]
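The ratio itself is a simple division of absolute counts. The following is a minimal sketch in Python, assuming absolute neutrophil and lymphocyte counts taken from the same differential blood count and expressed in the same units (e.g. 10^9 cells/L); the function name and sample values are illustrative, and the 0.78–3.53 interval is only the healthy-adult range reported in the 2017 study cited above, not a clinical decision threshold.

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio: absolute neutrophil count divided
    by absolute lymphocyte count (both in the same units)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Illustrative (hypothetical) counts: 4.2 and 2.1 x 10^9 cells/L.
ratio = nlr(4.2, 2.1)
print(f"NLR = {ratio:.2f}")  # NLR = 2.00
# Compare against the 0.78-3.53 healthy-adult range from the cited 2017 study.
print("within reported healthy-adult range" if 0.78 <= ratio <= 3.53
      else "outside reported healthy-adult range")
```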
https://en.wikipedia.org/wiki/Neutrophil_to_lymphocyte_ratio
A neutrophile is a neutrophilic organism [ 2 ] that thrives in a neutral pH environment between 6.5 and 7.5. [ 3 ] The pH of the environment can either support or hinder the growth of neutrophilic organisms. When the pH is within the microbe's range, it grows, and within that range there is an optimal growth pH. [ 4 ] Neutrophiles are adapted to live in an environment where the hydrogen ion concentration is at equilibrium. [ 2 ] They are sensitive to this concentration, and when the pH becomes too basic or acidic, the cell's proteins can denature . [ 4 ] Depending on the microbe and the pH, the microbe's growth can be slowed or stopped altogether. [ 5 ] The food industry manipulates the pH of a microbe's environment to control its growth and increase the shelf life of food. [ 5 ] This microbiology -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Neutrophile
The Nevada–Texas–Utah retort process (also known as NTU , Dundas–Howes or Rexco process) was an above-ground shale oil extraction technology to produce shale oil , a type of synthetic crude oil. It heated oil shale in a sealed vessel ( retort ), causing its decomposition into shale oil, oil shale gas and spent residue . The process was developed in the 1920s and used for shale oil production in the United States and in Australia. The process was simple to operate; however, it was taken out of operation because of its small capacity and labor intensiveness. The NTU retort was a successor of the 19th-century coal gasification retorts and is considered a predecessor of the gas combustion retort and the Paraho processes. [ 1 ] [ 2 ] It was invented and patented by Roy C. Dundas and Raymond T. Howes in 1923. The process was improved by David Davis and George Wightman Wallace, a consulting engineer of the NTU Company. [ 2 ] [ 3 ] In 1925, the NTU Company built a test plant at Sherman Cut near Casmalia, California . [ 1 ] [ 4 ] In 1925–1929, the process was tested by the United States Bureau of Mines in the Oil Shale Experiment Station at Anvil Point in Rifle, Colorado . [ 1 ] [ 3 ] Retorting was carried out from 17 January to 28 June 1927. The plant was dismantled when work was terminated in June 1929. [ 3 ] One of the leading technologists involved in this stage was Lewis Cass Karrick , an inventor of the Karrick process . [ 5 ] In 1946–1951, two pilot plants with nominal capacities of 40 tons of raw oil shale were located at the same location. More than 12,000 barrels of shale oil were produced during this period. During World War II , three NTU retorts were operated at Marangaroo, New South Wales , Australia . [ 1 ] [ 6 ] [ 7 ] Almost 500,000 barrels of shale oil were produced by these retorts by retorting local torbanite . [ 1 ] The NTU retort was a vertical downdraft retort, which used internal combustion to generate heat for oil shale pyrolysis ( chemical decomposition ). The retort was designed as a steel cylinder lined with fire bricks . At the top it was equipped with an air supply pipe and at the bottom with an exhaust pipe. The batch of crushed oil shale was loaded from the top, after which the retort was sealed. To start the pyrolysis process, fuel gas was ignited at the top of the retort and air injection into the retort began. The supply of fuel gas was stopped after the upper quarter of the oil shale batch started to burn. The air injection continued, bringing the temperature in the burning zone to about 1,500 °F (820 °C). The heated gas caused pyrolysis of the lower part of the oil shale, and the resulting shale oil and oil-shale gas escaped from the retort through the exhaust pipe at the bottom. The pyrolysis occurred at a temperature of about 800 °F (430 °C). Over time, the combustion zone moved downward, and the char (semi-coke) produced as a solid residue of pyrolysis ignited, burning as additional fuel for combustion. This caused the pyrolysis zone to move downward through the lower parts of the retort. After the combustion zone reached the bottom of the retort, air injection was stopped to halt combustion. Once the char had burned, shale oil production ceased and only spent oil shale ash remained in the retort. The bottom of the retort could be opened to remove the oil shale ash after the retorting process. For an NTU retort with a nominal capacity of 40 tons of raw oil shale, the full process cycle took about 40 hours. 
[ 1 ] The shale oil yield varied from 80% to 85% of Fischer assay . [ 2 ] The advantages of the NTU retort process were its simple design, simple operation, and limited need for external fuel. It was suitable for processing a wide variety of oil shales. The disadvantage of the process was its batch mode of operation, which did not allow continuous retorting and therefore gave it a small capacity while also being labor intensive. The process also had a relatively low oil yield and it required cooling water. [ 1 ] [ 8 ]
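As a rough, back-of-the-envelope illustration of why the batch design limited capacity, the sketch below uses only the figures quoted above (a 40-ton charge and a roughly 40-hour full process cycle); the utilization factor is an assumed, hypothetical value and not from the source.

```python
# Illustrative throughput estimate for a single batch-mode NTU retort.
# Figures from the text: 40-ton nominal charge, ~40-hour full process cycle.
# The utilization factor (fraction of the year actually spent retorting)
# is an assumed value covering loading, unloading and downtime.

BATCH_SIZE_TONS = 40
CYCLE_HOURS = 40
UTILIZATION = 0.85          # assumed, hypothetical
HOURS_PER_YEAR = 24 * 365

cycles_per_year = HOURS_PER_YEAR * UTILIZATION / CYCLE_HOURS
raw_shale_tons_per_year = cycles_per_year * BATCH_SIZE_TONS

print(f"{cycles_per_year:.0f} cycles/year")               # ~186
print(f"{raw_shale_tons_per_year:.0f} tons shale/year")   # ~7,400
# Roughly one ton of raw shale per hour per retort, illustrating why the
# batch design was regarded as small-capacity compared with continuous retorts.
```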
https://en.wikipedia.org/wiki/Nevada–Texas–Utah_retort
Nevil Vincent Sidgwick FRS [ 1 ] (8 May 1873 – 15 March 1952) was an English theoretical chemist who made significant contributions to the theory of valency and chemical bonding. [ 2 ] [ 3 ] Sidgwick was born in Park Town, Oxford, the elder of two children of William Carr Sidgwick, lecturer at Oriel College , and Sarah Isabella (née Thompson), descended from a notable family; her uncle was Thomas Perronet Thompson . He was initially educated at Summer Fields School but, after a year, he entered Rugby School in 1886. From there he was elected to an open scholarship in Natural Science at Christ Church, Oxford . He gained a first in 1895, and went on to gain another first in Greats in 1897, a very rare feat. His principal interest, though, was science, and he spent some time in Wilhelm Ostwald ’s laboratory in Germany, where he fell ill and had to go home. He returned to Germany in the autumn of 1899, this time in Hans von Pechmann ’s lab at the University of Tübingen . His researches on derivatives of acetone-dicarboxylic acid resulted in his being awarded a DSc in 1901. Sidgwick was elected to a Fellowship at Lincoln College, where he went into residence in October 1901 and remained for the rest of his life. [ 1 ] In 1914 Sidgwick was one of the members of the party chosen to represent the British Association at the meeting held in Australia. [ 4 ] On 1 July he set sail on the maiden voyage of the Euripides from London to Brisbane, disembarking at Adelaide. [ 5 ] A fellow first-class passenger was Sir Ernest Rutherford , who had been knighted that year. Sidgwick became a devotee of the physicist, and would hear no criticism of him in later years. [ 1 ] On the return journey, via Penang, in November 1914, a fellow passenger on the Kashima Maru was the astronomer and physicist Professor A S Eddington . [ 6 ] Sidgwick became absorbed by the study of atomic structure and its importance in chemical bonding. He explained the bonding in coordination compounds (complexes), with a convincing account of the significance of the dative bond . Together with his students he demonstrated the existence and wide-ranging importance of the hydrogen bond . He was elected a Fellow of the Royal Society in 1922. [ 1 ] In 1927, he proposed the inert pair effect which describes the stability of heavier p-block atoms in an oxidation state two less than the maximum. In 1940 his Bakerian lecture with Herbert Marcus Powell correlated molecular geometry with the number of valence electrons on a central atom. [ 7 ] These ideas were later developed into the VSEPR theory by Gillespie and Nyholm . The scope and significance of his researches brought international fame to Sidgwick. He travelled to Toronto for a British Association meeting in 1924, [ 8 ] and then explored much of western Canada. Another BA meeting in 1929 took him to Cape Town and then Dar es Salaam , Zanzibar , and back home via Aden and Suez. Two years later he was off to spend a semester at Cornell University , via New York and Princeton University . Cornell provided him “with every luxury that an American laboratory can supply. Two offices, four telephones, a private laboratory, and a stenographer, all to myself. . . It is a wonderful place, with a great deal of good work going on, and everybody is most kind, so that I can see that I am going to have a very pleasant time here.” [ 1 ] His stay at Ithaca was followed by a 10,000 mile trip to the West and back via Yellowstone National Park , Buffalo , Ottawa and Quebec . 
Back in Oxford, he concentrated as much as he could on new books, and revisions to earlier ones, but was diverted by his serving on several committees. He had several more trips to the United States in the 1930s and later, culminating in a voyage on the Britannic from Liverpool to New York on 27 July 1951. He was given a warm reception at the American Chemical Society meeting in New York in early September, having earlier had the chance to visit Oak Ridge National Laboratory . Nevil Vincent Sidgwick died, unmarried, at the Acland Nursing Home, Oxford, on 15 March 1952, leaving effects worth £67,000. The Sidgwick Laboratory in the Dyson Perrins Laboratory for organic chemistry and Sidgwick Close in front of the Inorganic Chemistry Laboratory at the University of Oxford were named after him. [ 9 ]
https://en.wikipedia.org/wiki/Nevil_Sidgwick
In mathematics, Neville's algorithm is an algorithm used for polynomial interpolation that was derived by the mathematician Eric Harold Neville in 1934. Given n + 1 points, there is a unique polynomial of degree ≤ n which goes through the given points. Neville's algorithm evaluates this polynomial. Neville's algorithm is based on the Newton form of the interpolating polynomial and the recursion relation for the divided differences . It is similar to Aitken's algorithm (named after Alexander Aitken ), which is nowadays not used. Given a set of n + 1 data points (x_i, y_i) where no two x_i are the same, the interpolating polynomial is the polynomial p of degree at most n with the property p(x_i) = y_i for all i = 0, ..., n. This polynomial exists and it is unique. Neville's algorithm evaluates the polynomial at some point x. Let p_{i,j} denote the polynomial of degree j − i which goes through the points (x_k, y_k) for k = i, i + 1, ..., j. The p_{i,j} satisfy the recurrence relation p_{i,i}(x) = y_i for 0 ≤ i ≤ n, and p_{i,j}(x) = ((x_j − x)·p_{i,j−1}(x) + (x − x_i)·p_{i+1,j}(x)) / (x_j − x_i) for 0 ≤ i < j ≤ n. This recurrence can calculate p_{0,n}(x), which is the value being sought. This is Neville's algorithm. For instance, for n = 4, one can use the recurrence to fill the triangular tableau below from the left to the right:
p_{0,0}(x) = y_0
                  p_{0,1}(x)
p_{1,1}(x) = y_1              p_{0,2}(x)
                  p_{1,2}(x)              p_{0,3}(x)
p_{2,2}(x) = y_2              p_{1,3}(x)              p_{0,4}(x)
                  p_{2,3}(x)              p_{1,4}(x)
p_{3,3}(x) = y_3              p_{2,4}(x)
                  p_{3,4}(x)
p_{4,4}(x) = y_4
This process yields p_{0,4}(x), the value of the polynomial going through the n + 1 data points (x_i, y_i) at the point x. This algorithm needs O(n^2) floating point operations to interpolate a single point, and O(n^3) floating point operations to interpolate a polynomial of degree n. The derivative of the polynomial can be obtained in the same manner, i.e. p′_{i,i}(x) = 0 and p′_{i,j}(x) = ((x_j − x)·p′_{i,j−1}(x) + (x − x_i)·p′_{i+1,j}(x) + p_{i+1,j}(x) − p_{i,j−1}(x)) / (x_j − x_i). In the above formulae, if we take the degree of the successive interpolating polynomials d = j − i and change the notation to p_{d,i}, the recurrence becomes p_{0,i}(x) = y_i and p_{d,i}(x) = ((x_{i+d} − x)·p_{d−1,i}(x) + (x − x_i)·p_{d−1,i+1}(x)) / (x_{i+d} − x_i). The final value p_{n,0} (in this notation) is the required interpolated value. Since the number of computed items, i.e. the range of i, decreases with each successive d, a linear array can be used for memory efficiency, with p_i being overwritten and d being ignored. (For example: [1] ) The derivative (using the product rule) can be computed likewise as p′_{0,i}(x) = 0 and p′_{d,i}(x) = ((x_{i+d} − x)·p′_{d−1,i}(x) + (x − x_i)·p′_{d−1,i+1}(x) + p_{d−1,i+1}(x) − p_{d−1,i}(x)) / (x_{i+d} − x_i). As before, p′_{n,0} (in this notation) is the derivative. As this depends on the successively computed values of p for each d, it may be computed within the same loop. If linear arrays for p and p′ are used for efficiency, the p′ values should be computed before the p values are overwritten. Lyness and Moler showed in 1966 that using undetermined coefficients for the polynomials in Neville's algorithm, one can compute the Maclaurin expansion of the final interpolating polynomial, which yields numerical approximations for the derivatives of the function at the origin. While "this process requires more arithmetic operations than is required in finite difference methods", "the choice of points for function evaluation is not restricted in any way". They also show that their method can be applied directly to the solution of linear systems of the Vandermonde type.
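As an illustration, the following is a minimal Python sketch of the recurrence above, written in the p_{d,i} form with a single linear array in which each p[i] is overwritten for successive degrees d, and with the derivative tracked in the same loop as described; the function names and the example data are illustrative and not part of the article.

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[k], ys[k]) at x.

    Uses the p_{d,i} form of the recurrence with one linear array: p[i] is
    overwritten for each successive degree d, and the final p[0] corresponds
    to p_{n,0} in the article's notation (n = polynomial degree)."""
    n = len(xs)
    p = list(ys)                               # degree 0: p_{0,i} = y_i
    for d in range(1, n):                      # degree of the current polynomials
        for i in range(n - d):                 # range of i shrinks with d
            p[i] = ((xs[i + d] - x) * p[i]
                    + (x - xs[i]) * p[i + 1]) / (xs[i + d] - xs[i])
    return p[0]


def neville_with_derivative(xs, ys, x):
    """Same recurrence, additionally tracking the derivative p'.

    As noted above, each p'[i] must be updated before the corresponding p[i]
    is overwritten, because the derivative recurrence uses the
    previous-degree values of p."""
    n = len(xs)
    p = list(ys)
    dp = [0.0] * n                             # derivative of a constant is 0
    for d in range(1, n):
        for i in range(n - d):
            denom = xs[i + d] - xs[i]
            dp[i] = ((xs[i + d] - x) * dp[i] + (x - xs[i]) * dp[i + 1]
                     + p[i + 1] - p[i]) / denom
            p[i] = ((xs[i + d] - x) * p[i]
                    + (x - xs[i]) * p[i + 1]) / denom
    return p[0], dp[0]


# Illustrative use: five samples of f(x) = x**3 (so n = 4), evaluated at x = 2.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [v ** 3 for v in xs]
value, slope = neville_with_derivative(xs, ys, 2.5)
print(value, slope)   # 15.625 and 18.75 (up to rounding), matching x**3 and 3*x**2
```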
https://en.wikipedia.org/wiki/Neville's_algorithm
Neviot ( Hebrew : נביעות ) is an Israeli mineral water marketing company. Neviot was established in 1989 after geologists discovered that the water of Ein Zahav spring near Kiryat Shmona was suitable for drinking. In 2002, Neviot changed its logo and bottle design. In 2004, the Podhorzer family, which owned Neviot, sold almost half its shares to the Central Bottling Company (Coca-Cola Israel), which already owned 34.06% of Neviot, bringing its total stake to 78.58%. [ 1 ] This article about an Israeli company is a stub . You can help Wikipedia by expanding it . This article related to a drink company is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Neviot
newcleo is a nuclear energy company founded in September 2021. It focuses on designing, building, and operating Generation IV Small Modular Lead-cooled Fast Reactors (SMRs) using MOX nuclear fuel . [ 1 ] [ 2 ] This type of reactor, just like the current ones, is safe, clean and ecological. newcleo's MOX fuel is made from depleted uranium , a byproduct of the enrichment process, and plutonium recovered from the reprocessing of spent nuclear fuel. This process can help reduce the demand for new uranium mining while contributing to a more effective management of long-lasting nuclear waste. [ 3 ] [ 4 ] The project is grounded in decades of international research dedicated to improving nuclear energy technology and aims to address critical issues in the nuclear industry, such as waste management, safety, and cost efficiency. [ 5 ] [ 6 ] As of the end of 2024, newcleo has raised over €537 million in capital. [ 7 ] Headed by CEO and Chairman Stefano Buono, the company is based in France, with further locations in the UK, Italy, Switzerland, Belgium, and Slovakia, employing approximately 1,000 people. [ 8 ] newcleo relies on more than 30 years of international research aimed at advancing lead-cooled nuclear technology. This long-term research began with milestones at CERN and has evolved through collaboration with scientists and institutions across the globe. These research efforts began in 1993 when Nobel Laureate and CERN Director General Carlo Rubbia introduced the concept of the Accelerator Driven System (ADS), calling it the "Energy Amplifier", which set the stage for extensive research in the following decades. [ 9 ] In 1994, the feasibility of the ADS for energy production was demonstrated through the FEAT (First Energy Amplifier Test) at CERN. [ 10 ] The following year saw the first collaboration with Russian scientists, including those involved in the design of the Alpha class nuclear Pb-Bi submarines. Notably, V.V. Orlov presented his conceptual project for the BREST reactor at CERN, which has since moved toward construction. [ 11 ] In 1996 and 1997, [ 12 ] further experiments were conducted, such as the TARC (Test of Adiabatic Resonance Crossing) at CERN, which demonstrated neutron phenomenology in pure lead. In 1999, the Italian government funded an industrial project to produce a reference configuration for the ADS Experimental Facility (XADS). Luciano Cinotti led this project as the technical manager. [ 13 ] The Lead-cooled Fast Reactor (LFR) [ 14 ] was identified as one of six key technologies to be developed by the Generation IV International Forum in 2000. In 2002, the CIRCE large-scale liquid lead test facility was created at the ENEA-Brasimone Centre [ 15 ] in Italy, becoming a crucial R&D infrastructure for the development of LFR systems. [ 16 ] Throughout the early 2000s, the European Union became heavily involved in the advancement of ADS technology. In 2003, it launched a broad research and development programme as part of its 5th Framework Programme, bringing together around 50 academic and industrial organisations to focus on lead-based ADS technologies. [ 17 ] In 2013, Hydromine Nuclear Energy was incorporated to design the LFR-AS-200 and LFR-TL-30 reactors, which later became part of the Generation IV projects recognised by the International Atomic Energy Agency (IAEA). [ 18 ] In September 2021, newcleo was officially founded by Stefano Buono, Luciano Cinotti, and Elisabeth Rizzotti, building on this extensive history of research and development. 
The company acquired Hydromine Nuclear Energy, along with its portfolio of international patents, and successfully raised €100 million in capital to propel its nuclear energy solutions forward. [ 19 ] [ 20 ] In March 2022, newcleo held its first Annual General Meeting (AGM) and launched a €300 million equity raise to support the next phase of its growth. Around the same time, the company entered into a technological partnership with ENEA, the Italian National Agency for New Technologies, Energy and Sustainable Economic Development, a leader in research on liquid lead technologies for nuclear systems. [ 21 ] [ 22 ] In March 2023, newcleo signed an agreement with Enel, launched another capital raise, and completed the conceptual design for the LFR-AS-30 reactor. [ 23 ] During the Choose France summit in May 2023, the company disclosed plans for up to €3 billion in investments in France by 2030, including industrial projects, R&D initiatives, and engineering developments. The following month, newcleo was awarded funding as part of the France 2030 call for projects, an initiative designed to support innovative technologies in France. In July 2023, newcleo partnered with Fincantieri and RINA for a feasibility study focusing on the application of newcleo's reactors for naval propulsion. [ 24 ] [ 25 ] In October 2023, the company acquired SRS-Fucina, a global leader in engineering and manufacturing, with a specific focus on liquid lead systems. Additionally, a cooperation and an investment agreement were established with the Tosto Group. [ 26 ] The following month, newcleo participated in the World Nuclear Exhibition (WNE) and secured partnerships with Assystem, Ingérop, and Onet. [ 27 ] A further acquisition followed in December 2023, when newcleo purchased the Rütschi Group, a leader in nuclear pump manufacturing. [ 28 ] [ 29 ] The company also launched its first experimental facility, known as CAPSULE. In January 2024, newcleo signed an agreement with MAIRE for hydrogen and chemicals production. The same month, it also secured a contract with Nuclear Transport Solutions (NTS) to support its logistics needs. [ 30 ] [ 31 ] By April 2024, newcleo had submitted its Regulatory Justification in the UK and formed a partnership with CEA, France's Alternative Energies and Atomic Energy Commission. The company also launched its second experimental facility, CORE-1. [ 31 ] [ 32 ] [ 33 ] In May 2024, it joined the European Industrial Alliance on Small Modular Reactors (SMRs), aligning itself with other leading organisations in the development of this next-generation nuclear technology. Two months later, in July 2024, newcleo successfully completed the preparatory stage of the licensing process set by the French nuclear safety authorities, ASN and IRSN. [ 34 ] [ 35 ] In August 2024, newcleo signed a cooperation agreement with Slovak nuclear company VUJE to advance nuclear technology development. [ 4 ] In September 2024, the company entered into an agreement with Saipem to explore offshore applications of its technology. newcleo also relocated its holding headquarters from London to Paris. [ 36 ] [ 7 ] [ 37 ] Additionally, the company strengthened its partnership with the FALCON consortium, led by Ansaldo Nucleare and SCK-CEN (the Belgian Nuclear Research Centre), with the goal of accelerating the industrialisation of Lead-cooled Fast Reactor (LFR) technology. 
[ 38 ] In October 2024, newcleo's Lead-cooled Fast Reactor (LFR) was selected by the European Industrial Alliance on Small Modular Reactors as one of the projects to receive support. As part of this, newcleo formed a Project-Based Working Group, collaborating with the Alliance's Technical Working Groups on Skills, Fuel, R&D, Supply Chain, and Financing. [ 39 ] The company also joined forces with Orano, HEXANA, and Otrera to further develop Advanced Modular Reactors utilising recycled nuclear fuel. [ 40 ] In November 2024, newcleo was recognised by PitchBook as the top company for both Opportunity Score and Success Probability among next-generation nuclear fission companies. During the same period, it showcased its reactor safety options to ASN and IRSN and submitted a Generic Design Assessment (GDA) application for its 200MWe LFR technology in the UK. [ 41 ] [ 42 ] On December 16, 2024, newcleo submitted its French MOX Safety Option File (DOS) to the Nuclear Safety Authority (ASN) for its fuel assembly testing facility, initiating a new regulatory phase in which key safety options are assessed. This milestone plays an important role in advancing the application for the authorisation process to establish a Basic Nuclear Installation (INB). [ 43 ] Since its launch in 2021, newcleo has focused on the design of Generation IV Advanced Modular Reactors (AMRs) cooled by liquid lead, together with a MOX fuel production facility for its fast reactors. By combining existing technologies with a multi-recycling approach, the company aims to close the nuclear fuel cycle while generating low-carbon energy. newcleo supports both internal projects and the broader development of SMR supply chains across Europe and beyond. [ 2 ] The company builds on decades of international research focused on advancing nuclear energy technology, with the objective of addressing key challenges faced by the industry. These include the long-term management of nuclear waste through advanced fuel recycling, enhancing safety standards, and improving cost efficiency by optimising reactor performance and fuel usage. The project seeks to contribute to the development of more sustainable and scalable nuclear energy solutions. [ 5 ] [ 6 ] In January 2025, newcleo signed two framework agreements with Slovakian nuclear companies JAVYS and VUJE, aiming to advance the use of spent nuclear fuel and the development of Lead-Cooled Fast Reactor (LFR) technology. The first agreement, with JAVYS, outlines plans to establish the Centre for Development of Spent Nuclear Fuel Utilisation (CVP), with a joint venture (51% JAVYS, 49% newcleo) focusing on the construction of an Advanced Modular Reactor (AMR)-based nuclear power plant with up to four LFR-AS-200 reactors at Jaslovské Bohunice V1, as well as the development of a nuclear fuel supply chain. The project aims to integrate spent nuclear fuel recycling, supported by newcleo's planned fuel manufacturing facility in France, to reduce nuclear waste disposal needs. The second agreement, with VUJE, sets a framework for technical and commercial cooperation in developing LFR technology in Slovakia, with VUJE contributing its expertise in nuclear power plant construction and commissioning, starting with feasibility studies. [ 44 ] In February 2025, newcleo initiated the land acquisition process for its first LFR-AS-30 reactor in the Chinon Vienne et Loire community of municipalities in France, aiming to build a GEN-IV 30 MWe nuclear reactor by 2031. 
The project is intended as an industrial demonstrator to showcase newcleo's technology and support the development of the nuclear sector in France. Discussions with local authorities and stakeholders are underway, following the standard approval process for large-scale projects in France. The initiative will be subject to preliminary studies and regulatory procedures, including a review by the National Commission for Public Debate (CNDP). In December 2024, newcleo had already submitted its French MOX Safety Option File (DOS) to the Nuclear Safety Authority (ASN) for its fuel assembly testing facility and advanced regulatory work on safety options for the LFR-AS-30 reactor. [ 45 ] Small Modular Reactors are a new type of nuclear fission reactor, designed to offer greater flexibility and economic advantages over traditional large-scale reactors. Unlike conventional reactors that can generate more than 1 Gigawatt of electrical power (GWe), SMRs are smaller, producing less than 300 Megawatts of electrical power (MWe). This smaller size allows for standardised designs, which can be mass-produced in factories and transported to sites for single- or multi-unit installation. [ 46 ] The economic benefits of SMRs stem from their scalability and modular nature. They require a lower initial capital investment, making them more affordable and reducing the financial risk associated with large, single-build reactors. Serial production also leads to reduced costs through economies of scale, while modular construction allows for quicker assembly and shorter construction times. [ 46 ] Their smaller size offers greater flexibility in terms of site selection as they are suitable for use in a wide variety of applications, including remote locations, smaller electrical grids, and non-electrical uses such as desalination or hydrogen production. They can even be deployed for marine purposes, including floating reactors and propulsion systems. [ 47 ] The modularity of SMRs also supports a flexible financing model. Multiple reactors can be deployed in stages, with the revenue generated from one module financing the next. This chain financing approach makes SMRs accessible to a wider range of users and enhances their adaptability for different energy needs. [ 47 ] Small Modular Reactors (SMRs) employing one of the six Generation IV technologies are usually called Advanced Modular Reactors (AMRs). Since 2000, an international forum led by the U.S. Department of Energy (DOE) has focused on the development of six key Gen-IV nuclear systems. newcleo is working on the design, construction, and operation of Advanced Modular Reactors (AMRs) using liquid lead as a coolant: Lead-cooled Fast Reactors. [ 48 ] Among the Gen-IV technologies, Lead-cooled Fast Reactors (LFRs) stand out for several reasons. Unlike other reactor types, fast reactors like the LFR can close the fuel cycle, which allows for the sustainable use of nuclear energy by reusing what today goes to waste. LFRs are also designed for intrinsic safety in accident scenarios: for example, they operate at atmospheric pressure and, unlike sodium-cooled reactors, lead does not react violently with water or air. By exploiting lead's properties, LFRs can be designed to be safer and more cost-effective compared to other traditional and advanced reactor designs. [ 49 ] newcleo is focused on addressing the challenge of nuclear waste management through the production and use of Mixed Uranium Plutonium Oxide ( MOX ) fuel for fast reactors. 
MOX is produced using depleted uranium and plutonium, respectively a byproduct of the enrichment process and a nuclear material recovered from the reprocessing of spent nuclear fuel. This approach offers a potential solution to reducing the need for new uranium mining while also managing long-lasting nuclear waste more effectively. [ 3 ] [ 4 ] The strategy involves reprocessing spent nuclear fuel multiple times, [ 50 ] with the objective of consuming long half-life elements and discarding only fission products, thereby greatly reducing the volume and the half-life of the waste. The material recovered from the reprocessed spent fuel could then be used again to manufacture fresh MOX for newcleo's fast reactors. Fast reactors paired with fuel reprocessing offer the potential to close the nuclear fuel cycle, reducing the volume of long-lived nuclear waste and removing the need for new uranium mining; by extracting more energy from existing nuclear materials, they enable a more sustainable fuel cycle and reduce the environmental and financial burden of high-level nuclear waste. [ 51 ] newcleo is developing a fully-integrated fuel strategy to support its Lead-Cooled Fast SMRs, focusing on the production of its own MOX fuel. By repurposing depleted uranium and plutonium recovered from spent nuclear fuel, the company reduces the accumulation of nuclear waste and reduces the need for fresh uranium extraction, contributing to a more circular use of nuclear resources. [ 52 ] newcleo's FR-MOX production facility is designed to transform powdered plutonium and depleted uranium into fuel pellets, rods, and assemblies ready to be loaded into its reactors. The pilot line of this modular facility is designed with a capacity of 20 tHM/year, sufficient to power newcleo's first reactors. Over time, the production capacity will expand to fully supply its entire reactor fleet, supporting the long-term sustainability of its operations. [ 53 ] Currently, the design and development of the FR-MOX production facility are progressing, as is the licensing process with the French nuclear safety authorities, ASN (Autorité de Sûreté Nucléaire) and IRSN (Institut de Radioprotection et de Sûreté Nucléaire). Since September 2023, newcleo has been engaged in ongoing technical discussions with these regulatory bodies, as part of the pre-licensing phase. This marks progress in the company's strategy to create an integrated nuclear energy model where both reactors and fuel are developed and managed in-house, ensuring greater control over safety and fuel supply security. [ 53 ] On December 16, 2024, newcleo submitted its French MOX Safety Option File (DOS) to the Nuclear Safety Authority (ASN) for its fuel assembly testing facility, initiating a new regulatory phase in which key safety options are assessed. This milestone plays an important role in advancing the application for the authorisation process to establish a Basic Nuclear Installation (INB). [ 43 ] S.R.S. Servizi Ricerche e Sviluppo, based in Rome, Italy, specialises in the design and engineering of processes, plants, machinery and nuclear systems, including the decommissioning of power plants, nuclear waste management (covering conditioning and storage), the development of new power plants (fusion and Generation IV), and nuclear fuel cycle systems. 
In recent years, the company has expanded its focus on nuclear technology applications, strengthening its expertise in advanced engineering solutions and contributing to the design and construction of nuclear systems deploying liquid lead technology, the same technology at the core of newcleo's activities. It has been involved in 24 lead-cooled fast reactor (LFR) projects and has developed expertise in reactor lead cooling technology and nuclear fuel cycle closure. S.R.S. provides services that include design, technical specifications, procurement, project management, construction supervision, final testing, and engineering services. Its expertise spans multiple sectors, including nuclear energy, conventional energy (oil and gas), renewable energy, environmental protection, chemical plants, petrochemical engineering, steel working, and water treatment and desalination. [ 54 ] [ 55 ] Fucina Italia, located in Piombino (LI), Italy, has transitioned from being a manufacturer of high-technology automation and steel and naval structures to a company in the fields of nuclear decommissioning, nuclear waste management, and liquid lead systems. It operates a production platform with a total area of 20,000 square meters, including 9,000 square meters of covered space. The company also has 11,000 square meters of additional land, with 6,000 square meters available for future development. This site is planned to become a production hub for newcleo. Fucina Italia's activities include the design, manufacturing, and assembly of components, along with mechanical processing and the maintenance of lifting equipment and industrial machinery. Its services extend to multiple sectors, including steel machinery, cranes, cellulose handling in the port area, pressure vessels, containers for radioactive waste, heavy carpentry. [ 56 ] Rütschi Group, established in 1946, specialises in the production of engineered pumps for nuclear applications. With pumps installed in over 100 nuclear reactors, primarily across Europe, the company also participates in projects in Asia and South America. It operates two production facilities located in Mulhouse, France, and Möhlin, Switzerland, covering a combined area of 3,500 square meters, with potential for expansion, particularly at the Mulhouse site. Rütschi's offerings include the design, machining, welding, assembly, and testing of pumps, as well as the provision of spare parts for pumps in existing nuclear plants. The company also develops custom-made pumps for nuclear power plants, research centres, and naval applications. Its product range includes canned motor pumps, mechanical seal pumps, immersed vertical pumps, and submersible pumps for nuclear, industrial, and chemical uses. [ 57 ] The Leadership Team at newcleo is composed of professionals with experience in scientific, development, and business roles across multiple sectors. [ 58 ] newcleo's Board of Directors consists of individuals with international business experience and expertise in the nuclear sector. [ 59 ] Focused on the design, construction, and operation of Generation IV Small Modular Lead-Cooled Fast Reactors (SM-LFRs) powered by MOX fuel, newcleo aims to generate nuclear energy that is safe, clean, cost-effective, and sustainable over the long term. This approach is intended to contribute to energy solutions that minimise environmental impact while addressing the increasing global demand for reliable and affordable energy. 
[ 60 ] In line with its operational goals, newcleo has developed a comprehensive environmental, social, and governance (ESG) strategy, which is built around four key pillars: people, planet, prosperity, and principles of governance. Each of these areas reflects the company's commitment to assessing and improving its impact on both the environment and the communities it serves. [ 61 ] [ 62 ] As a signatory of the UN 24/7 Carbon-Free Energy Compact, newcleo aims to support the transition to carbon-neutral energy sources. [ 63 ] With oversight from its ESG committee, newcleo reports on its progress in these areas, ensuring a consistent approach to improving its social and environmental performance. Additionally, the company prioritises diversity and skill development within its workforce, acknowledging the role of these factors in supporting long-term organisational growth. newcleo's ESG strategy is integrated with its business objectives, incorporating sustainability into its operations and decision-making processes, and contributing to global efforts to reduce carbon emissions and support sustainable development. [ 64 ] [ 65 ]
https://en.wikipedia.org/wiki/NewCleo