In multicellular organisms, stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. They are the earliest type of cell in a cell lineage. [1] They are found in both embryonic and adult organisms, but they have slightly different properties in each. They are usually distinguished from progenitor cells, which cannot divide indefinitely, and precursor or blast cells, which are usually committed to differentiating into one cell type. In mammals, roughly 50 to 150 cells make up the inner cell mass during the blastocyst stage of embryonic development, around days 5–14. These have stem-cell capability. In vivo, they eventually differentiate into all of the body's cell types (making them pluripotent). This process starts with differentiation into the three germ layers – the ectoderm, mesoderm and endoderm – at the gastrulation stage. However, when they are isolated and cultured in vitro, they can be kept in the stem-cell stage and are known as embryonic stem cells (ESCs). Adult stem cells are found in a few select locations in the body, known as niches, such as those in the bone marrow or gonads. They exist to replenish rapidly lost cell types and are multipotent or unipotent, meaning they differentiate into only a few cell types or one type of cell. In mammals, they include, among others, hematopoietic stem cells, which replenish blood and immune cells; basal cells, which maintain the skin epithelium; and mesenchymal stem cells, which maintain bone, cartilage, muscle and fat cells. Adult stem cells are a small minority of cells; they are vastly outnumbered by the progenitor cells and terminally differentiated cells that they differentiate into. [1] Research into stem cells grew out of findings by Canadian biologists Ernest McCulloch, James Till and Andrew J. Becker at the University of Toronto and the Ontario Cancer Institute in the 1960s. [2][3] As of 2016, the only established medical therapy using stem cells is hematopoietic stem cell transplantation, [4] first performed in 1958 by French oncologist Georges Mathé. Since 1998, however, it has been possible to culture and differentiate human embryonic stem cells (in stem-cell lines). The process of isolating these cells has been controversial, because it typically results in the destruction of the embryo. Sources for isolating ESCs have been restricted in some European countries and Canada, but others such as the UK and China have promoted the research. [5] Somatic cell nuclear transfer is a cloning method that can be used to create a cloned embryo for the use of its embryonic stem cells in stem cell therapy. [6] In 2006, a Japanese team led by Shinya Yamanaka discovered a method to convert mature body cells back into stem cells. These were termed induced pluripotent stem cells (iPSCs). [7] The term "stem cell" was coined by Theodor Boveri and Valentin Haecker in the late 19th century. [8] Pioneering work on the theory of the blood stem cell was conducted in the early 20th century by Artur Pappenheim, Alexander A. Maximow and Franz Ernst Christian Neumann. [8] The key properties of a stem cell were first defined by Ernest McCulloch and James Till at the University of Toronto and the Ontario Cancer Institute in the early 1960s. They discovered the blood-forming stem cell, the hematopoietic stem cell (HSC), through their pioneering work in mice.
McCulloch and Till began a series of experiments in which bone marrow cells were injected into irradiated mice. They observed lumps in the spleens of the mice, the number of which was linearly proportional to the number of bone marrow cells injected (a toy calculation based on this linear relationship is sketched below). They hypothesized that each lump (colony) was a clone arising from a single marrow cell (stem cell). In subsequent work, McCulloch and Till, joined by graduate student Andrew John Becker and senior scientist Louis Siminovitch, confirmed that each lump did in fact arise from a single cell. Their results were published in Nature in 1963. In that same year, Siminovitch was a lead investigator for studies that found colony-forming cells were capable of self-renewal, which is a key defining property of stem cells that Till and McCulloch had theorized. [9] The first therapy using stem cells was a bone marrow transplant performed by French oncologist Georges Mathé in 1958 on five workers at the Vinča Nuclear Institute in Yugoslavia who had been affected by a criticality accident. The workers all survived. [10] In 1981, embryonic stem (ES) cells were first isolated and successfully cultured from mouse blastocysts by British biologists Martin Evans and Matthew Kaufman. This allowed the creation of murine genetic models, a system in which the genes of mice are deleted or altered in order to study their function in pathology. In 1991, a process for isolating human stem cells was patented by Ann Tsukamoto. By 1998, human embryonic stem cells were first isolated by American biologist James Thomson, which made it possible to develop new transplantation methods and to generate various cell types for testing new treatments. In 2006, Shinya Yamanaka's team in Kyoto, Japan converted fibroblasts into pluripotent stem cells by modifying the expression of only four genes. The feat represents the origin of induced pluripotent stem cells, known as iPS cells. [7] In 2011, a female maned wolf, run over by a truck, underwent stem cell treatment at the Zoo Brasília, this being the first recorded case of the use of stem cells to heal injuries in a wild animal. [11][12] The classical definition of a stem cell requires that it possess two properties: self-renewal, the ability to go through numerous cycles of cell division while maintaining the undifferentiated state, and potency, the capacity to differentiate into specialized cell types. Two mechanisms ensure that a stem cell population is maintained (does not shrink in size): 1. Asymmetric cell division: a stem cell divides into one mother cell, which is identical to the original stem cell, and another daughter cell, which is differentiated. 2. Stochastic differentiation: when one stem cell grows and divides into two differentiated daughter cells, another stem cell undergoes mitosis and produces two stem cells identical to the original. When a stem cell self-renews, it divides without disrupting the undifferentiated state. This self-renewal demands control of the cell cycle as well as upkeep of multipotency or pluripotency, which all depends on the stem cell. [13] Stem cells use telomerase, an enzyme that restores telomeres, to protect their DNA and extend their cell division limit (the Hayflick limit). [14] Potency specifies the differentiation potential (the potential to differentiate into different cell types) of the stem cell. [15] In practice, stem cells are identified by whether they can regenerate tissue. For example, the defining test for bone marrow or hematopoietic stem cells (HSCs) is the ability to transplant the cells and save an individual without HSCs. This demonstrates that the cells can produce new blood cells over the long term. It should also be possible to isolate stem cells from the transplanted individual, which can themselves be transplanted into another individual without HSCs, demonstrating that the stem cell was able to self-renew.
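The linearity between injected cells and spleen colonies is what allows a colony count to stand in for a stem cell count. As a rough, purely illustrative sketch (the dose and colony numbers below are invented, not data from the 1963 experiments), the frequency of colony-forming cells can be estimated by fitting a line through the origin:

```python
# Illustrative sketch only: estimating the frequency of colony-forming cells
# from a spleen colony (CFU-S) assay, assuming colony counts rise linearly
# with the injected dose, as McCulloch and Till observed. All numbers are
# invented for the example.
doses = [1e4, 5e4, 1e5, 2e5]        # bone marrow cells injected per mouse
colonies = [1.2, 5.8, 11.5, 23.0]   # mean spleen colonies counted per mouse

# Least-squares slope of a line through the origin, colonies ~= f * dose:
# f then estimates the fraction of injected cells that form a colony.
f = sum(d * c for d, c in zip(doses, colonies)) / sum(d * d for d in doses)
print(f"Estimated colony-forming frequency: {f:.2e} per injected cell")
```

Forcing the fit through the origin encodes the single-cell-origin hypothesis: injecting zero cells should yield zero colonies.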
Properties of stem cells can be illustrated in vitro, using methods such as clonogenic assays, in which single cells are assessed for their ability to differentiate and self-renew. [18][19] Stem cells can also be isolated by their possession of a distinctive set of cell surface markers. However, in vitro culture conditions can alter the behavior of cells, making it unclear whether the cells will behave in a similar manner in vivo. There is considerable debate as to whether some proposed adult cell populations are truly stem cells. [20] Embryonic stem cells (ESCs) are the cells of the inner cell mass of a blastocyst, formed prior to implantation in the uterus. [21] In human embryonic development the blastocyst stage is reached 4–5 days after fertilization, at which time it consists of 50–150 cells. ESCs are pluripotent and give rise during development to all derivatives of the three germ layers: ectoderm, endoderm and mesoderm. In other words, they can develop into each of the more than 200 cell types of the adult body when given sufficient and necessary stimulation for a specific cell type. They do not contribute to the extraembryonic membranes or to the placenta. During embryonic development the cells of the inner cell mass continuously divide and become more specialized. For example, a portion of the ectoderm in the dorsal part of the embryo specializes as 'neurectoderm', which will become the future central nervous system (CNS). [22] Later in development, neurulation causes the neurectoderm to form the neural tube. At the neural tube stage, the anterior portion undergoes encephalization to generate or 'pattern' the basic form of the brain. At this stage of development, the principal cell type of the CNS is considered a neural stem cell. The neural stem cells self-renew and at some point transition into radial glial progenitor cells (RGPs). Early-formed RGPs self-renew by symmetrical division to form a reservoir group of progenitor cells. These cells transition to a neurogenic state and start to divide asymmetrically to produce a large diversity of neuron types, each with unique gene expression, morphological, and functional characteristics. The process of generating neurons from radial glial cells is called neurogenesis. The radial glial cell has a distinctive bipolar morphology, with highly elongated processes spanning the thickness of the neural tube wall. It shares some glial characteristics, most notably the expression of glial fibrillary acidic protein (GFAP). [23][24] The radial glial cell is the primary neural stem cell of the developing vertebrate CNS, and its cell body resides in the ventricular zone, adjacent to the developing ventricular system. Neural stem cells are committed to the neuronal lineages (neurons, astrocytes, and oligodendrocytes), and thus their potency is restricted. [22] Nearly all research to date has made use of mouse embryonic stem cells (mES) or human embryonic stem cells (hES) derived from the early inner cell mass. Both have the essential stem cell characteristics, yet they require very different environments in order to maintain an undifferentiated state. Mouse ES cells are grown on a layer of gelatin as an extracellular matrix (for support) and require the presence of leukemia inhibitory factor (LIF) in serum media. A drug cocktail containing inhibitors to GSK3B and the MAPK/ERK pathway, called 2i, has also been shown to maintain pluripotency in stem cell culture. [25]
Human ESCs are grown on a feeder layer of mouse embryonic fibroblasts and require the presence of basic fibroblast growth factor (bFGF or FGF-2). [26] Without optimal culture conditions or genetic manipulation, [27] embryonic stem cells will rapidly differentiate. A human embryonic stem cell is also defined by the expression of several transcription factors and cell surface proteins. The transcription factors Oct-4, Nanog, and Sox2 form the core regulatory network that ensures the suppression of genes that lead to differentiation and the maintenance of pluripotency. [28] The cell surface antigens most commonly used to identify hES cells are the glycolipids stage-specific embryonic antigens 3 and 4, and the keratan sulfate antigens Tra-1-60 and Tra-1-81. The molecular definition of a stem cell includes many more proteins and continues to be a topic of research. [29] By using human embryonic stem cells to produce specialized cells like nerve cells or heart cells in the lab, scientists can gain access to adult human cells without taking tissue from patients. They can then study these specialized adult cells in detail to try to discern complications of diseases, or to study cell reactions to proposed new drugs. Because of their combined abilities of unlimited expansion and pluripotency, embryonic stem cells remain a theoretically potential source for regenerative medicine and tissue replacement after injury or disease; [30] however, there are currently no approved treatments using ES cells. The first human trial was approved by the US Food and Drug Administration in January 2009. [31] However, the trial was not initiated until October 13, 2010, in Atlanta, for spinal cord injury research. On November 14, 2011, the company conducting the trial (Geron Corporation) announced that it would discontinue further development of its stem cell programs. [32] Differentiating ES cells into usable cells while avoiding transplant rejection is just one of the hurdles that embryonic stem cell researchers still face. [33] Embryonic stem cells, being pluripotent, require specific signals for correct differentiation – if injected directly into another body, ES cells will differentiate into many different types of cells, causing a teratoma. Ethical considerations regarding the use of unborn human tissue are another reason for the lack of approved treatments using embryonic stem cells. Many nations currently have moratoria or limitations on either human ES cell research or the production of new human ES cell lines. Mesenchymal stem cells (MSCs), also known as mesenchymal stromal cells or medicinal signaling cells, are multipotent cells found in adult tissues, for example in muscle, liver, bone marrow and adipose tissue. Mesenchymal stem cells usually function as structural support in various organs and control the movement of substances. MSCs can differentiate into numerous cell types, such as adipocytes, osteocytes, and chondrocytes, which derive from the mesodermal layer. [34] The mesoderm gives rise to the body's skeletal elements, such as cartilage and bone. The prefix "meso", from the Greek for "middle", reflects the fact that mesenchymal cells are able to range and travel in early embryonic growth between the ectodermal and endodermal layers.
This space-filling capacity makes them key for repairing wounds in adult organisms that involve mesenchymal cells in the dermis (skin), bone, or muscle. [35] Mesenchymal stem cells are considered essential for regenerative medicine and are broadly studied in clinical trials. They are easily isolated and give a high yield, and they show high plasticity, which enables them to modulate inflammation and encourage cell growth, cell differentiation, and tissue restoration through immunomodulation and immunosuppression. MSCs sourced from bone marrow require an aggressive isolation procedure, and the quantity and quality of the isolated cells vary with the age of the donor. Bone marrow aspirates tend to have lower concentrations of MSCs than bone marrow stroma. MSCs are known to be heterogeneous, and they express a high level of pluripotency markers when compared to other types of stem cells, such as embryonic stem cells. [34] MSC injection leads to wound healing primarily through stimulation of angiogenesis. [36] Embryonic stem cells (ESCs) have the ability to divide indefinitely while keeping their pluripotency, which is made possible through specialized mechanisms of cell cycle control. [37] Compared to proliferating somatic cells, ESCs have unique cell cycle characteristics – such as rapid cell division caused by a shortened G1 phase, an absent G0 phase, and modifications in cell cycle checkpoints – which leave the cells mostly in S phase at any given time. [37][38] ESCs' rapid division is demonstrated by their short doubling time, which ranges from 8 to 10 hours, whereas somatic cells have doubling times of approximately 20 hours or longer. [39] (For example, over 40 hours a population with a 10-hour doubling time doubles four times, a 16-fold expansion, whereas a 20-hour doubling time allows only two doublings, a 4-fold expansion.) As cells differentiate, these properties change: G1 and G2 phases lengthen, leading to longer cell division cycles. This suggests that a specific cell cycle structure may contribute to the establishment of pluripotency. [37] In particular, because G1 is the phase in which cells have increased sensitivity to differentiation, a shortened G1 is one of the key characteristics of ESCs and plays an important role in maintaining an undifferentiated phenotype. Although the exact molecular mechanism remains only partially understood, several studies have offered insight into how ESCs progress through G1 – and potentially other phases – so rapidly. [38] The cell cycle is regulated by a complex network of cyclins, cyclin-dependent kinases (Cdk), cyclin-dependent kinase inhibitors (Cdkn), pocket proteins of the retinoblastoma (Rb) family, and other accessory factors. [39] Foundational insight into the distinctive regulation of the ESC cell cycle was gained by studies on mouse ESCs (mESCs). [38] mESCs show a cell cycle with a highly abbreviated G1 phase, which enables cells to rapidly alternate between M phase and S phase. In a somatic cell cycle, oscillatory activity of Cyclin-Cdk complexes is observed in sequential action, which controls crucial regulators of the cell cycle to induce unidirectional transitions between phases: Cyclin D and Cdk4/6 are active in the G1 phase, while Cyclin E and Cdk2 are active during the late G1 phase and S phase; Cyclin A and Cdk2 are active in the S phase and G2, while Cyclin B and Cdk1 are active in G2 and M phase. [39] However, in mESCs, this typically ordered and oscillatory activity of Cyclin-Cdk complexes is absent.
Rather, the Cyclin E/Cdk2 complex is constitutively active throughout the cycle, keeping retinoblastoma protein (pRb) hyperphosphorylated and thus inactive. This allows for a direct transition from M phase to the late G1 phase, leading to the absence of D-type cyclins and therefore a shortened G1 phase. [38] Cdk2 activity is crucial for both cell cycle regulation and cell-fate decisions in mESCs; downregulation of Cdk2 activity prolongs G1 phase progression, establishes a somatic cell-like cell cycle, and induces expression of differentiation markers. [40] In human ESCs (hESCs), the duration of G1 is dramatically shortened. This has been attributed to high mRNA levels of the G1-related Cyclin D2 and Cdk4 genes and low levels of cell cycle regulatory proteins that inhibit cell cycle progression at G1, such as p21 Cip1, p27 Kip1, and p57 Kip2. [37][41] Furthermore, regulators of Cdk4 and Cdk6 activity, such as members of the Ink family of inhibitors (p15, p16, p18, and p19), are expressed at low levels or not at all. Thus, similar to mESCs, hESCs show high Cdk activity, with Cdk2 exhibiting the highest kinase activity. Also similar to mESCs, hESCs demonstrate the importance of Cdk2 in G1 phase regulation: the G1 to S transition is delayed when Cdk2 activity is inhibited, and G1 is arrested when Cdk2 is knocked down. [37] However, unlike mESCs, hESCs have a functional G1 phase: the activities of the Cyclin E/Cdk2 and Cyclin A/Cdk2 complexes are cell cycle-dependent, and the Rb checkpoint in G1 is functional. [39] ESCs are also characterized by G1 checkpoint non-functionality, even though the G1 checkpoint is crucial for maintaining genomic stability. In response to DNA damage, ESCs do not stop in G1 to repair the damage but instead depend on the S and G2/M checkpoints or undergo apoptosis. The absence of a G1 checkpoint in ESCs allows for the removal of cells with damaged DNA, hence avoiding potential mutations from inaccurate DNA repair. [37] Consistent with this idea, ESCs are hypersensitive to DNA damage, which minimizes mutations passed on to the next generation. [39] The primitive stem cells located in the organs of fetuses are referred to as fetal stem cells. [42] There are two types: fetal proper stem cells, which come from the tissue of the fetus itself, and extraembryonic fetal stem cells, which come from the extraembryonic membranes. Adult stem cells, also called somatic (from Greek σωματικóς, "of the body") stem cells, are stem cells which maintain and repair the tissue in which they are found. [44] There are three known accessible sources of autologous adult stem cells in humans: bone marrow, adipose tissue, and blood. Stem cells can also be taken from umbilical cord blood just after birth. Of all stem cell types, autologous harvesting involves the least risk. By definition, autologous cells are obtained from one's own body, just as one may bank one's own blood for elective surgical procedures. [citation needed] Pluripotent adult stem cells are rare and generally small in number, but they can be found in umbilical cord blood and other tissues. [48] Bone marrow is a rich source of adult stem cells, [49] which have been used in treating several conditions including liver cirrhosis, [50] chronic limb ischemia [51] and end-stage heart failure. [52] The quantity of bone marrow stem cells declines with age and is greater in males than females during reproductive years. [53] Much adult stem cell research to date has aimed to characterize their potency and self-renewal capabilities. [54] DNA damage accumulates with age in both stem cells and the cells that comprise the stem cell environment.
This accumulation is considered to be responsible, at least in part, for increasing stem cell dysfunction with aging (see DNA damage theory of aging). [55] Most adult stem cells are lineage-restricted (multipotent) and are generally referred to by their tissue of origin (mesenchymal stem cell, adipose-derived stem cell, endothelial stem cell, dental pulp stem cell, etc.). [56][57] Muse cells (multi-lineage differentiating stress enduring cells) are a recently discovered pluripotent stem cell type found in multiple adult tissues, including adipose tissue, dermal fibroblasts, and bone marrow. While rare, Muse cells are identifiable by their expression of SSEA-3, a marker for undifferentiated stem cells, and of general mesenchymal stem cell markers such as CD90 and CD105. When subjected to single-cell suspension culture, the cells generate clusters that are similar to embryoid bodies in morphology as well as gene expression, including the canonical pluripotency markers Oct4, Sox2, and Nanog. [58] Adult stem cell treatments have been successfully used for many years to treat leukemia and related bone/blood cancers through bone marrow transplants. [59] Adult stem cells are also used in veterinary medicine to treat tendon and ligament injuries in horses. [60] The use of adult stem cells in research and therapy is not as controversial as the use of embryonic stem cells, because the production of adult stem cells does not require the destruction of an embryo. Additionally, in instances where adult stem cells are obtained from the intended recipient (an autograft), the risk of rejection is essentially non-existent. Consequently, more US government funding is being provided for adult stem cell research. [61] With the increasing demand for human adult stem cells for both research and clinical purposes (typically 1–5 million cells per kg of body weight are required per treatment – on the order of 70–350 million cells for a 70 kg patient), it is of utmost importance to bridge the gap between the need to expand the cells in vitro and the limits imposed by the factors underlying replicative senescence. Adult stem cells are known to have a limited lifespan in vitro and to enter replicative senescence almost undetectably soon after the start of in vitro culturing. [62] Hematopoietic stem cells (HSCs) are vulnerable to DNA damage and mutations that increase with age. [63] This vulnerability may explain the increased risk of slow-growing blood cancers (myeloid malignancies) in the elderly. [63] Several factors appear to influence HSC aging, including responses to the production of reactive oxygen species that may cause DNA damage and genetic mutations, as well as altered epigenetic profiling. [64] Also called perinatal stem cells, multipotent amniotic stem cells are found in amniotic fluid and umbilical cord blood. These stem cells are very active, expand extensively without feeders and are not tumorigenic. Amniotic stem cells can differentiate into cells of adipogenic, osteogenic, myogenic, endothelial, hepatic and also neuronal lines. [65] Amniotic stem cells are a topic of active research. Use of stem cells from amniotic fluid overcomes the ethical objections to using human embryos as a source of cells. Roman Catholic teaching forbids the use of embryonic stem cells in experimentation; accordingly, the Vatican newspaper Osservatore Romano called amniotic stem cells "the future of medicine". [66]
It is possible to collect amniotic stem cells for donors or for autologous use: the first US amniotic stem cell bank [67][68] was opened in 2009 in Medford, MA, by Biocell Center Corporation [69][70][71] and collaborates with various hospitals and universities all over the world. [72] Adult stem cells have limitations with their potency; unlike embryonic stem cells (ESCs), they are not able to differentiate into cells from all three germ layers. As such, they are deemed multipotent. However, reprogramming allows for the creation of pluripotent cells, induced pluripotent stem cells (iPSCs), from adult cells. These are not adult stem cells, but somatic cells (e.g. epithelial cells) reprogrammed to give rise to cells with pluripotent capabilities. Using genetic reprogramming with protein transcription factors, pluripotent stem cells with ESC-like capabilities have been derived. [73][74][75] The first demonstration of induced pluripotent stem cells was conducted by Shinya Yamanaka and his colleagues at Kyoto University. [76] They used the transcription factors Oct3/4, Sox2, c-Myc, and Klf4 to reprogram mouse fibroblast cells into pluripotent cells. [73][77] Subsequent work used these factors to induce pluripotency in human fibroblast cells. [78] Junying Yu, James Thomson, and their colleagues at the University of Wisconsin–Madison used a different set of factors, Oct4, Sox2, Nanog and Lin28, and carried out their experiments using cells from human foreskin. [73][79] They were thus able to replicate Yamanaka's finding that inducing pluripotency in human cells was possible. Induced pluripotent stem cells differ from embryonic stem cells. They share many similar properties, such as pluripotency and differentiation potential, the expression of pluripotency genes, epigenetic patterns, embryoid body and teratoma formation, and viable chimera formation, [76][77] but there are many differences within these properties. The chromatin of iPSCs appears to be more "closed" or methylated than that of ESCs. [76][77] Similarly, the gene expression pattern differs between ESCs and iPSCs, and even between iPSCs sourced from different origins. [76] There are thus questions about the "completeness" of reprogramming and the somatic memory of induced pluripotent stem cells. Despite this, inducing somatic cells to be pluripotent appears to be viable. As a result of the success of these experiments, Ian Wilmut, who helped create the first cloned animal, Dolly the sheep, announced that he would abandon somatic cell nuclear transfer as an avenue of research. [80] The ability to induce pluripotency benefits developments in tissue engineering. By providing a suitable scaffold and microenvironment, iPSCs can be differentiated into cells of therapeutic application, and into in vitro models to study toxins and pathogenesis. [81] Induced pluripotent stem cells provide several therapeutic advantages. Like ESCs, they are pluripotent. They thus have great differentiation potential; theoretically, they could produce any cell within the human body (if reprogramming to pluripotency were "complete"). [76] Moreover, unlike ESCs, they could potentially allow doctors to create a pluripotent stem cell line for each individual patient. [82] Frozen blood samples can be used as a valuable source of induced pluripotent stem cells. [83]
Patient-specific stem cells allow for the screening for side effects before drug treatment, as well as a reduced risk of transplantation rejection. [82] Despite their currently limited therapeutic use, iPSCs hold great potential for future use in medical treatment and research. The key factors controlling the cell cycle also regulate pluripotency. Thus, manipulation of relevant genes can maintain pluripotency and reprogram somatic cells to an induced pluripotent state. [39] However, reprogramming of somatic cells is often low in efficiency and considered stochastic. [84] Building on the idea that a more rapid cell cycle is a key component of pluripotency, reprogramming efficiency can be improved. Methods for improving pluripotency through manipulation of cell cycle regulators include: overexpression of Cyclin D/Cdk4, phosphorylation of Sox2 at S39 and S253, overexpression of Cyclin A and Cyclin E, knockdown of Rb, and knockdown of members of the Cip/Kip family or the Ink family. [39] Furthermore, reprogramming efficiency is correlated with the number of cell divisions that occur during the stochastic phase, as suggested by the growing inefficiency of reprogramming of older or slowly dividing cells. [85] Lineage tracing is an important procedure in the analysis of developing embryos, since cell lineages show the relationship between cells at each division. Analyzing stem cell lineages along the way helps in recognizing stem cell effectiveness, lifespan, and other factors. With cell lineage techniques, mutant genes can be analyzed in stem cell clones, which can help to identify the genetic pathways that regulate how the stem cell performs. [86] To ensure self-renewal, stem cells undergo two types of cell division. Symmetric division gives rise to two identical daughter cells, both endowed with stem cell properties. Asymmetric division, on the other hand, produces only one stem cell and a progenitor cell with limited self-renewal potential. Progenitors can go through several rounds of cell division before terminally differentiating into a mature cell. It is possible that the molecular distinction between symmetric and asymmetric divisions lies in differential segregation of cell membrane proteins (such as receptors) between the daughter cells. [87] An alternative theory is that stem cells remain undifferentiated due to environmental cues in their particular niche. Stem cells differentiate when they leave that niche or no longer receive those signals. Studies in the Drosophila germarium have identified the signals decapentaplegic and adherens junctions that prevent germarium stem cells from differentiating. [88][89] In the United States, Executive Order 13505 established that federal money can be used for research in which approved human embryonic stem-cell (hESC) lines are used, but it cannot be used to derive new lines. [90] The National Institutes of Health (NIH) Guidelines on Human Stem Cell Research, effective July 7, 2009, implemented Executive Order 13505 by establishing criteria which hESC lines must meet to be approved for funding. [91] The NIH Human Embryonic Stem Cell Registry can be accessed online and has updated information on cell lines eligible for NIH funding. [92] There were 486 approved lines as of January 2022. [93] Stem cell therapy is the use of stem cells to treat or prevent a disease or condition.
Bone marrow transplantation is a form of stem cell therapy that has been used for many years, because it has proven effective in clinical trials. [94][95] Stem cell implantation may help in strengthening the left ventricle of the heart, as well as preserving heart tissue, in patients who have suffered heart attacks in the past. [96] For over 50 years, hematopoietic stem cell transplantation (HSCT) has been used to treat people with conditions such as leukaemia and lymphoma; this is the only widely practiced form of stem-cell therapy. [94][95][97] As of 2016, the only established therapy using stem cells is hematopoietic stem cell transplantation. [98] This usually takes the form of a bone-marrow transplantation, but the cells can also be derived from umbilical cord blood. Research is underway to develop various sources for stem cells as well as to apply stem-cell treatments for neurodegenerative diseases [99][100][101] and conditions such as diabetes and heart disease. Stem cell treatments may lower the symptoms of the disease or condition being treated, which may allow patients to reduce their drug intake for that disease or condition. Stem cell treatment may also provide knowledge for society to further stem cell understanding and future treatments. [102] The physician's creed is to do no harm, and stem cells may make that simpler than before. Surgical processes are by their nature damaging: tissue has to be sacrificed in order to reach a successful outcome, there is a possibility of infection, and if the procedure fails, further surgery may be required. Stem cell treatments may allow some of these dangers of surgical intervention, along with the risks associated with anesthesia, to be avoided. [103] Moreover, stem cells can be harvested from the patient's own body and redeployed where they are needed. Since they come from the patient's own body, this is referred to as an autologous treatment; autologous treatments are considered the safest because there is likely zero probability of rejection of donor material. Stem cell treatments may require immunosuppression, either because radiation is required before the transplant to remove the patient's previous cells, or because the patient's immune system may target the stem cells. One approach to avoid the second possibility is to use stem cells from the same patient who is being treated. Pluripotency in certain stem cells can also make it difficult to obtain a specific cell type. It is also difficult to obtain the exact cell type needed, because not all cells in a population differentiate uniformly, and undifferentiated cells can create tissues other than the desired types. [104] Some stem cells form tumors after transplantation; [105] pluripotency is linked to tumor formation, especially in embryonic stem cells, fetal proper stem cells, and induced pluripotent stem cells. Fetal proper stem cells form tumors despite being only multipotent. [106] Ethical concerns are also raised about the practice of using or researching embryonic stem cells. Harvesting cells from the blastocyst results in the death of the blastocyst, and the concern is whether or not the blastocyst should be considered a human life. [107] The debate on this issue is mainly a philosophical one, not a scientific one. Stem cell tourism is the part of the medical tourism industry in which patients travel to obtain stem cell procedures. [108]
The United States has seen an explosion of "stem cell clinics". [109] Stem cell procedures are highly profitable for clinics, and their advertising sounds authoritative, but the efficacy and safety of the procedures are unproven. Patients sometimes experience complications, such as spinal tumors [110] and death. The high expense can also lead to financial problems. [110] According to researchers, there is a need to educate the public, patients, and doctors about this issue. [111] According to the International Society for Stem Cell Research, the largest academic organization that advocates for stem cell research, stem cell therapies are under development and cannot yet be said to be proven. [112][113] Doctors should inform patients that clinical trials continue to investigate whether these therapies are safe and effective, but unethical clinics present them as proven. [114] Some of the fundamental patents covering human embryonic stem cells are owned by the Wisconsin Alumni Research Foundation (WARF) – patents 5,843,780, 6,200,806, and 7,029,913, invented by James A. Thomson. WARF does not enforce these patents against academic scientists, but does enforce them against companies. [115] In 2006, a request for the US Patent and Trademark Office (USPTO) to re-examine the three patents was filed by the Public Patent Foundation [116] on behalf of its client, the non-profit patent-watchdog group Consumer Watchdog (formerly the Foundation for Taxpayer and Consumer Rights). [115] In the re-examination process, which involves several rounds of discussion between the USPTO and the parties, the USPTO initially agreed with Consumer Watchdog and rejected all the claims in all three patents; [117] however, in response, WARF amended the claims of all three patents to make them narrower, and in 2008 the USPTO found the amended claims in all three patents to be patentable. The decision on one of the patents (7,029,913) was appealable, while the decisions on the other two were not. [118][119] Consumer Watchdog appealed the granting of the '913 patent to the USPTO's Board of Patent Appeals and Interferences (BPAI), which granted the appeal, and in 2010 the BPAI decided that the amended claims of the '913 patent were not patentable. [120] However, WARF was able to re-open prosecution of the case and did so, amending the claims of the '913 patent again to make them narrower, and in January 2013 the amended claims were allowed. [121] In July 2013, Consumer Watchdog announced that it would appeal the decision to allow the claims of the '913 patent to the US Court of Appeals for the Federal Circuit (CAFC), the federal appeals court that hears patent cases. [122] At a hearing in December 2013, the CAFC raised the question of whether Consumer Watchdog had legal standing to appeal; the case could not proceed until that issue was resolved. [123] Stem cell treatment is being investigated for a range of diseases and conditions, and research is underway to develop various sources for stem cells. [146] Research is also attempting to generate organoids using stem cells, which would allow for further understanding of human development, organogenesis, and the modeling of human diseases. [147] Engineered 'synthetic organizer' (SO) cells can instruct stem cells to grow into specific tissues and organs. The approach used native and synthetic cell adhesion molecules (CAMs), proteins that help make cells sticky. The organizer cells self-assembled around mouse ESCs.
These cells were engineered to produce morphogens, signaling molecules that direct cellular development based on their concentration. Delivered morphogens disperse, leaving higher concentrations closer to the source and lower concentrations further away. These gradients signal cells' ultimate roles, such as nerve, skin, or connective tissue (a toy gradient-and-threshold model is sketched below). The engineered organizer cells were also fitted with a chemical switch that enabled the researchers to turn the delivery of cellular instructions on and off, as well as a 'suicide switch' for eliminating the cells when needed. SO cells carry spatial and biochemical information, allowing considerable discretion in organoid formation. [148] Hepatotoxicity and drug-induced liver injury account for a substantial number of failures of new drugs in development and of market withdrawals, highlighting the need for screening assays, such as stem cell-derived hepatocyte-like cells, that are capable of detecting toxicity early in the drug development process. [149] In August 2021, researchers at the Princess Margaret Cancer Centre at the University Health Network published their discovery of a dormancy mechanism in key stem cells which could help develop cancer treatments in the future. [150]
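The concentration-dependent read-out described above is often idealized as a steady-state gradient with fate thresholds (the classic "French flag" picture). The following toy sketch illustrates that idealization only; the decay length, thresholds, and fate labels are invented for the example and are not parameters from the organizer-cell study.

```python
import math

# Toy model: a morphogen secreted at x = 0 decays exponentially with
# distance, and cells adopt fates according to threshold concentrations.
# All parameters below are invented for illustration.
C0 = 1.0           # relative concentration at the source
DECAY_LEN = 50.0   # length scale of the gradient, in micrometres

def concentration(x_um: float) -> float:
    """Steady-state exponential gradient: C(x) = C0 * exp(-x / DECAY_LEN)."""
    return C0 * math.exp(-x_um / DECAY_LEN)

def fate(c: float) -> str:
    """Map a local concentration to a hypothetical cell fate."""
    if c > 0.5:
        return "fate A (near the source)"
    if c > 0.1:
        return "fate B (intermediate)"
    return "fate C (far from the source)"

for x in (0, 40, 80, 160):
    c = concentration(x)
    print(f"x = {x:3d} um   C = {c:.2f}   -> {fate(c)}")
```

In the real system, transport dynamics, cell rearrangement, and multiple interacting morphogens make the picture far richer than this one-dimensional, steady-state caricature.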
https://en.wikipedia.org/wiki/Stem_cell
Stem cell laws are the laws, rules, and policy governance concerning the sources, research, and uses in treatment of stem cells in humans. These laws have been the source of much controversy and vary significantly by country. [1] In the European Union, stem cell research using the human embryo is permitted in Sweden, Spain, Finland, Belgium, Greece, Britain, Denmark and the Netherlands; [2] however, it is illegal in Germany, Austria, Ireland, Italy, and Portugal. The issue has similarly divided the United States, with several states enforcing a complete ban and others giving support. [3] Elsewhere, Japan, India, Iran, Israel, South Korea, China, and Australia are supportive. However, New Zealand, most of Africa (except South Africa), and most of South America (except Brazil) are restrictive. The information presented here covers the legal implications of embryonic stem cells (ES), rather than induced pluripotent stem cells (iPSCs). The laws surrounding the two differ because, while both have similar capacities for differentiation, their modes of derivation do not: embryonic stem cells are taken from embryoblasts, whereas induced pluripotent stem cells are reprogrammed from differentiated somatic adult cells. [4] Stem cells are cells found in most, if not all, multicellular organisms. A common example is the hematopoietic stem cell (HSC), a multipotent stem cell that gives rise to cells of the blood lineage. In contrast to multipotent stem cells, embryonic stem cells are pluripotent and are thought to be able to give rise to all cells of the body. Embryonic stem cells were isolated in mice in 1981, and in humans in 1998. [5] Stem cell treatments are a type of cell therapy that introduce new cells into adult bodies for possible treatment of cancer, diabetes, and other medical conditions; related techniques include somatic cell nuclear transfer. Cloning also might be done with stem cells, and stem cells have been used to repair tissue damaged by disease. [6] Because ES cells are cultured from the embryoblast 4–5 days after fertilization, harvesting them is most often done from donated embryos from in vitro fertilization (IVF) clinics. In January 2007, researchers at Wake Forest University reported that "stem cells drawn from amniotic fluid donated by pregnant women hold much of the same promise as embryonic stem cells." [5] The European Union has yet to issue consistent regulations with respect to stem cell research in member states. Whereas Germany, Austria, Italy, Finland, Portugal and the Netherlands prohibit or severely restrict the use of embryonic stem cells, Greece, Sweden, Spain and the United Kingdom have created the legal basis to support this research. [7] Belgium bans reproductive cloning but allows therapeutic cloning of embryos. [1] France prohibits reproductive cloning and embryo creation for research purposes, but enacted laws (with a sunset provision expiring in 2009) to allow scientists to conduct stem cell research on large numbers of embryos imported from in vitro fertilization treatments. [1] Germany has restrictive policies for stem cell research, but a 2008 law authorizes "the use of imported stem cell lines produced before May 1, 2007." [1] Italy has a 2004 law that forbids all sperm or egg donations and the freezing of embryos, but allows, in effect, the use of existing stem cell lines that have been imported. [1] Sweden forbids reproductive cloning, but allows therapeutic cloning and has authorized a stem cell bank. [1][7]
According to modern stem cell researchers, Spain is one of the leaders in stem cell research and currently has one of the most progressive legislative frameworks worldwide with respect to human embryonic stem cell (hESC) research. [8] The new Spanish law allows existing frozen embryos – of which there are estimated to be tens of thousands in Spain – to be kept for the patients' future use, donated to another infertile couple, or used in research. [9] Under Spain's 2003 law, embryos left over from IVF and donated by the couple that created them can be used in research, including ES cell research, provided they have been frozen for more than five years. [10] In 2001, the British Parliament amended the Human Fertilisation and Embryology Act 1990 (since amended by the Human Fertilisation and Embryology Act 2008) to permit the destruction of embryos for hESC harvests, but only if the research satisfies certain statutory requirements. The United Kingdom is one of the leaders in stem cell research, in the opinion of Lord Sainsbury, Science and Innovation Minister for the UK. [11] A new £10 million stem cell research centre has been announced at the University of Cambridge. [12] The primary legislation in South Africa that deals with embryo research is the Human Tissue Act, which is set to be replaced by Chapter 8 of the National Health Act. The NHA Chapter 8 has been enacted by parliament, but not yet signed into force by the president, and the process of finalising the associated regulations is still underway. The NHA Chapter 8 allows the Minister of Health to give permission for research on embryos not older than 14 days. The legislation on embryo research is complemented by the South African Medical Research Council's Ethics Guidelines, which advise against the creation of embryos for the sole purpose of research. In the case of Christian Lawyers Association of South Africa & others v Minister of Health & others, [13] the court ruled that the Bill of Rights is not applicable to the unborn. It has therefore been argued on constitutional grounds (the right to human dignity, and the right to freedom of scientific research) that the above limitations on embryo research are overly inhibitive of the autonomy of scientists, and hence unconstitutional. [14] China prohibits human reproductive cloning but allows the creation of human embryos for research and therapeutic purposes. [1] India banned reproductive cloning in 2004, while permitting therapeutic cloning. [1] In 2004, Japan's Council for Science and Technology Policy voted to allow scientists to conduct stem cell research for therapeutic purposes, though formal guidelines had yet to be released. [1] In December 2012, the Japanese Prime Minister, Shinzō Abe, announced an investment in regenerative medicine of ¥110 billion (US$1 billion) over the following decade. [15] The South Korean government promotes therapeutic cloning, but forbids reproductive cloning. [1] The Philippines prohibits the use of human embryonic and aborted human fetal stem cells and their derivatives for human treatment and research. In 1999, Israel passed legislation banning reproductive, but not therapeutic, cloning. [1][7] Saudi Arabian religious officials have issued a decree that sanctions the use of embryos for therapeutic and research purposes. [1] According to the Royan Institute for Reproductive Biomedicine, Iran has some of the most liberal laws on stem cell research and cloning. [16][17] Laws and regulations in Jordan allow stem-cell research. [18]
A center for stem cell research acquired a license to begin operating in April 2017 at the University of Jordan. [19] Brazil has passed legislation to permit stem cell research using excess in vitro fertilized embryos that have been frozen for at least three years. [1] In the United States, federal law places restrictions on funding and use of hES cells through amendments to the budget bill. [20] In 2001, George W. Bush implemented a policy limiting the number of stem cell lines that could be used for research. [5] Several state laws concerning stem cells were passed in the mid-2000s. New Jersey's 2004 S1909/A2840 specifically permitted human cloning for the purpose of developing and harvesting human stem cells, and Missouri's 2006 Amendment Two legalized certain forms of embryonic stem cell research in the state. On the other hand, Arkansas, Indiana, Louisiana, Michigan, North Dakota and South Dakota passed laws to prohibit the creation or destruction of human embryos for medical research. [20] During Bush's second term, in July 2006, he used his first presidential veto on the Stem Cell Research Enhancement Act. The Stem Cell Research Enhancement Act was the name of two similar bills, both of which were vetoed by President George W. Bush and were not enacted into law. New Jersey congressman Chris Smith wrote the Stem Cell Therapeutic and Research Act of 2005, which made some narrow exceptions and was signed into law by President Bush. In November 2004, California voters approved Proposition 71, creating a US$3 billion state taxpayer-funded institute for stem cell research, the California Institute for Regenerative Medicine, which aims to provide $300 million a year. In 2014, United States v. Regenerative Sciences, LLC upheld the FDA's regulation of stem cell therapies. Barack Obama removed the restriction on federal funding signed by Bush in 2001, which had only allowed funding for the 21 cell lines already created. However, the Dickey Amendment to the budget, in the Omnibus Appropriations Act of 2009, still bans federal funding for creating new cell lines; in other words, the federal government will now fund research which uses the hundreds of additional lines created by public and private funds, but not the derivation of new lines. [21] In March 2002, the Canadian Institutes of Health Research announced the first ever guidelines for human pluripotent stem cell research in Canada. The federal granting agencies – CIHR, the Natural Sciences and Engineering Research Council, and the Social Sciences and Humanities Research Council of Canada – teamed up and agreed that no research with human pluripotent stem cells would be funded without review and approval from the Stem Cell Oversight Committee (SCOC). [22] In March 2004, the Canadian parliament enacted the Assisted Human Reproduction Act (AHRA), modeled on the United Kingdom's Human Fertilisation and Embryology Act 1990. Highlights of the act include prohibitions against the creation of embryos for research purposes and the criminalization of commercial transactions in human reproductive tissues. [23] In 2005, Canada enacted a law permitting research on discarded embryos from in vitro fertilization procedures, while prohibiting the creation of human embryos for research. [1] On June 30, 2010, updated Guidelines for Human Pluripotent Stem Cell Research were issued, and Canada maintains a National Embryonic Stem Cell Registry. Australia is partially supportive (prohibiting reproductive cloning yet allowing research on embryonic stem cells that are derived from the process of IVF).
New Zealand, however, restricts stem cell research. [24]
https://en.wikipedia.org/wiki/Stem_cell_laws
The laws and policies regarding stem cell research in the People's Republic of China are relatively relaxed in comparison to those of other nations, owing to traditional and cultural views that differ from those of the West. China has one of the most unrestrictive embryonic stem cell research policies in the world. In recent years, seeing the research opportunities that China's lax regulations provide, many expatriate Chinese scientists have returned from the West to establish stem cell research centers and laboratories in China. [1] As a result of the increased interest in this field of research, in 2003 the People's Republic of China's Ministry of Science and Technology and Ministry of Health issued official ethical guidelines for human embryonic stem cell research in its territories. The guidelines strictly forbid any research aimed at human reproductive cloning and require that the embryos used for stem cell research come only from a limited set of approved sources. The American scientific journals Science and Nature have both reported in recent years that China's stem cell programs hold potential, and in 2004 a delegation from Britain's Department of Trade and Industry concluded more emphatically that Chinese research in the field was already world-class. [3] Funding for stem cell research by the Chinese government is extremely limited compared to Western nations, with the Chinese Ministry of Science and Technology planning to devote between US$33 million and US$132 million to stem cell research over the next five years. By contrast, the state of California alone has earmarked US$3 billion to fund stem cell research at California institutions during the next decade. However, it is simply cheaper to produce goods in China than in nearly any other country, and in sophisticated sectors such as medical research, this cost advantage is likely to be retained for quite some time. On January 10, 2012, it was announced that China would halt new applications for clinical trials of stem-cell products until July 1 as part of a year-long campaign to regulate the development of the industry. Deng Haihua, the Ministry of Health's spokesman, said "Trials that haven't been approved should be stopped." Perhaps more importantly, the cultural and national attitude toward stem cell research differs greatly between China and the West. Most Chinese citizens do not view the embryo as containing any inherent moral value. [3] According to the accepted Confucian view, a person begins with birth; a person is an entity that has a body or shape and psyche, and has rational, emotional and social-relational capacity for a lifetime of learning and innovation. Therefore, to the Chinese, a human embryo, lacking the characteristics of a person, cannot be equated morally to a person or a personal life. [4] Stem cell research in China is thus unlikely ever to face the intense Christian moral objections found in the West, especially in the US, one of the largest Christian countries. China's distinctive attitude toward the embryo, combined with its lax regulatory system, will potentially allow its researchers to proceed unhindered in pursuing laboratory science and medical applications in stem cell therapy development. [3]
https://en.wikipedia.org/wiki/Stem_cell_laws_and_policy_in_China
Iran's flexible approach towards stem-cell research has been attributed to three factors: the Shia tradition is flexible enough to allow for ESC science; the approval of ESC research was made easier by permissive laws governing other areas of biomedicine, such as new assisted reproductive technologies; and Iran's lack of a public discussion of bioscience affects how its ESC research policy is seen. [1] In 2002, a fatwa was issued by the Supreme Leader of Iran regarding the permissibility of the "destruction of residual embryos from the in vitro fertilization cycle for the purpose of obtaining stem cells for research purposes", serving as accreditation for the country's ESC scientific community. Following this positive fatwa, the stem cell department of the Royan Institute in Tehran was established in the same year to establish ESC lines and to develop techniques to differentiate these lineages into various mature cell types, including cardiomyocytes, B cells, and neurons. [2] In the case of Iran, the introduction of the Islamic system appears to have forced religious scholars to assume an unprecedented role of responsibility and engagement in social planning and public health. Large-scale crises may partially explain why religious scholars invoked Maslahat and Istihsan in their decisions on medicine and health problems, rather than looking at those problems in isolation or in the theoretical sense, as happened in the past. [3] The financial burden of devastating diseases is also at the heart of hESC research decisions in Iran. This may have given Shia scholars a push to reconsider the degenerative and public health implications of terminal disorders, and of the economic hardships caused by serious and long-lasting illnesses, for individuals, families, and society. The eight-year Iran–Iraq war left the country with a large disabled community, due in part to spinal cord injuries, which has been an intense motivation for Iran to start many cell therapy research projects. [3] Even in developing countries such as Iran, cell therapy and regenerative medicine are cost-effective solutions for the growing number of patients with chronic diseases, including diabetes, heart disease, hepatitis, and blood diseases such as thalassemia, which are relatively common. [4] Although Iran has a liberal domestic regulatory environment and its scientists are well funded, the country cannot import much of the scientific equipment and material that most stem cell scientists use on a daily basis. This is largely due to trade sanctions imposed on Iran by other countries, including the United States and the European Community, which ban the export of certain scientific equipment to Iran and require special export permits for other items. [5]
https://en.wikipedia.org/wiki/Stem_cell_laws_and_policy_in_Iran
Stem cell laws and policy in the United States have had a complicated legal and political history. Stem cells are cells found in all multicellular organisms. They were isolated in mice in 1981, and in humans in 1998. [1] In humans there are many types of stem cells, each with varying levels of potency. Potency is a measure of a cell's differentiation potential, or the number of other cell types that can be made from that stem cell. Embryonic stem cells are pluripotent stem cells derived from the inner cell mass of the blastocyst. These stem cells can differentiate into all other cells in the human body and are the subject of much scientific research. However, since they must be derived from early human embryos, their production and use in research has been a hotly debated topic. Stem cell treatments are a type of cell therapy that introduce new cells into adult bodies for possible treatment of cancer, diabetes, neurological disorders and other medical conditions. Stem cells have been used to repair tissue damaged by disease or age, [2] and cloning also might be done with stem cells. Pluripotent stem cells can also be derived via somatic cell nuclear transfer, a laboratory technique in which a cloned embryo is created from a donor nucleus. Somatic cell nuclear transfer is also tightly regulated in various countries. Until recently, the principal source of human embryonic stem cells has been donated embryos from fertility clinics. In January 2007, researchers at Wake Forest University reported that "stem cells drawn from amniotic fluid donated by pregnant women hold much of the same promise as embryonic stem cells." [1] In 2000, the NIH, under the administration of President Bill Clinton, issued "guidelines that allow federal funding of embryonic stem-cell research." [1] In 1973, Roe v. Wade legalized abortion in the United States. Five years later, the first successful human in vitro fertilization resulted in the birth of Louise Brown in England. These developments prompted the federal government to create regulations barring the use of federal funds for research that experimented on human embryos. [3] In 1995, the NIH Human Embryo Research Panel advised the administration of President Bill Clinton to permit federal funding for research on embryos left over from in vitro fertility treatments and also recommended federal funding of research on embryos specifically created for experimentation. In response to the panel's recommendations, the Clinton administration, citing moral and ethical concerns, declined to fund research on embryos created solely for research purposes, [4] but did agree to fund research on left-over embryos created by in vitro fertility treatments. At this point, Congress intervened and passed the Dickey–Wicker Amendment in 1995 (the final bill, which included the Dickey Amendment, was signed into law by Bill Clinton), which prohibited any federal funding for the Department of Health and Human Services from being used for research that resulted in the destruction of an embryo, regardless of the source of that embryo. In 1998, privately funded research led to the breakthrough discovery of human embryonic stem cells (hESC). No federal law has ever banned stem cell research in the United States; such laws have only placed restrictions on funding and use, under Congress's power to spend. [5]
Bush requested a review of the NIH's guidelines, and after a policy discussion within his circle of supporters, implemented a policy in August of that year to limit the number of embryonic stem cell lines that could be used for research. [ 1 ] (While he claimed that 78 lines would qualify for federal funding, only 19 lines were actually available. [ 1 ] ) In April 2004, 206 members of Congress , including many moderate Republicans , signed a letter urging President Bush to expand federal funding of embryonic stem cell research beyond what Bush had already supported. In May 2005, the House of Representatives voted 238–194 to loosen the limitations on federally funded embryonic stem-cell research — by allowing government-funded research on surplus frozen embryos from in vitro fertilization clinics with the permission of donors — despite Bush's promise to veto the bill if passed. [ 6 ] On July 29, 2005, Senate Majority Leader William H. Frist (R-TN) announced that he too favored loosening restrictions on federal funding of embryonic stem cell research. [ 7 ] On July 18, 2006, the Senate passed three different bills concerning stem cell research. The first bill, passed 63–37, would have made it legal for the federal government to spend federal money on embryonic stem cell research that uses embryos left over from in vitro fertilization procedures. On July 19, 2006, President Bush vetoed this bill. The second bill would have made it illegal to create, grow, and abort fetuses for research purposes. The third bill would have encouraged research to isolate pluripotent, i.e., embryonic-like, stem cells without the destruction of human embryos. The National Institutes of Health has hundreds of funding opportunities for researchers interested in hESC. [ 8 ] In 2005 the NIH funded $607 million worth of stem cell research, of which $39 million was specifically used for hESC. [ 9 ] During Bush's second term , in July 2006, he used his first presidential veto on the Stem Cell Research Enhancement Act . The Stem Cell Research Enhancement Act was the name of two similar bills; both were vetoed by President George W. Bush and neither was enacted into law. New Jersey congressman Chris Smith wrote the Stem Cell Therapeutic and Research Act of 2005 , which was signed into law by President Bush. It provided $265 million for adult stem cell therapy, umbilical cord blood and bone marrow treatment, and authorized $79 million for the collection of cord blood stem cells. By executive order on March 9, 2009, President Barack Obama removed certain restrictions on federal funding for research involving new lines of human embryonic stem cells. [ 10 ] Prior to President Obama's executive order, federal funding was limited to non-embryonic stem cell research and embryonic stem cell research based upon embryonic stem cell lines in existence prior to August 9, 2001. Federal funding originating from current appropriations to the Department of Health and Human Services (including the National Institutes of Health ) under the Omnibus Appropriations Act of 2009 remains prohibited under the Dickey–Wicker Amendment for (1) the creation of a human embryo for research purposes; or (2) research in which a human embryo or embryos are destroyed, discarded, or knowingly subjected to risk of injury or death greater than that allowed for research on fetuses in utero. 
In a speech before signing the executive order, President Obama noted the following: Today, with the Executive Order I am about to sign, we will bring the change that so many scientists and researchers; doctors and innovators; patients and loved ones have hoped for, and fought for, these past eight years: we will lift the ban on federal funding for promising embryonic stem cell research. We will vigorously support scientists who pursue this research. And we will aim for America to lead the world in the discoveries it one day may yield. [ 11 ] In 2011, a United States District Court "threw out a lawsuit that challenged the use of federal funds for embryonic stem cell research." [ 12 ] The decision was a case on remand from the United States Court of Appeals for the District of Columbia Circuit . [ 12 ] [ 13 ] S1909/A2840 is a bill that was passed by the New Jersey legislature in December 2003, and signed into law by Governor James McGreevey on January 4, 2004, that permits human cloning for the purpose of developing and harvesting human stem cells. Specifically, it legalizes the process of cloning a human embryo, and implanting the clone into a womb, provided that the clone is then aborted and used for medical research. Missouri Constitutional Amendment 2 (2006) ( Missouri Amendment Two ) was a 2006 law that legalized certain forms of embryonic stem cell research in the state. California voters in November 2004 approved Proposition 71 , creating a US$ 3 billion state taxpayer-funded institute for stem cell research, the California Institute for Regenerative Medicine , intended to provide $300 million a year. However, as of June 6, 2006, there were delays in the implementation of the California program, and the delays were expected to continue for the foreseeable future. On July 21, 2006, Governor Arnold Schwarzenegger (R-Calif.) authorized $150 million in loans to the Institute in an attempt to jump-start the process of funding research. [ 14 ] Several states, in what was initially believed to be a national migration of biotech researchers to California, [ 15 ] have shown interest in providing their own funding support for embryonic and adult stem cell research. These states include Connecticut , [ 16 ] Florida , Illinois , Massachusetts , [ 17 ] Missouri , New Hampshire , New York , Pennsylvania , Texas , Washington , and Wisconsin . Other states have (or have shown interest in) additional restrictions or even complete bans on embryonic stem cell research. These states include Arkansas , Iowa , Kansas , Louisiana , Nebraska , North Dakota , South Dakota , and Virginia . [ 18 ] Arkansas , Indiana , Louisiana , Michigan (subsequently reversed by constitutional amendment), North Dakota and South Dakota have passed laws to "prohibit the creation or destruction of human embryos for medical research." [ 5 ] Policy stances on stem cell research of various political leaders in the United States have not always been predictable. As a rule, most Democratic Party leaders, high-profile supporters, and rank-and-file members have pushed for laws and policies almost exclusively favoring embryonic stem cell research. [ 20 ] President Bill Clinton supported the NIH's guidelines in 2000. [ 1 ] Both the major candidates in 2008 had supported the 2005 and 2007 bills, in particular Hillary Rodham Clinton , Bill Clinton's First Lady and then U.S. Senator from New York , [ 21 ] and Barack Obama , then U.S. 
Senator for Illinois, who promised to sign the EFCA into law, and was a cosponsor of such bills. [ 22 ] Massachusetts governor Deval Patrick is also a proponent of embryonic stem cell research. There have been some Democrats who have asked for boundaries to be placed on human embryo use. For example, Carolyn McCarthy has publicly stated she only supports using human embryos "that would be discarded". [ 23 ] [ 24 ] Republicans largely oppose embryonic stem cell research in favor of adult stem cell research, which has already produced treatments for conditions such as cancer and paralysis, but there are some high-profile exceptions who offer qualified support for some embryonic stem cell research. [ 5 ] Prominent Republican leaders against embryonic stem cell research include Sarah Palin , Jim Talent , Rick Santorum , and Sam Brownback . [ 5 ] In July 2001, Sen. Bill Frist (R-TN) and Sen. Orrin Hatch (R-UT), a vocal abortion opponent , called for limited federal funding for embryonic stem-cell research, while House Speaker Dennis Hastert (R-IL) and other Republican House leaders came out in opposition to federal funding for embryonic stem cell research. 2008 GOP presidential candidate John McCain was a member of The Republican Main Street Partnership and supported embryonic stem cell research, [ 5 ] despite his earlier opposition. [ 25 ] In July 2008 he said, "At the moment I support stem cell research [because of] the potential it has for curing some of the most terrible diseases that afflict mankind." [ 26 ] In 2007, in what he described as "a very agonizing and tough decision," he voted to allow research using human embryos left over from fertility treatments. [ 27 ] Former First Lady Nancy Reagan and Senator Orrin Hatch also support stem cell research, after first opposing it. [ 5 ] Former Senator Frist also supports stem cell research, despite having initially supported past restrictions on embryonic stem cell research. 2008 vice-presidential candidate Palin opposed embryonic stem cell research, saying that because it causes the destruction of life it is inconsistent with her pro-life position. [ 28 ] A few moderates or Libertarians support such research with limits. Lincoln Chafee supported federal funding for embryonic stem cell research. Ron Paul , a Republican congressman, physician , and Libertarian and Independent candidate for President, has sponsored related legislation and has held complex positions on the issue. Several studies have examined the impact of changing funding policies on scientific research in the US and the development of new cell therapies by industry. For example, studies have highlighted an immediate and sizable drop in research productivity of US-based researchers, as compared to researchers based elsewhere, during the years after the enactment in August 2001 of federal funding restrictions on research involving new embryonic stem cell lines. [ 29 ] [ 30 ] [ 31 ] [ 32 ] US knowledge production in the human embryonic stem cell field fell 35 to 40 percent below anticipated levels, and, measured in terms of forward citations to core research publications in the field, US-based follow-on work in the human embryonic stem cell research field declined by nearly 59 percent relative to non-US-based research over the period 2001–2003. 
[ 29 ] During this period US-based firms were also less likely to launch new therapeutic product development projects in the cell therapy field than firms outside the US, and were more likely to discontinue clinical trials for new cell therapies that were already under way. [ 33 ] All these effects were reversed as the funding environment for stem cell research in the US became more favorable during the second half of the 2000s. In 2005, the United States National Academies released its Guidelines for Human Embryonic Stem Cell Research . These Guidelines were prepared to enhance the integrity of human embryonic stem cell research, in the public's perception and in actuality, by encouraging responsible practices in the conduct of that research. The National Academies has subsequently named the Human Embryonic Stem Cell Research Advisory Committee to keep the Guidelines up to date. [ 34 ] The guidelines rest on two primary principles: first, that hESC research has the potential to improve our understanding of human health and discover new ways to treat illness; and second, that individuals donating embryos should do so freely, with voluntary and informed consent. The guidelines implement Executive Order 13505 and apply to hESC research receiving funds from the NIH. The guidelines detail safeguards to protect donating individuals by acquiring informed consent and protecting their identity. In addition, the guidelines contain multiple sections applying to embryos donated in the US and abroad, both before and after the effective date of the guidelines. [ 35 ] The NIH guidelines define which hESC research is eligible to receive NIH funding through a series of regulations to which applicants for funding must adhere. Applicants proposing research may use stem cell lines that are posted on the NIH registry, or may submit an assurance of compliance with section II of the guidelines. Section II is applicable to stem cells derived from human embryos. [ 35 ] For the purposes of section II of the NIH guidelines, the following requirements must be met. First, the hESCs should have been derived from embryos created using an in vitro fertilization procedure for reproductive purposes, and no longer needed for this purpose. Second, the donors who sought reproductive treatment must have given written consent for the embryos to be used for research purposes. Third, all written consent forms and other documentation must be provided. [ 35 ] Documentation must be provided regarding the following: all options available at the healthcare facility regarding the embryos in question were explained to the individual who sought reproductive treatment; no payments of any kind may be offered for the donated embryos; and policies and procedures must be in place at the facility where the embryos were donated to ensure that neither donation nor refusal to donate affects the quality of care received by the patient. [ 35 ] There must also be a clear distinction between the donor's decision to create embryos for reproductive purposes and the decision to donate embryos for research. This is ensured through a number of regulations, as follows. First, the decision to create embryos for reproductive purposes must have been made without the influence of researchers proposing to use the embryos to derive hESCs for research purposes. Consent for the donation of embryos should have been given at the time of donation. 
Finally, donors should have been informed that they have the right to withdraw consent at any time until derivation of stem cells from the embryo, or until the identity of the donor can no longer be linked to the embryo. [ 35 ] When consent is sought, the donor must be informed of what will become of their donation. The donor must be informed that the embryonic stem cells would be derived from the embryos for research purposes. The donor must also be informed of the procedures that the embryo would undergo in the derivation process, and that the stem cell lines derived from the embryo may be kept for many years. In addition, donors must be informed that the donation is made without direction regarding the intended use of the derived stem cells, and that the research is not intended to provide direct medical benefit to the donor. The donor is also to be informed that there may be commercial potential resulting from the research performed, and that the donor is not to benefit from commercial development as a result of the donation. The donor is also to be notified if information that could disclose their identity will be available to the researchers. [ 35 ] Applicants seeking to use stem cell lines established before the effective date of the guidelines may use lines published on the NIH registry, or establish eligibility by complying with the requirements listed above. Alternately, researchers may submit materials to a working group of the Advisory Committee to the Director. The working group reviews submitted materials and submits recommendations to the Advisory Committee, which in turn makes recommendations to the NIH director. A final decision regarding eligibility for funding is then made by the NIH director. [ 35 ] The materials submitted to the working group must demonstrate that the stem cells were derived from embryos created for reproductive purposes and no longer needed, and that the stem cells were donated by donors who had granted voluntary written consent. [ 35 ] Research ineligible for NIH funding under the guidelines includes research in which hESCs are introduced into non-human primate blastocysts. Research on the breeding of animals in which hESCs may contribute to the germ line is similarly ineligible. NIH funding of the derivation of stem cells from human embryos is prohibited by the annual appropriations ban on the funding of human embryo research. Research using hESCs derived from other sources is also not eligible for funding. [ 35 ]
https://en.wikipedia.org/wiki/Stem_cell_laws_and_policy_in_the_United_States
Stem cell markers are genes and their protein products used by scientists to isolate and identify stem cells . Stem cells can also be identified by functional assays. Below is a list of genes/protein products that can be used to identify various types of stem cells, together with functional assays that serve the same purpose. [ 1 ] The initial version of the list below was obtained by mining the PubMed database as described in [ 2 ] .
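The article does not describe the mining procedure itself; the following is a minimal sketch, assuming a simple co-mention heuristic, of how such a PubMed search could be scripted with Biopython's Entrez module. The e-mail address, query terms, and candidate gene symbols are illustrative assumptions, not details from the source.

```python
# Sketch: counting PubMed records that co-mention a candidate marker gene
# and a stem cell type. Queries and gene symbols are assumptions.
from Bio import Entrez

Entrez.email = "you@example.com"   # NCBI asks for a contact address

def comention_count(marker, cell_type):
    """Number of PubMed records mentioning both the marker and cell type."""
    query = f'"{marker}"[Title/Abstract] AND "{cell_type}"[Title/Abstract]'
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

candidates = ["CD34", "LGR5", "SOX2", "nestin"]
for gene in candidates:
    n = comention_count(gene, "hematopoietic stem cell")
    print(f"{gene}: {n} co-mentioning records")
# High co-mention counts flag genes worth manual review as candidate
# markers; they indicate association, not proof of marker specificity.
```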
https://en.wikipedia.org/wiki/Stem_cell_marker
Stem cell research policy varies significantly throughout the world. There are overlapping jurisdictions of international organizations, nations, and states or provinces. Some government policies determine what is allowed versus prohibited, whereas others outline what research can be publicly financed; all practices not prohibited are implicitly permitted. Some organizations have issued recommended guidelines for how stem cell research is to be conducted. The United Nations adopted a declaration on human cloning that can be interpreted as calling on member states to prohibit somatic cell nuclear transfer , or therapeutic cloning . In 2005, in a divided vote, "Member States were called on to adopt all measures necessary to prohibit all forms of human cloning in as much as they are incompatible with human dignity and the protection of human life." The World Health Organization has opposed a ban on cloning techniques in stem cell research. The Council of Europe 's Convention on Human Rights and Biomedicine appears to ban the creation of embryos solely for research purposes. [ 1 ] It has been signed by 31 countries and ratified by 19: [ 2 ] Bulgaria , Croatia , Cyprus , the Czech Republic , Denmark , Estonia , Georgia , Greece , Hungary , Iceland , Lithuania , Moldova , Portugal , Romania , San Marino , Slovakia , Slovenia , Spain , and Turkey . [ 3 ] Researchers , ethicists and assorted spokespersons from 14 different countries have published a set of legal and ethical guidelines relating to stem cell research, in an effort to address conflicting international laws in this area. [ 4 ] [ 5 ] The ‘Hinxton Group’ met for the first time in Cambridge and published a consensus statement calling for a ‘flexible’ regulatory framework that can accommodate both rapid scientific advance and the diversity of international approaches towards stem cell science. [ 4 ] It also recommends that, in countries which oppose embryonic stem cell research, scientists should be free to pursue their research elsewhere. [ 4 ] In light of the controversy surrounding Hwang Woo-Suk , the Hinxton Group has additionally recommended a number of measures intended to prevent fraud in stem cell research. The group has requested that all authors of embryonic stem cell papers submit a statement of authenticity for any new cell lines and that the source of stem cells be clearly specified. [ 4 ] On the ethical issues surrounding embryonic stem cell research, the group has additionally recommended that an international database be created, containing guidelines for ethical practice, research protocols, consent forms, and the information provided to donors. [ 4 ] However, the potential for an international consensus on these matters seems remote, given the complexity and diversity of regulatory frameworks in this controversial area of science, both within and between nations. The International Society for Stem Cell Research is developing guidelines for the conduct of stem cell research. Embryonic stem cell research has divided the international community. In the European Union , stem cell research using the human embryo is permitted in Ireland , Sweden , Finland , Belgium , Greece , Britain , Denmark and the Netherlands ; however, it is illegal in Germany , Austria , Italy , and Portugal . The issue has similarly divided the United States , with several states enforcing a complete ban and others giving financial support. 
[ 6 ] Elsewhere, Japan , India , Iran , Israel , South Korea , and China are supportive, and Australia is partially supportive (prohibiting reproductive cloning but allowing research on embryonic stem cells derived from the process of IVF ); however, New Zealand , most of Africa (excepting South Africa ) and most of South America (excepting Brazil ) are restrictive.
https://en.wikipedia.org/wiki/Stem_cell_research_policy
The stem cell secretome (also referred to as the stromal cell secretome ) is a collective term for the paracrine soluble factors produced by stem cells and utilized for their inter-cell communication . In addition to inter-cell communication, the paracrine factors are also responsible for tissue development, homeostasis and (re-)generation. The stem cell secretome consists of extracellular vesicles , [ 1 ] specifically exosomes , microvesicles , membrane particles , peptides and small proteins ( cytokines ). The paracrine activity of stem cells, i.e. the stem cell secretome, has been found to be the predominant mechanism by which stem cell-based therapies mediate their effects in degenerative, auto-immune and/or inflammatory diseases. [ 2 ] Though stem cells are not the only cells whose secretome influences their cellular environment, their secretome currently appears to be the most relevant for therapeutic use. [ 3 ] Extracellular vesicles are small particles, bounded by a lipid bilayer, that are normally discharged by cells. Unlike cells, extracellular vesicles cannot replicate. Packed within an extracellular vesicle are components of the stem cell secretome: organelles, mRNA, miRNA, and proteins. [ 4 ] Exosomes are extracellular vesicles discharged into biological fluids, such as the cerebrospinal fluid, which can be used for treatment. Most importantly, exosomes can be found between the cells of eukaryotic organisms, in what is known as the tissue matrix. [ 5 ] Stem cell therapies , here referring to therapies employing non- hematopoietic , mesenchymal stem cells , have a wide range of potential therapeutic benefits for different diseases, most of which are currently being investigated in clinical trials . [ 6 ] Stem cell therapies may serve as regenerative medicine for patients diagnosed with diseases affecting the midbrain, stroke and heart disease, joint disease, and injuries to the spinal cord. [ 7 ] Therapeutic properties of stem cells are mainly attributed to their secretome, which has been shown to modulate several biological processes in vitro and in vivo, such as cell proliferation, survival, differentiation, immunomodulation, anti-apoptosis, angiogenesis and stimulation of adjacent tissue cells. This is contrary to the historic hypothesis that stem cell migration and transdifferentiation is the primary mechanism of effect of stem cell injection therapies. [ 2 ] The most commonly used type of stem cells for therapeutic use are human ( autologous ) mesenchymal stem cells , hMSCs. The hMSC secretome is one of the most widely researched secretome profiles. The secretomes of other cell types, for example dendritic cells, are also being investigated for therapeutic use. [ 8 ] Studies of hMSCs aimed at examining their regenerative capacities for putative treatment of neurodegenerative diseases have demonstrated that hMSCs are able to secrete important neuroregulatory molecules, such as brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF), insulin-like growth factor 1 (IGF-1), hepatocyte growth factor (HGF), vascular endothelial growth factor (VEGF), transforming growth factor beta (TGF-β), glial-derived neurotrophic factor (GDNF), fibroblast growth factor 2 (FGF-2), stem cell factor (SCF), granulocyte colony-stimulating factor (G-CSF) and stromal cell-derived factor 1 (SDF-1), both in vitro and in vivo. 
All of these molecules have been shown to have beneficial effects towards the treatment of neurodegenerative diseases. [ 9 ] With regard to orthopaedic conditions such as arthritis , the paracrine factors of stem cell-based therapies appear to be responsible for the majority of regenerative effects. Extracellular vesicles have a prominent role in the development of joints and in the regulation of intra-articular homeostasis. In the case of arthritis, this homeostasis is disrupted for various reasons; hypothetically, one reason may be related to the accumulation of senescent cells and their associated secretory phenotype . The secretome of (mesenchymal) stem cells has positive effects on reestablishing intra-articular homeostasis and stimulating regeneration through the different growth factors, cytokines and miRNAs contained within the extracellular vesicles of the secretome. [ 10 ] As a consequence, efforts have been made to synthesize specific stem cell secretomes efficiently in vitro. In general, stem cells become activated and produce higher amounts of secretome in response to external stress (for example, by damaged tissues in vivo). As such, the main preconditioning mechanisms to induce secretome (extracellular vesicle) production are stress-inducing methods, most prominently anoxia and hypoxia, but also pharmacological, physical or cytokine-related methods that force the cells to produce secretome in vitro. This approach is also known as cell-free stem cell therapy. It has been hypothesized that future therapies aiming at generating a (specific) secretome with a defined profile and optimized concentrations of paracrine factors will yield a better, more reliable and controlled outcome compared with previous approaches that rely solely on injecting (mesenchymal) stem cells into the body in the hope that their paracrine (or transdifferentiation) capacity will have beneficial effects. [ 11 ] However, the controlled therapeutic use of the stem cell secretome demands high-quality standardization of isolation and analysis techniques to yield reproducible secretome preparations. Various pharmaceutical companies and clinical institutions have started to develop protocols for the in vitro extraction of specific secretome profiles from autologous mesenchymal stem cells, as well as for the clinical use of secretome as a novel therapeutic for numerous diseases, either as a private-pay procedure or within clinical trials. [ 12 ] [ 13 ] Even though these treatments have been in compliance with the regulatory framework in Europe under certain conditions as of May 2017, there is as yet no evidence of proven efficacy in human clinical trials beyond isolated case reports. Therefore, at the moment, the clinical use of the stem cell secretome is experimental and based mainly on in vitro and animal data. [ 14 ] One potential application of the autologous stem cell secretome has been in veterinary medicine, as commercialized by a Russian company, T-Helper Cell Technologies, in 2017 under the name Reparin-Helper .
https://en.wikipedia.org/wiki/Stem_cell_secretome
Stem cell therapy for macular degeneration is an emerging treatment approach aimed at restoring vision in individuals suffering from various forms of macular degeneration, particularly age-related macular degeneration (AMD). [ 1 ] This therapy involves the transplantation of stem cells into the retina to replace damaged or lost retinal pigment epithelium (RPE) and photoreceptor cells , which are critical for central vision. Clinical trials have shown promise in stabilizing or improving visual function, but results so far remain limited. [ 2 ] Age-related macular degeneration (AMD) is associated with abnormality of the retinal pigment epithelium (RPE). Since the RPE is incapable of regenerating on its own, stem cell-based therapy shows potential. [ 3 ] Most stem cell therapies involve the application of stem cell-derived retinal pigment epithelium (RPE) cells. Transplantation of RPE can take two forms: cell sheets or cell suspension. [ 4 ] Cell sheets involve the transplantation of a monolayer, formed by seeding biocompatible tissue-engineered materials with RPE cells. Cell suspension involves the transplantation of RPE cells and other components directly into the eye. [ 3 ] Treatments beyond RPE transplantation are also being developed, such as the transplantation of mesenchymal stem cells. [ 5 ] Studies have shown that implantation of human ESCs in animal models produced improvements in visual performance. These studies mention concerns over complications of transplants, stability, and rejection. [ 6 ] [ 7 ] There have been several clinical trials in humans, however, in which no adverse immune response or abnormal tumorigenic proliferation has been observed. [ 8 ] [ 9 ] A current direction for ESC therapy is the derivation and transplantation of retinal pigment epithelial cells from ESCs for dry age-related macular degeneration. [ 10 ] ASP7317 is one human ESC treatment in development for geographic atrophy caused by dry AMD. It is currently in phase Ib clinical testing, and was recruiting participants for dose-ranging trials as of January 2025. [ 11 ] In response to the complications and ethical concerns of ESC-based therapy, iPSC-derived treatments could function as an alternative. [ 6 ] Studies have indicated the ability of iPSCs to differentiate into various retinal cell types such as photoreceptors and RPE cells. [ 7 ] While transplanting iPSCs and iPSC-derived treatments into mouse retinas has resulted in improved retinal function, the practicality of such treatment is still unclear. [ 6 ] Studies in humans have indicated that iPSC-derived RPE transplantations can be not only safe but successful. [ 12 ] However, there are concerns regarding immune rejection of transplants and oncogenic mutations. [ 6 ] In one procedure, the neovascular membrane RPE complex was removed before subsequent transplantation of iPSC-derived RPE cells under the retina. One year after surgery, consistent results were still observed despite the patient having cystic macular edema . [ 7 ] Derived from bone marrow or adipose tissue, MSCs have shown potential in retinal degenerative cell therapy. [ 6 ] This is in part due to the immunoregulatory properties of MSCs, as well as their ability to promote the proliferation of ocular cells. 
[ 13 ] While studies using intravitreal injections of MSCs have displayed a positive effect on the retina and visual function, an additional study utilizing intravitreal injections of bone marrow mononuclear stem cells showed no improvement in retinal function. [ 6 ] Clinical trials have yielded a variety of results, with some improvements in retinal function being transient while others persisted. [ 13 ] As MSC therapies have increased, associated ocular complications have been reported. However, a study utilizing a suprachoroidal approach, in which MSCs were transplanted under a deep scleral flap within the suprachoroidal space, demonstrated improvements in visual function six months after the treatment. This method displayed reduced risks of complications. [ 8 ] The first fetal retinal transplant into the anterior chamber of animal eyes was reported in 1959. In 1980, experiments involving cell cultures of retinal pigment epithelium (RPE) began. Human RPE cells grown in culture were subsequently transplanted into animal eyes, initially using open techniques and later through closed-cavity vitrectomy methods. [ 1 ] In 1991, Gholam Peyman attempted to transplant RPE in humans, but the success rate was limited. Later efforts focused on allogenic fetal RPE cell transplantation, which faced significant challenges due to immune rejection. It was observed that rejection rates were lower in cases of dry age-related macular degeneration (AMD) than in the wet form of the disease . Autologous RPE transplantation became more common, using two main techniques: RPE suspension and full-thickness RPE-choroid transplantation. Clinical outcomes from autologous RPE-choroid transplantation, where tissue from the eye's periphery is transplanted to a diseased area, have shown promise. [ 14 ] Since 2003, researchers have successfully transplanted corneal stem cells into damaged eyes to restore vision. Sheets of retinal cells used in these procedures were initially harvested from aborted fetuses, which raised ethical concerns for some. These retinal sheets, when transplanted over damaged corneas , stimulated repair and eventually restored vision. [ 15 ] [ 16 ] In June 2005, a team led by Sheraz Daya at Queen Victoria Hospital in Sussex , England, restored sight in forty patients using a similar technique with adult stem cells sourced from the patient, a relative, or a cadaver . [ 17 ] In 2009, the first ESC-derived retinal pigment epithelium cell suspension transplantation for macular disease was performed by Ocata Therapeutics. [ 18 ] In 2010, the United States Food and Drug Administration approved phase I/II clinical trials for retinal pathology stem cell therapies in humans. [ 8 ] [ 7 ] In 2014, surgeons at the Riken Institute's Center for Developmental Biology reported the first transplantation of induced pluripotent stem cells (iPSCs) into a human patient. This clinical study involved creating a retinal sheet from iPSCs, developed by Shinya Yamanaka , which were reprogrammed from the patient's own mature cells. The retinal sheet was transplanted into a woman in her 70s suffering from age-related macular degeneration (AMD), a condition that blurs central vision and can lead to blindness. The use of iPSCs aimed to halt the progression of AMD. In March 2017, the team conducted the first successful transplant of retinal cells created from donor-derived iPSCs into a patient with advanced wet AMD . 
This surgery was made more efficient by using "super donor" cells, derived from individuals with specific white blood cell types that reduce the risk of immune rejection. Approximately 250,000 retinal pigment epithelial cells, generated from these donor-derived iPSCs, were transplanted into the patient's eye. [ 19 ] In April 2024, Eyecyte-RPE was approved for its first human trial. The researchers, working in India, aim to create a pluripotent stem cell treatment for mid- to late-stage geographic atrophy that will be more cost-efficient than existing cell and gene therapies. [ 11 ]
https://en.wikipedia.org/wiki/Stem_cell_therapy_for_macular_degeneration
In phylogenetics , the crown group or crown assemblage is a collection of species composed of the living representatives of the collection, the most recent common ancestor of the collection, and all descendants of the most recent common ancestor. It is thus a way of defining a clade , a group consisting of a species and all its extant or extinct descendants. For example, Neornithes (birds) can be defined as a crown group, which includes the most recent common ancestor of all modern birds, and all of its extant or extinct descendants. The concept was developed by Willi Hennig , the formulator of phylogenetic systematics , as a way of classifying living organisms relative to their extinct relatives in his "Die Stammesgeschichte der Insekten", [ 1 ] and the "crown" and "stem" group terminology was coined by R. P. S. Jefferies in 1979. [ 2 ] Though formulated in the 1970s, the term was not commonly used until its reintroduction in 2000 by Graham Budd and Sören Jensen. [ 3 ] It is not necessary for a species to have living descendants in order for it to be included in the crown group. Extinct side branches on the family tree that are descended from the most recent common ancestor of living members will still be part of a crown group. For example, if we consider the crown-birds (i.e. all extant birds and the rest of the family tree back to their most recent common ancestor), extinct side branches like the dodo or great auk are still descended from the most recent common ancestor of all living birds , so fall within the bird crown group. [ 4 ] One very simplified cladogram for birds shows Aves as comprising the extinct Archaeopteryx , other extinct groups, and Neornithes (modern birds, including some extinct members such as the dodo). [ 5 ] In this cladogram, the clade labelled "Neornithes" is the crown group of birds: it includes the most recent common ancestor of all living birds and its descendants, living or not. Although considered to be birds (i.e. members of the clade Aves), Archaeopteryx and other extinct groups are not included in the crown group, as they fall outside the Neornithes clade, being descended from an earlier ancestor. An alternative definition does not require any members of a crown group to be extant, only to have resulted from a "major cladogenesis event". [ 6 ] The first definition forms the basis of this article. Often, the crown group is given the designation "crown-", to separate it from the group as commonly defined. Both birds and mammals are traditionally defined by their traits, [ 7 ] [ 8 ] and contain fossil members that lived before the last common ancestors of the living groups or, like the mammal Haldanodon , [ 9 ] were not descended from that ancestor although they lived later. Crown-Aves and Crown-Mammalia therefore differ slightly in content from the common definitions of Aves and Mammalia. This has caused some confusion in the literature. [ 10 ] [ 11 ] The cladistic idea of strictly using the topology of the phylogenetic tree to define groups necessitates definitions other than crown groups to adequately describe commonly discussed fossil groups. Thus, a host of prefixes have been defined to describe various branches of the phylogenetic tree relative to extant organisms. [ 12 ] A pan-group or total group is the crown group and all organisms more closely related to it than to any other extant organisms. In a tree analogy, it is the crown group and all branches back to (but not including) the split with the closest branch to have living members. 
The Pan-Aves thus contain the living birds and all (fossil) organisms more closely related to birds than to crocodilians (their closest living relatives). The phylogenetic lineage leading back from Neornithes to the point where it merges with the crocodilian lineage, along with all side branches, constitutes the pan-birds. In addition to non-crown group primitive birds like Archaeopteryx , Hesperornis and Confuciusornis , therefore, pan-group birds would include all dinosaurs and pterosaurs as well as an assortment of non-crocodilian animals like Marasuchus . Pan-Mammalia consists of all mammals and their fossil ancestors back to the phylogenetic split from the remaining amniotes (the Sauropsida ). Pan-Mammalia is thus an alternative name for Synapsida . A stem group is a paraphyletic assemblage composed of the members of a pan-group or total group, above, minus the crown group itself (and therefore minus all living members of the pan-group). This leaves the primitive relatives of the crown group, back along the phylogenetic line to (but not including) the last common ancestor of the crown group and their closest living relatives. It follows from the definition that all members of a stem group are extinct. The "stem group" is the most used and most important of the concepts linked to crown groups, as it offers a means to reify and name paraphyletic assemblages of fossils that otherwise do not fit into systematics based on living organisms. While often attributed to Jefferies (1979), Willmann (2003) [ 13 ] traced the origin of the stem group concept to the Austrian systematist Othenio Abel (1914), [ 14 ] and it was discussed and diagrammed in English as early as 1933 by A. S. Romer . [ 15 ] Alternatively, the term "stem group" is sometimes used in a wider sense to cover any members of the traditional taxon falling outside the crown group. Permian synapsids like Dimetrodon or Anteosaurus are stem mammals in the wider sense but not in the narrower one. [ 16 ] Often, an extinct grouping is first identified as belonging together; later, it may be realized that other, extant groupings actually emerged within it, rendering it a stem grouping. Cladistically , the new groups should then be added to the group, as paraphyletic groupings are not natural. In any case, stem groupings with living descendants should not be viewed as a cohesive group; their tree should be further resolved to reveal the full bifurcating phylogeny. Stem birds perhaps constitute the most cited example of a stem group, as the phylogeny of this group is fairly well known. A cladogram based on Benton (2005) [ 8 ] illustrates the concept, with successive branches for Crocodilia, Pterosauria, Hadrosauridae, Stegosauria, Sauropoda, Tyrannosauridae and Archaeopteryx , leading to the crown clades Neognathae (including the extinct dodo ) and Paleognathae (including the extinct moa ). The crown group here is Neornithes , all modern bird lineages back to their last common ancestor. The closest living relatives of birds are crocodilians . If we follow the phylogenetic lineage leading to Neornithes to the left, the line itself and all side branches belong to the stem birds until the lineage merges with that of the crocodilians. In addition to non-crown group primitive birds like Archaeopteryx , Hesperornis and Confuciusornis , stem group birds include the dinosaurs and the pterosaurs . The last common ancestor of birds and crocodilians—the first crown group archosaur—was neither bird nor crocodilian and possessed none of the features unique to either. 
As the bird stem group evolved, distinctive bird features such as feathers and hollow bones appeared. Finally, at the base of the crown group, all traits common to extant birds were present. Under the widely used total-group perspective, [ 17 ] the Crocodylomorpha would become synonymous with the Crocodilia and the Avemetatarsalia would become synonymous with the birds, and the tree above could be summarized as two branches: Crocodilia and birds. An advantage of this approach is that declaring Theropoda to be birds (or Pan-Aves ) is more specific than declaring it to be a member of the Archosauria, which would not exclude it from the Crocodilia branch. Basal branch names such as Avemetatarsalia are usually more obscure. Less advantageous are the facts that "Pan-Aves" and "Aves" are not the same group, that the circumscription of the concept of "Pan-Aves" (synonymous with Avemetatarsalia) is only evident by examination of the tree itself, and that calling both groups "birds" is ambiguous. Stem mammals are those in the lineage leading to living mammals, together with side branches, from the divergence of the lineage from the Sauropsida to the last common ancestor of the living mammals. This group includes the synapsids as well as mammaliaforms like the morganucodonts and the docodonts ; the latter groups have traditionally and anatomically been considered mammals even though they fall outside the crown group mammals. [ 18 ] Stem tetrapods are the animals belonging to the lineage leading to tetrapods from their divergence from the lungfish , our nearest relatives among the fishes. In addition to a series of lobe-finned fishes , they also include some of the early labyrinthodonts . Exactly which labyrinthodonts belong in the stem group tetrapods rather than the corresponding crown group is uncertain, as the phylogeny of early tetrapods is not well understood. [ 19 ] This example shows that crown and stem group definitions are of limited value when there is no consensus phylogeny. Stem arthropods constitute a group that has seen attention in connection with the Burgess Shale fauna. Several of the finds , including the enigmatic Opabinia and Anomalocaris , have some, though not all, features associated with arthropods , and are thus considered stem arthropods. [ 20 ] [ 21 ] The sorting of the Burgess Shale fauna into various stem groups finally enabled phylogenetic sorting of this enigmatic assemblage and also allowed for identifying velvet worms as the closest living relatives of arthropods. [ 21 ] Stem priapulids are known from other early to middle Cambrian faunas, from Chengjiang to the Burgess Shale. The genus Ottoia has more or less the same build as modern priapulids , but phylogenetic analysis indicates that it falls outside the crown group, making it a stem priapulid. [ 3 ] The name plesion has a long history in biological systematics, and plesion group has acquired several meanings over the years. One use is as "nearby group" (plesion means close to in Greek ), i.e. sister group to a given taxon , whether that group is a crown group or not. [ 22 ] The term may also mean a group, possibly paraphyletic , defined by primitive traits (i.e. symplesiomorphies ). [ 23 ] It is generally taken to mean a side branch splitting off earlier on the phylogenetic tree than the group in question. Placing fossils in their right order in a stem group allows the order of these acquisitions to be established, and thus the ecological and functional setting of the evolution of the major features of the group in question. 
Stem groups thus offer a route to integrate unique palaeontological data into questions of the evolution of living organisms. Furthermore, they show that fossils which were considered to lie in their own separate group because they did not show all the diagnostic features of a living clade can nevertheless be related to it by lying in its stem group. Such fossils have been of particular importance in considering the origins of the tetrapods , mammals , and animals . The application of the stem group concept also influenced the interpretation of the organisms of the Burgess Shale . Their classification in stem groups to extant phyla, rather than in phyla of their own, is thought by some to make the Cambrian explosion easier to understand without invoking unusual evolutionary mechanisms; [ 21 ] however, application of the stem group concept does nothing to ameliorate the difficulties that phylogenetic telescoping [ 24 ] [ 25 ] poses to evolutionary theorists attempting to understand both macroevolutionary change and the abrupt character of the Cambrian explosion . Overemphasis on the stem group concept threatens to delay or obscure proper recognition of new higher taxa. [ 26 ] As originally proposed by Karl-Ernst Lauterbach , stem groups should be given the prefix "stem" (i.e. Stem-Aves, Stem-Arthropoda), while the crown group should have no prefix. [ 27 ] The latter convention has not been universally accepted for well-known groups, but a number of paleontologists have opted to apply this approach anyway. [ 28 ]
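Because crown, total (pan-), and stem groups are defined purely by tree topology and by which tips are extant, they can be computed mechanically from a rooted tree. The sketch below is a minimal illustration under an assumed tree representation; the node names and toy topology are simplified stand-ins, not a published phylogeny.

```python
# Minimal sketch: computing crown and stem groups from a rooted tree in
# which each tip is flagged extant or extinct. The tree structure and
# taxon names below are illustrative assumptions.

class Node:
    def __init__(self, name, children=(), extant=False):
        self.name = name
        self.children = list(children)
        self.extant = extant

    def tips(self):
        """Yield every leaf at or below this node."""
        if not self.children:
            yield self
        for child in self.children:
            yield from child.tips()

    def descendants(self):
        """Yield this node and every node below it."""
        yield self
        for child in self.children:
            yield from child.descendants()

def crown_group(total_group_root):
    """Find the crown ancestor: descend while exactly one child subtree
    contains all the extant tips; stop where the extant lineages split."""
    node = total_group_root
    while True:
        with_extant = [c for c in node.children
                       if any(tip.extant for tip in c.tips())]
        if len(with_extant) == 1:
            node = with_extant[0]
        else:
            return node

def stem_group(total_group_root):
    """Stem group = total group minus the crown group (all extinct)."""
    crown = set(crown_group(total_group_root).descendants())
    return [n for n in total_group_root.descendants() if n not in crown]

# Toy total group of birds relative to crocodilians (much simplified).
total_birds = Node("Avemetatarsalia", [
    Node("Pterosauria"),                      # extinct side branch
    Node("Dinosauria", [
        Node("Archaeopteryx"),                # extinct, outside the crown
        Node("Neornithes", [
            Node("Paleognathae", extant=True),
            Node("Neognathae", extant=True),
        ]),
    ]),
])

print(crown_group(total_birds).name)              # Neornithes
print([n.name for n in stem_group(total_birds)])
# ['Avemetatarsalia', 'Pterosauria', 'Dinosauria', 'Archaeopteryx']
```

Note that the internal lineage nodes (Avemetatarsalia, Dinosauria) count as part of the stem group here because they are ancestral to, but not within, the crown clade, matching the definition that the stem group includes the line itself as well as its extinct side branches.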
https://en.wikipedia.org/wiki/Stem_group
Stem-mixing is a method of mixing audio material based on creating groups of audio tracks called stems and processing them separately prior to combining them into a final master mix. Stems are also sometimes referred to as submixes, subgroups, or buses . The distinction between a stem and a separation is rather unclear. Some consider stem manipulation to be the same as separation mastering , although others consider stems to be sub-mixes to be used along with separation mastering. The distinction depends on how many separate channels of input are available for mixing, and at which stage they sit on the way towards being reduced to a final stereo mix. The technique originated in the 1960s [ citation needed ] , with the introduction of mixing boards equipped with the capability to assign individual inputs to sub-group faders and to work with each sub-group (stem mix) independently from the others. The approach is widely used in recording studios to control, process and manipulate entire groups of instruments such as drums, strings, or backup vocals, in order to streamline and simplify the mixing process. Additionally, as each stem bus usually has its own inserts, sends and returns, the stem mix (sub-mix) can be routed independently through its own signal processing chain , to achieve a different effect for each group of instruments. A similar method is also utilised with digital audio workstations (DAWs) , where separate groups of audio tracks may be digitally processed and manipulated through discrete chains of plugins . Stem-mastering is a technique derived from stem-mixing. Just as in stem-mixing, the individual audio tracks are grouped together to allow for independent control and signal processing of each stem, and each stem can be manipulated independently of the others. Most mastering engineers [ who? ] require music producers to leave at least -3 dB of headroom on each individual track before starting the stem-mastering process. The reason for this is to leave more space in the mix so that the mastered version can sound cleaner and louder [ citation needed ] . Even though stem-mastering is not commonly practiced by mastering studios, it does have its proponents [ who? ] . In audio production, a stem is a group of audio sources mixed together, usually by one person, to be dealt with downstream as one unit. A single stem may be delivered in mono , stereo , or in multiple tracks for surround sound . [ 1 ] In sound mixing for film , the preparation of stems is a common stratagem to facilitate the final mix. Dialogue, music and sound effects, called "D-M-E", are brought to the final mix as separate stems. Using stem mixing, the dialogue can easily be replaced by a foreign-language version, the effects can easily be adapted to different mono, stereo and surround systems, and the music can be changed to fit the desired emotional response. If the music and effects stems are sent to another production facility for foreign dialogue replacement, these non-dialogue stems are called "M&E". [ 1 ] [ 2 ] [ 3 ] The dialogue stem is used by itself when editing various scenes together to construct a trailer of the film; after this, some music and effects are mixed in to form a cohesive sequence. [ 4 ] In music mixing for recordings and for live sound , stems are subgroups of similar sound sources. When a large project uses more than one person mixing, stems can facilitate the job of the final mix engineer. 
Such stems may consist of all of the string instruments, a full orchestra, just background vocals, only the percussion instruments, a single drum set, or any other grouping that may ease the task of the final mix. Stems prepared in this fashion may be blended together later, as for a recording project or for consumer listening, or they may be mixed simultaneously, as in a live sound performance with multiple elements. [ 5 ] For instance, when Barbra Streisand toured in 2006 and 2007, the audio production crew used three people to run three mixing consoles : one to mix strings, one to mix brass, reeds and percussion, and one under main engineer Bruce Jackson's control out in the audience, carrying Streisand's microphone inputs and stems from the other two consoles. [ 6 ] Stems may be supplied to a musician in the recording studio so that the musician can adjust a headphone monitor mix by varying the levels of other instruments and vocals relative to the musician's own input. Stems may also be delivered to the consumer so they can listen to a piece of music with a custom blend of the separate elements. Stems can also be used by DJs for performing DJ sets and creating mashups . If music is not available in stem format, software and online services can extract song elements such as vocals, instruments, and drums into separate files.
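As a concrete illustration of the signal flow described above, here is a minimal sketch of stem mixing in Python with NumPy: tracks are summed into stem buses, each bus gets its own stand-in processing and gain, and the processed stems are summed into the master mix. The track names, gains, and the tanh "bus compressor" are illustrative assumptions, not a production workflow.

```python
# Minimal stem-mixing sketch: tracks -> stem buses -> master mix.
import numpy as np

SR = 48_000  # sample rate in Hz

def tone(freq, seconds=1.0, amp=0.2):
    """Placeholder 'track': a sine tone standing in for recorded audio."""
    t = np.arange(int(SR * seconds)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

# Individual tracks (stand-ins for a multitrack recording).
tracks = {
    "kick": tone(60), "snare": tone(200),
    "violin_1": tone(440), "violin_2": tone(660),
    "lead_vox": tone(330),
}

# Stems: sub-groups of tracks to be processed as one unit.
stems = {
    "drums":   ["kick", "snare"],
    "strings": ["violin_1", "violin_2"],
    "vocals":  ["lead_vox"],
}

def soft_clip(x, drive=1.5):
    """Crude stand-in for per-stem bus processing (compression/saturation)."""
    return np.tanh(drive * x) / np.tanh(drive)

stem_gain = {"drums": 0.8, "strings": 0.6, "vocals": 1.0}

# Build each stem bus independently, then sum the stems into the master.
stem_buses = {}
for name, members in stems.items():
    bus = sum(tracks[m] for m in members)     # sub-group the tracks
    stem_buses[name] = stem_gain[name] * soft_clip(bus)

master = sum(stem_buses.values())
master /= max(1.0, np.max(np.abs(master)))    # normalize to avoid clipping
```

The point of the structure is that each stem can be re-processed or re-balanced (for example, swapping the dialogue stem of a film mix) without touching the other groups.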
https://en.wikipedia.org/wiki/Stem_mixing_and_mastering
Stemagen is a corporation headed by Dr. Samuel Wood , notable for cloning adult skin cells. [ 1 ] [ 2 ] In January 2008, Dr. Andrew French, Stemagen's chief scientific officer, and Wood announced in California that they had successfully created the first five mature cloned human embryos using DNA from adult skin cells, aiming to provide a less controversial source of viable embryonic stem cells . [ 3 ] Dr. Wood and a colleague donated skin cells, and DNA from those cells was transferred to human eggs. It is not clear whether the embryos produced would have been capable of further development, but Dr. Wood stated that if that were possible, using the technology for reproductive cloning would be both unethical and illegal. The five cloned embryos, created in Stemagen Corporation's lab in La Jolla , were later destroyed in order to confirm the nuclear transfer process. [ 4 ] Dr. French, the lead author, and five other researchers published their findings in the online research journal Stem Cells , in an article entitled "Development of human cloned blastocysts following somatic cell nuclear transfer (SCNT) with adult fibroblasts". [ 3 ] [ 5 ]
https://en.wikipedia.org/wiki/Stemagen
In hydrology , stemflow is the flow of intercepted water down the trunk or stem of a plant . Stemflow, along with throughfall , is responsible for the transfer of precipitation and nutrients from the canopy to the soil . In tropical rainforests , where this kind of flow can be substantial, erosion gullies can form at the base of the trunk. However, in more temperate climates stemflow levels are low and have little erosional power. [ citation needed ] There are a variety of ways stemflow volume is measured in the field. The most common direct measurement currently used is the bonding of bisected PVC or other plastic tubing around the circumference of the tree trunk, connected and funneled into a graduated cylinder for manual collection or into a tipping-bucket rain gauge for automatic collection. At times the tubing is wrapped multiple times around the trunk in order to ensure more complete collection. [ 1 ] Several groups of factors influence stemflow. Precipitation: the primary meteorological characteristics of a rainfall event influence the amount and timing of stemflow. [ 2 ] Species: the species of the tree affects the amount and timing of stemflow through its particular morphological characteristics. Stand characteristics: in addition to the effects of individual tree species, the overall structure of the forest stand also influences the amount of stemflow that will ultimately occur. [ 3 ] Chemistry: nutrients that have accumulated on the canopy from dry deposition or animal feces are able to enter the soil directly through stemflow. When precipitation occurs, canopy nutrients are leached into the water because of the differences in nutrient concentration between the tree and the rainfall. Conversely, nutrients are taken up by the tree when the concentration is lower in the canopy than in the rainfall; the presence of epiphytes or lichens also contributes to uptake. The nutrients that enter the soil can also reflect the particular environmental conditions around them: for example, plants located in industrialized areas exhibit higher rates of sulfur and nitrogen (from air pollution ), whereas those located near the oceans have higher rates of sodium (from seawater ). [ 4 ] Soil acidification can be seen around some stems, for example beech trees, due to dry deposition. [ 5 ] Precipitation and morphological factors that influence stemflow timing and volume also affect its chemical composition; in general, stemflow water becomes more dilute during the course of a storm event, and stemflow from rough-barked species contains more nutrients than that from smooth-barked species. Water distribution: in forested areas, stemflow is considered a point-source input of water into the soil; water is thus more able to penetrate past the topsoil into deeper layers of the soil horizon along tree roots and the macropores they create (termed preferential flow ). The loosening of the soil can result in minor landslides .
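As a small worked example of the measurement setup just described, the sketch below converts tipping-bucket counts from a stemflow collar into a volume and an equivalent water depth. The tip volume, tip count, and canopy area are invented for illustration; real gauges require per-instrument calibration.

```python
# Sketch: turning tipping-bucket counts from a stemflow collar into a
# stemflow volume and an equivalent depth. All numbers are assumptions.

TIP_VOLUME_ML = 5.0      # calibrated volume delivered per bucket tip
CANOPY_AREA_M2 = 12.0    # projected canopy area of the sampled tree

def stemflow_volume_l(tip_count, tip_volume_ml=TIP_VOLUME_ML):
    """Total stemflow (liters) funneled through the collar in an event."""
    return tip_count * tip_volume_ml / 1000.0

def depth_equivalent_mm(volume_l, area_m2=CANOPY_AREA_M2):
    """Depth of water (mm) the volume represents over the canopy area:
    1 liter spread over 1 m^2 is exactly 1 mm of water."""
    return volume_l / area_m2

volume = stemflow_volume_l(2340)   # 2,340 tips logged during a storm
print(f"{volume:.1f} L -> {depth_equivalent_mm(volume):.2f} mm")
# -> 11.7 L and roughly 0.97 mm over the canopy area
```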
https://en.wikipedia.org/wiki/Stemflow
Textual criticism [ a ] is a branch of textual scholarship , philology , and literary criticism that is concerned with the identification of textual variants, or different versions, of either manuscripts (mss) or of printed books. Such texts may range in date from the earliest writing in cuneiform , impressed on clay, for example, to multiple unpublished versions of a 21st-century author's work. Historically, scribes who were paid to copy documents may have been literate, but many were simply copyists, mimicking the shapes of letters without necessarily understanding what they meant. [ citation needed ] This means that unintentional alterations were common when copying manuscripts by hand. [ 1 ] Intentional alterations may have been made as well, for example, the censoring of printed work for political, religious or cultural reasons. The objective of the textual critic's work is to provide a better understanding of the creation and historical transmission of the text and its variants. This understanding may lead to the production of a critical edition containing a scholarly curated text. If a scholar has several versions of a manuscript but no known original, then established methods of textual criticism can be used to seek to reconstruct the original text as closely as possible. The same methods can be used to reconstruct intermediate versions, or recensions , of a document's transcription history, depending on the number and quality of the texts available. [ b ] On the other hand, the one original text that a scholar theorizes to exist is referred to as the urtext (in the context of Biblical studies ), archetype or autograph ; however, there is not necessarily a single original text for every group of texts. For example, if a story was spread by oral tradition , and then later written down by different people in different locations, the versions can vary greatly. There are many approaches or methods to the practice of textual criticism, notably eclecticism , stemmatics , and copy-text editing . Quantitative techniques are also used to determine the relationships between witnesses to a text, called textual witnesses, with methods from evolutionary biology ( phylogenetics ) appearing to be effective on a range of traditions. [ 3 ] In some domains, such as religious and classical text editing, the phrase "lower criticism" refers to textual criticism and " higher criticism " to the endeavor to establish the authorship, date, and place of composition of the original text . Textual criticism has been practiced for over two thousand years, as one of the philological arts. [ 4 ] Early textual critics, especially the librarians of Hellenistic Alexandria in the last two centuries BC, were concerned with preserving the works of antiquity , and this continued through the Middle Ages into the early modern period and the invention of the printing press . Textual criticism was an important aspect of the work of many Renaissance humanists , such as Desiderius Erasmus , who edited the Greek New Testament , creating what developed as the Textus Receptus . In Italy, scholars such as Petrarch and Poggio Bracciolini collected and edited many Latin manuscripts, while a new spirit of critical enquiry was boosted by the attention to textual states, for example in the work of Lorenzo Valla on the purported Donation of Constantine . [ citation needed ] Many ancient works, such as the Bible and the Greek tragedies , survive in hundreds of copies, and the relationship of each copy to the original may be unclear. 
Textual scholars have debated for centuries which sources are most closely derived from the original, hence which readings in those sources are correct. [ citation needed ] Although texts such as Greek plays presumably had one original, the question of whether some biblical books, like the Gospels, ever had just one original has been discussed. [ 5 ] [ page needed ] Interest in applying textual criticism to the Quran has also developed since the discovery in 1972 of the Sana'a manuscripts, which possibly date back to the seventh or eighth century. [ citation needed ]

In the English language, the works of William Shakespeare have been a particularly fertile ground for textual criticism—both because the texts, as transmitted, contain a considerable amount of variation, and because the effort and expense of producing superior editions of his works have always been widely viewed as worthwhile. [ 6 ] The principles of textual criticism, although originally developed and refined for works of antiquity and the Bible, and, for Anglo-American copy-text editing, Shakespeare, [ 7 ] have been applied to many works, from (near-)contemporary texts to the earliest known written documents. Ranging from ancient Mesopotamia and Egypt to the twentieth century, the texts treated span a period of about five millennia. [ citation needed ]

The basic problem, as described by Paul Maas, is as follows: We have no autograph [handwritten by the original author] manuscripts of the Greek and Roman classical writers and no copies which have been collated with the originals; the manuscripts we possess derive from the originals through an unknown number of intermediate copies, and are consequently of questionable trustworthiness. The business of textual criticism is to produce a text as close as possible to the original ( constitutio textus ). [ 8 ] Maas comments further that "A dictation revised by the author must be regarded as equivalent to an autograph manuscript". The lack of autograph manuscripts applies to many cultures other than Greek and Roman. In such a situation, a key objective becomes the identification of the first exemplar before any split in the tradition. That exemplar is known as the archetype. "If we succeed in establishing the text of [the archetype], the constitutio (reconstruction of the original) is considerably advanced." [ 9 ]

The textual critic's ultimate objective is the production of a "critical edition". [ citation needed ] This contains the text that the editor has determined most closely approximates the original, and is accompanied by an apparatus criticus or critical apparatus. The critical apparatus presents the editor's work in three parts: first, a list or description of the evidence that the editor used (names of manuscripts, or abbreviations called sigla); second, the editor's analysis of that evidence (sometimes a simple likelihood rating); [ citation needed ] and third, a record of rejected variants of the text (often in order of preference). [ c ]

Before inexpensive mechanical printing, literature was copied by hand, and many variations were introduced by copyists. The age of printing made the scribal profession effectively redundant. Printed editions, while less susceptible to the proliferation of variations likely to arise during manual transmission, are nonetheless not immune to introducing variations from an author's autograph.
Instead of a scribe miscopying his source, a compositor or a printing shop may read or typeset a work in a way that differs from the autograph. [ 11 ] Since each scribe or printer commits different errors, reconstruction of the lost original is often aided by a selection of readings taken from many sources. An edited text that draws from multiple sources is said to be eclectic. In contrast to this approach, some textual critics prefer to identify the single best surviving text, and not to combine readings from multiple sources. [ d ]

When comparing different documents, or "witnesses", of a single, original text, the observed differences are called variant readings, or simply variants or readings. It is not always apparent which single variant represents the author's original work. The process of textual criticism seeks to explain how each variant may have entered the text, either by accident (duplication or omission) or intention (harmonization or censorship), as scribes or supervisors transmitted the original author's text by copying it. The textual critic's task, therefore, is to sort through the variants, eliminating those most likely to be unoriginal, hence establishing a critical text, or critical edition, that is intended to best approximate the original. At the same time, the critical text should document variant readings, so the relation of extant witnesses to the reconstructed original is apparent to a reader of the critical edition. In establishing the critical text, the textual critic considers both "external" evidence (the age, provenance, and affiliation of each witness) and "internal" or "physical" considerations (what the author and scribes, or printers, were likely to have done). [ 5 ] [ page needed ]

The collation of all known variants of a text is referred to as a variorum, namely a work of textual criticism whereby all variations and emendations are set side by side so that a reader can track how textual decisions have been made in the preparation of a text for publication. [ 13 ] The Bible and the works of William Shakespeare have often been the subjects of variorum editions, although the same techniques have been applied with less frequency to many other works, such as Walt Whitman's Leaves of Grass, [ 14 ] and the prose writings of Edward FitzGerald. [ 15 ]

In practice, citation of manuscript evidence implies any of several methodologies. The ideal, but most costly, method is physical inspection of the manuscript itself; alternatively, published photographs or facsimile editions may be inspected. This method involves paleographical analysis—interpretation of handwriting, incomplete letters and even reconstruction of lacunae. More typically, editions of manuscripts are consulted, which have done this paleographical work already. [ citation needed ]

Eclecticism refers to the practice of consulting a wide diversity of witnesses to a particular original. The practice is based on the principle that the more independent transmission histories there are, the less likely they will be to reproduce the same errors. What one omits, the others may retain; what one adds, the others are unlikely to add. Eclecticism allows inferences to be drawn regarding the original text, based on the evidence of contrasts between witnesses. [ citation needed ] Eclectic readings also normally give an impression of the number of witnesses to each available reading. Although a reading supported by the majority of witnesses is frequently preferred, this does not follow automatically, as the sketch and the example below illustrate.
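Tallying witness support for each reading at a given point of variation (a "variation unit") is the mechanical part of this bookkeeping. A minimal sketch, with invented sigla and readings:

```python
from collections import Counter

# Invented readings of a single variation unit across five witnesses.
unit_readings = {
    "A": "he said unto them",
    "B": "he said to them",
    "C": "he said unto them",
    "D": "he said unto them",
    "E": "he said",
}

support = Counter(unit_readings.values())
for reading, count in support.most_common():
    sigla = sorted(w for w, r in unit_readings.items() if r == reading)
    print(f'"{reading}": {count} witness(es) ({", ".join(sigla)})')
```

Such counts inform, but do not decide, the editor's choice.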
For example, a second edition of a Shakespeare play may include an addition alluding to an event known to have happened between the two editions. Although nearly all subsequent manuscripts may have included the addition, textual critics may reconstruct the original without the addition. [ citation needed ] The result of the process is a text with readings drawn from many witnesses. It is not a copy of any particular manuscript, and may deviate from the majority of existing manuscripts. In a purely eclectic approach, no single witness is theoretically favored. Instead, the critic forms opinions about individual witnesses, relying on both external and internal evidence. [ 16 ] Since the mid-19th century, eclecticism, in which there is no a priori bias to a single manuscript, has been the dominant method of editing the Greek text of the New Testament (currently, the United Bible Societies, 5th ed., and Nestle-Aland, 28th ed.). Even so, the oldest manuscripts, being of the Alexandrian text-type, are the most favored, and the critical text has an Alexandrian disposition. [ 17 ]

External evidence is evidence of each physical witness, its date, source, and relationship to other known witnesses. Critics [ who? ] will often prefer the readings supported by the oldest witnesses. Since errors tend to accumulate, older manuscripts should have fewer errors. Readings supported by a majority of witnesses are also usually preferred, since these are less likely to reflect accidents or individual biases. For the same reasons, the most geographically diverse witnesses are preferred. Some manuscripts [ which? ] show evidence that particular care was taken in their composition, for example, by including alternative readings in their margins, demonstrating that more than one prior copy (exemplar) was consulted in producing the current one. Other factors being equal, these are the best witnesses. The role of the textual critic is necessary when these basic criteria are in conflict. For instance, there will typically be fewer early copies, and a larger number of later copies. The textual critic will attempt to balance these criteria, to determine the original text. [ citation needed ] There are many other more sophisticated considerations. For example, readings that depart from the known practice of a scribe or a given period may be deemed more reliable, since a scribe is unlikely on his own initiative to have departed from the usual practice. [ 18 ]

Internal evidence is evidence that comes from the text itself, independent of the physical characteristics of the document. Various considerations can be used to decide which reading is the most likely to be original. Sometimes these considerations can be in conflict. [ 18 ] Two common considerations have the Latin names lectio brevior (shorter reading) and lectio difficilior (more difficult reading). The first is the general observation that scribes tended to add words, for clarification or out of habit, more often than they removed them. The second, lectio difficilior potior (the harder reading is stronger), recognizes the tendency for harmonization—resolving apparent inconsistencies in the text. Applying this principle leads to taking the more difficult (unharmonized) reading as being more likely to be the original. Such cases also include scribes simplifying and smoothing texts they did not fully understand. [ 19 ]

Another scribal tendency is called homoioteleuton, meaning "similar endings": it occurs when two words, phrases, or lines end with a similar sequence of letters, and the scribe, having finished copying the first, skips to the second, omitting all intervening words. Homoioarche refers to eye-skip when the beginnings of two lines are similar. [ 20 ]
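A toy check for such eye-skip hazards: flag pairs of lines whose endings share a long common suffix, since a copyist's eye could jump between them and drop the intervening text. The sample lines and the threshold are invented for illustration.

```python
def common_suffix_len(a: str, b: str) -> int:
    """Length of the longest common ending of two strings."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

lines = [
    "and he went up into the mountain",
    "and he prayed there until morning",
    "and he came down from the mountain",
]

THRESHOLD = 8  # minimum shared ending, in characters
for i in range(len(lines)):
    for j in range(i + 1, len(lines)):
        shared = common_suffix_len(lines[i], lines[j])
        if shared >= THRESHOLD:
            print(f"lines {i + 1} and {j + 1} share the ending "
                  f"{lines[i][-shared:]!r}: possible homoioteleuton trap")
```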
The critic may also examine the other writings of the author to decide what words and grammatical constructions match his style. The evaluation of internal evidence also provides the critic with information that helps him evaluate the reliability of individual manuscripts. Thus, the consideration of internal and external evidence is related. [ citation needed ] After considering all relevant factors, the textual critic seeks the reading that best explains how the other readings would arise. That reading is then the most likely candidate to have been original. [ citation needed ]

Various scholars have developed guidelines, or canons of textual criticism, to guide the exercise of the critic's judgment in determining the best readings of a text. One of the earliest was Johann Albrecht Bengel (1687–1752), who in 1734 produced an edition of the Greek New Testament. In his commentary, he established the rule Proclivi scriptioni praestat ardua ("the harder reading is to be preferred"). [ 21 ] Johann Jakob Griesbach (1745–1812) published several editions of the New Testament. In his 1796 edition, [ 22 ] he established fifteen critical rules. Among them was a variant of Bengel's rule, Lectio difficilior potior, "the harder reading is better." Another was Lectio brevior praeferenda, "the shorter reading is better", based on the idea that scribes were more likely to add than to delete. [ 23 ] This rule cannot be applied uncritically, as scribes may omit material inadvertently. [ citation needed ] Brooke Foss Westcott (1825–1901) and Fenton Hort (1828–1892) published an edition of the New Testament in Greek in 1881. They proposed nine critical rules, including a version of Bengel's rule, "The reading is less likely to be original that shows a disposition to smooth away difficulties." They also argued that "Readings are approved or rejected by reason of the quality, and not the number, of their supporting witnesses", and that "The reading is to be preferred that most fitly explains the existence of the others." [ 24 ] Many of these rules, although originally developed for biblical textual criticism, have wide applicability to any text susceptible to errors of transmission. [ citation needed ]

Since the canons of criticism are highly susceptible to interpretation, and at times even contradict each other, they may be employed to justify a result that fits the textual critic's aesthetic or theological agenda. Starting in the 19th century, scholars sought more rigorous methods to guide editorial judgment. Stemmatics and copy-text editing – while both eclectic, in that they permit the editor to select readings from multiple sources – sought to reduce subjectivity by establishing one or a few witnesses presumably as being favored by "objective" criteria. [ citation needed ] The citing of sources used, of alternate readings, and of original text and images helps readers and other critics determine to an extent the depth of research of the critic, and to independently verify their work. [ citation needed ]

Stemmatics or stemmatology is a rigorous approach to textual criticism. Karl Lachmann (1793–1851) greatly contributed to making this method famous, even though he did not invent it.
[ 25 ] The method takes its name from the word stemma. The Ancient Greek word στέμματα [ 26 ] and its loanword in classical Latin stemmata [ 26 ] [ 27 ] [ 28 ] may refer to "family trees". In this specific sense, the stemma shows the relationships of the surviving witnesses (the first known example of such a stemma, albeit without the name, dates from 1827). [ 29 ] The family tree is also referred to as a cladogram. [ 30 ] The method works from the principle that "community of error implies community of origin". That is, if two witnesses have a number of errors in common, it may be presumed that they were derived from a common intermediate source, called a hyparchetype. Relations between the lost intermediates are determined by the same process, placing all extant manuscripts in a family tree or stemma codicum descended from a single archetype. The process of constructing the stemma is called recension, or the Latin recensio. [ 31 ]

Having completed the stemma, the critic proceeds to the next step, called selection or selectio, where the text of the archetype is determined by examining variants from the closest hyparchetypes to the archetype and selecting the best ones. If one reading occurs more often than another at the same level of the tree, then the dominant reading is selected. If two competing readings occur equally often, then the editor uses judgment to select the correct reading. [ 32 ] After selectio, the text may still contain errors, since there may be passages where no source preserves the correct reading. The step of examination, or examinatio, is applied to find corruptions. Where the editor concludes that the text is corrupt, it is corrected by a process called "emendation", or emendatio (also sometimes called divinatio). Emendations not supported by any known source are sometimes called conjectural emendations. [ 33 ]

The process of selectio resembles eclectic textual criticism, but applied to a restricted set of hypothetical hyparchetypes. The steps of examinatio and emendatio resemble copy-text editing. In fact, the other techniques can be seen as special cases of stemmatics in which a rigorous family history of the text cannot be determined but only approximated. If it seems that one manuscript is by far the best text, then copy-text editing is appropriate, and if it seems that a group of manuscripts is good, then eclecticism on that group would be proper. [ 34 ] The Hodges–Farstad edition of the Greek New Testament attempts to use stemmatics for some portions. [ 35 ]

Phylogenetics is a technique borrowed from biology, where it was originally named phylogenetic systematics by Willi Hennig. In biology, the technique is used to determine the evolutionary relationships between different species. [ 36 ] In its application in textual criticism, the texts of a number of different witnesses may be entered into a computer, which records all the differences between them, or the data may be derived from an existing apparatus. The manuscripts are then grouped according to their shared characteristics. The difference between phylogenetics and more traditional forms of statistical analysis is that, rather than simply arranging the manuscripts into rough groupings according to their overall similarity, phylogenetics assumes that they are part of a branching family tree and uses that assumption to derive relationships between them. This makes it more like an automated approach to stemmatics.
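The grouping step can be caricatured in a few lines: repeatedly merge the two clusters of witnesses separated by the smallest number of disagreements until one nested tree remains. The sigla and the disagreement matrix below are invented, and this is plain single-linkage clustering rather than a true phylogenetic method; real phylogenetic software is far more sophisticated.

```python
# Invented pairwise disagreement counts between four witnesses.
disagreement = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 8,
    ("B", "C"): 9, ("B", "D"): 8, ("C", "D"): 3,
}

def dist(x: str, y: str) -> int:
    return disagreement[(x, y)] if (x, y) in disagreement else disagreement[(y, x)]

def linkage(members1: set, members2: set) -> int:
    """Single linkage: smallest distance between any two members."""
    return min(dist(x, y) for x in members1 for y in members2)

# Each cluster is (set of sigla, nested-tuple tree recording merge order).
clusters = [({s}, s) for s in "ABCD"]
while len(clusters) > 1:
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: linkage(clusters[ij[0]][0], clusters[ij[1]][0]),
    )
    (m1, t1), (m2, t2) = clusters[i], clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
    clusters.append((m1 | m2, (t1, t2)))

print(clusters[0][1])  # (('A', 'B'), ('C', 'D'))
```

The nested tuple records only the grouping, not which group sits closest to the archetype.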
However, where there is a difference, the computer does not attempt to decide which reading is closer to the original text, and so does not indicate which branch of the tree is the "root"—which manuscript tradition is closest to the original. Other types of evidence must be used for that purpose. [ citation needed ] Phylogenetics faces the same difficulty as textual criticism: the appearance of characteristics in descendants of an ancestor other than by direct copying (or miscopying) of the ancestor, for example where a scribe combines readings from two or more different manuscripts ("contamination"). The same phenomenon is widely present among living organisms, as instances of horizontal gene transfer (or lateral gene transfer) and genetic recombination, particularly among bacteria. Further exploration of the applicability of the different methods for coping with these problems across both living organisms and textual traditions is a promising area of study. [ 37 ]

Software developed for use in biology has been applied successfully to textual criticism; for example, it is being used by the Canterbury Tales Project [ 38 ] to determine the relationship between the 84 surviving manuscripts and four early printed editions of The Canterbury Tales. Shaw's edition of Dante's Commedia uses phylogenetic and traditional methods alongside each other in a comprehensive exploration of relations among seven early witnesses to Dante's text. [ 39 ]

The stemmatic method assumes that each witness is derived from one, and only one, predecessor. If a scribe refers to more than one source when creating her or his copy, then the new copy will not clearly fall into a single branch of the family tree. In the stemmatic method, a manuscript that is derived from more than one source is said to be contaminated. [ citation needed ] The method also assumes that scribes only make new errors—they do not attempt to correct the errors of their predecessors. When a text has been improved by the scribe, it is said to be sophisticated, but "sophistication" impairs the method by obscuring a document's relationship to other witnesses, and making it more difficult to place the manuscript correctly in the stemma. [ citation needed ]

The stemmatic method requires the textual critic to group manuscripts by commonality of error. It is required, therefore, that the critic can distinguish erroneous readings from correct ones. This assumption has often come under attack. W. W. Greg noted: "That if a scribe makes a mistake he will inevitably produce nonsense is the tacit and wholly unwarranted assumption." [ 40 ]

Franz Anton Knittel defended the traditional point of view in theology and opposed modern textual criticism. He defended the authenticity of the Pericope Adulterae (John 7:53–8:11), the Comma Johanneum (1 John 5:7), and the Testimonium Flavianum. According to him, Erasmus in his Novum Instrumentum omne did not incorporate the Comma from Codex Montfortianus because of grammar differences, but used the Complutensian Polyglotta; according to him, the Comma was known to Tertullian. [ 41 ]

The stemmatic method's final step is emendatio, also sometimes referred to as "conjectural emendation". But, in fact, the critic employs conjecture at every step of the process. Some of the method's rules that are designed to reduce the exercise of editorial judgment do not necessarily produce the correct result.
For example, where there are more than two witnesses at the same level of the tree, normally the critic will select the dominant reading. However, it may be no more than fortuitous that more witnesses have survived that present a particular reading. A plausible reading that occurs less often may, nevertheless, be the correct one. [ 42 ] Lastly, the stemmatic method assumes that every extant witness is derived, however remotely, from a single source. It does not account for the possibility that the original author may have revised her or his work, and that the text could have existed at different times in more than one authoritative version. [ citation needed ]

The critic Joseph Bédier (1864–1938), who had worked with stemmatics, launched an attack on that method in 1928. He surveyed editions of medieval French texts that were produced with the stemmatic method, and found that textual critics tended overwhelmingly to produce bifid trees, divided into just two branches. He concluded that this outcome was unlikely to have occurred by chance, and that, therefore, the method was tending to produce bipartite stemmas regardless of the actual history of the witnesses. He suspected that editors tended to favor trees with two branches, as this would maximize the opportunities for editorial judgment (as there would be no third branch to "break the tie" whenever the witnesses disagreed). He also noted that, for many works, more than one reasonable stemma could be postulated, suggesting that the method was not as rigorous or as scientific as its proponents had claimed. [ citation needed ] Bédier's doubts about the stemmatic method led him to consider whether it could be dropped altogether. As an alternative to stemmatics, Bédier proposed a best-text editing method, in which a single textual witness, judged to be of a 'good' textual state by the editor, is emended as lightly as possible for manifest transmission mistakes, but left otherwise unchanged. This makes a best-text edition essentially a documentary edition. For an example, one may refer to Eugene Vinaver's edition of the Winchester Manuscript of Malory's Le Morte d'Arthur. [ citation needed ]

In copy-text editing, the scholar fixes errors in a base text, often with the help of other witnesses. Often, the base text is selected from the oldest manuscript of the text, but in the early days of printing, the copy text was often a manuscript that was at hand. [ citation needed ] Using the copy-text method, the critic examines the base text and makes corrections (called emendations) in places where the base text appears wrong to the critic. This can be done by looking for places in the base text that do not make sense or by looking at the text of other witnesses for a superior reading. Close-call decisions are usually resolved in favor of the copy-text (a schematic sketch of the procedure follows at the end of this passage). [ citation needed ]

The first published, printed edition of the Greek New Testament was produced by this method. Erasmus, the editor, selected a manuscript from the local Dominican monastery in Basle and corrected its obvious errors by consulting other local manuscripts. The Westcott and Hort text, which was the basis for the Revised Version of the English Bible, also used the copy-text method, using the Codex Vaticanus as the base manuscript. [ 44 ] The bibliographer Ronald B. McKerrow introduced the term copy-text in his 1904 edition of the works of Thomas Nashe, defining it as "the text used in each particular case as the basis of mine".
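A schematic sketch of the copy-text procedure just described, with invented texts and editorial judgments: a variant displaces the copy-text reading only where the editor has judged the copy-text manifestly wrong, and close calls default to the copy-text.

```python
copy_text = "the quick brovne fox jumps over the lazy dog".split()

# (token index, copy-text reading, variant reading, copy-text judged corrupt?)
collation = [
    (2, "brovne", "brown", True),   # manifest misprint: emend
    (4, "jumps", "jumped", False),  # indifferent variant: keep copy-text
]

edited = copy_text[:]
apparatus = []  # a crude record of decisions, in the spirit of an apparatus
for index, base, variant, corrupt in collation:
    if corrupt:
        edited[index] = variant
        apparatus.append(f"{index}: {variant} ] {base} copy-text")
    else:
        apparatus.append(f"{index}: {base} ] {variant} rejected")

print(" ".join(edited))
print("\n".join(apparatus))
```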
McKerrow was aware of the limitations of the stemmatic method, and believed it was more prudent to choose one particular text that was thought to be particularly reliable, and then to emend it only where the text was obviously corrupt. The French critic Joseph Bédier likewise became disenchanted with the stemmatic method, and concluded that the editor should choose the best available text, and emend it as little as possible. [ citation needed ] In McKerrow's method as originally introduced, the copy-text was not necessarily the earliest text. In some cases, McKerrow would choose a later witness, noting that "if an editor has reason to suppose that a certain text embodies later corrections than any other, and at the same time has no ground for disbelieving that these corrections, or some of them at least, are the work of the author, he has no choice but to make that text the basis of his reprint". [ 45 ] By 1939, in his Prolegomena for the Oxford Shakespeare , McKerrow had changed his mind about this approach, as he feared that a later edition—even if it contained authorial corrections—would "deviate more widely than the earliest print from the author's original manuscript". He therefore concluded that the correct procedure would be "produced by using the earliest 'good' print as copy-text and inserting into it, from the first edition which contains them, such corrections as appear to us to be derived from the author". But, fearing the arbitrary exercise of editorial judgment, McKerrow stated that, having concluded that a later edition had substantive revisions attributable to the author, "we must accept all the alterations of that edition, saving any which seem obvious blunders or misprints". [ 46 ] Anglo-American textual criticism in the last half of the 20th century came to be dominated by a landmark 1950 essay by Sir Walter W. Greg , "The Rationale of Copy-Text". Greg proposed: [A] distinction between the significant, or as I shall call them 'substantive', readings of the text, those namely that affect the author's meaning or the essence of his expression, and others, such in general as spelling, punctuation, word-division, and the like, affecting mainly its formal presentation, which may be regarded as the accidents, or as I shall call them 'accidentals', of the text. [ 47 ] Greg observed that compositors at printing shops tended to follow the "substantive" readings of their copy faithfully, except when they deviated unintentionally; but that "as regards accidentals they will normally follow their own habits or inclination, though they may, for various reasons and to varying degrees, be influenced by their copy". [ 48 ] He concluded: The true theory is, I contend, that the copy-text should govern (generally) in the matter of accidentals, but that the choice between substantive readings belongs to the general theory of textual criticism and lies altogether beyond the narrow principle of the copy-text. Thus it may happen that in a critical edition the text rightly chosen as copy may not by any means be the one that supplies most substantive readings in cases of variation. The failure to make this distinction and to apply this principle has naturally led to too close and too general a reliance upon the text chosen as basis for an edition, and there has arisen what may be called the tyranny of the copy-text, a tyranny that has, in my opinion, vitiated much of the best editorial work of the past generation. 
[ 49 ] Greg's view, in short, was that the "copy-text can be allowed no over-riding or even preponderant authority so far as substantive readings are concerned". The choice between reasonable competing readings, he said: [W]ill be determined partly by the opinion the editor may form respecting the nature of the copy from which each substantive edition was printed, which is a matter of external authority; partly by the intrinsic authority of the several texts as judged by the relative frequency of manifest errors therein; and partly by the editor's judgment of the intrinsic claims of individual readings to originality—in other words their intrinsic merit, so long as by 'merit' we mean the likelihood of their being what the author wrote rather than their appeal to the individual taste of the editor. [ 50 ] Although Greg argued that editors should be free to use their judgment to choose between competing substantive readings, he suggested that an editor should defer to the copy-text when "the claims of two readings ... appear to be exactly balanced. ... In such a case, while there can be no logical reason for giving preference to the copy-text, in practice, if there is no reason for altering its reading, the obvious thing seems to be to let it stand." [ 51 ] The "exactly balanced" variants are said to be indifferent . [ citation needed ] Editors who follow Greg's rationale produce eclectic editions, in that the authority for the "accidentals" is derived from one particular source (usually the earliest one) that the editor considers to be authoritative, but the authority for the "substantives" is determined in each individual case according to the editor's judgment. The resulting text, except for the accidentals, is constructed without relying predominantly on any one witness. [ citation needed ] W. W. Greg did not live long enough to apply his rationale of copy-text to any actual editions of works. His rationale was adopted and significantly expanded by Fredson Bowers (1905–1991). Starting in the 1970s, G. Thomas Tanselle vigorously took up the method's defense and added significant contributions of his own. Greg's rationale as practiced by Bowers and Tanselle has come to be known as the "Greg–Bowers" or the "Greg–Bowers–Tanselle" method. [ citation needed ] In his 1964 essay, "Some Principles for Scholarly Editions of Nineteenth-Century American Authors", Bowers said that "the theory of copy-text proposed by Sir Walter Greg rules supreme". [ 52 ] Bowers's assertion of "supremacy" was in contrast to Greg's more modest claim that "My desire is rather to provoke discussion than to lay down the law". [ 53 ] Whereas Greg had limited his illustrative examples to English Renaissance drama, where his expertise lay, Bowers argued that the rationale was "the most workable editorial principle yet contrived to produce a critical text that is authoritative in the maximum of its details whether the author be Shakespeare, Dryden , Fielding , Nathaniel Hawthorne , or Stephen Crane . The principle is sound without regard for the literary period." [ 54 ] For works where an author's manuscript survived—a case Greg had not considered—Bowers concluded that the manuscript should generally serve as copy-text. Citing the example of Nathaniel Hawthorne, he noted: When an author's manuscript is preserved, this has paramount authority, of course. Yet the fallacy is still maintained that since the first edition was proofread by the author, it must represent his final intentions and hence should be chosen as copy-text. 
Practical experience shows the contrary. When one collates the manuscript of The House of the Seven Gables against the first printed edition, one finds an average of ten to fifteen differences per page between the manuscript and the print, many of them consistent alterations from the manuscript system of punctuation, capitalization, spelling, and word-division. It would be ridiculous to argue that Hawthorne made approximately three to four thousand small changes in proof, and then wrote the manuscript of The Blithedale Romance according to the same system as the manuscript of the Seven Gables , a system that he had rejected in proof. [ 55 ] Following Greg, the editor would then replace any of the manuscript readings with substantives from printed editions that could be reliably attributed to the author: "Obviously, an editor cannot simply reprint the manuscript, and he must substitute for its readings any words that he believes Hawthorne changed in proof." [ 55 ] McKerrow had articulated textual criticism's goal in terms of "our ideal of an author's fair copy of his work in its final state". [ 56 ] Bowers asserted that editions founded on Greg's method would "represent the nearest approximation in every respect of the author's final intentions." [ 57 ] Bowers stated similarly that the editor's task is to "approximate as nearly as possible an inferential authorial fair copy." [ 58 ] Tanselle notes that, "Textual criticism ... has generally been undertaken with a view to reconstructing, as accurately as possible, the text finally intended by the author". [ 59 ] Bowers and Tanselle argue for rejecting textual variants that an author inserted at the suggestion of others. Bowers said that his edition of Stephen Crane 's first novel, Maggie , presented "the author's final and uninfluenced artistic intentions." [ 60 ] In his writings, Tanselle refers to "unconstrained authorial intention" or "an author's uninfluenced intentions." [ 61 ] This marks a departure from Greg, who had merely suggested that the editor inquire whether a later reading "is one that the author can reasonably be supposed to have substituted for the former", [ 62 ] not implying any further inquiry as to why the author had made the change. [ citation needed ] Tanselle discusses the example of Herman Melville 's Typee . After the novel's initial publication, Melville's publisher asked him to soften the novel's criticisms of missionaries in the South Seas. Although Melville pronounced the changes an improvement, Tanselle rejected them in his edition, concluding that "there is no evidence, internal or external, to suggest that they are the kinds of changes Melville would have made without pressure from someone else." [ 63 ] Bowers confronted a similar problem in his edition of Maggie . Crane originally printed the novel privately in 1893. To secure commercial publication in 1896, Crane agreed to remove profanity, but he also made stylistic revisions. Bowers's approach was to preserve the stylistic and literary changes of 1896, but to revert to the 1893 readings where he believed that Crane was fulfilling the publisher's intention rather than his own. There were, however, intermediate cases that could reasonably have been attributed to either intention, and some of Bowers's choices came under fire—both as to his judgment, and as to the wisdom of conflating readings from the two different versions of Maggie . 
[ 64 ] Hans Zeller argued that it is impossible to tease apart the changes Crane made for literary reasons and those made at the publisher's insistence: Firstly, in anticipation of the character of the expected censorship, Crane could be led to undertake alterations which also had literary value in the context of the new version. Secondly, because of the systematic character of the work, purely censorial alterations sparked off further alterations, determined at this stage by literary considerations. Again in consequence of the systemic character of the work, the contamination of the two historical versions in the edited text gives rise to a third version. Though the editor may indeed give a rational account of his decision at each point on the basis of the documents, nevertheless to aim to produce the ideal text which Crane would have produced in 1896 if the publisher had left him complete freedom is to my mind just as unhistorical as the question of how the first World War or the history of the United States would have developed if Germany had not caused the USA to enter the war in 1917 by unlimited submarine combat. The nonspecific form of censorship described above is one of the historical conditions under which Crane wrote the second version of Maggie and made it function. From the text which arose in this way it is not possible to subtract these forces and influences, in order to obtain a text of the author's own. Indeed I regard the "uninfluenced artistic intentions" of the author as something which exists only in terms of aesthetic abstraction. Between influences on the author and influences on the text are all manner of transitions. [ 65 ] Bowers and Tanselle recognize that texts often exist in more than one authoritative version. Tanselle argues that: [T]wo types of revision must be distinguished: that which aims at altering the purpose, direction, or character of a work, thus attempting to make a different sort of work out of it; and that which aims at intensifying, refining, or improving the work as then conceived (whether or not it succeeds in doing so), thus altering the work in degree but not in kind. If one may think of a work in terms of a spatial metaphor, the first might be labeled "vertical revision," because it moves the work to a different plane, and the second "horizontal revision," because it involves alterations within the same plane. Both produce local changes in active intention; but revisions of the first type appear to be in fulfillment of an altered programmatic intention or to reflect an altered active intention in the work as a whole, whereas those of the second do not. [ 66 ] He suggests that where a revision is "horizontal" ( i.e. , aimed at improving the work as originally conceived), then the editor should adopt the author's later version. But where a revision is "vertical" ( i.e. , fundamentally altering the work's intention as a whole), then the revision should be treated as a new work, and edited separately on its own terms. [ citation needed ] Bowers was also influential in defining the form of critical apparatus that should accompany a scholarly edition. In addition to the content of the apparatus, Bowers led a movement to relegate editorial matter to appendices, leaving the critically established text "in the clear", that is, free of any signs of editorial intervention. 
Tanselle explained the rationale for this approach: In the first place, an editor's primary responsibility is to establish a text; whether his goal is to reconstruct that form of the text which represents the author's final intention or some other form of the text, his essential task is to produce a reliable text according to some set of principles. Relegating all editorial matter to an appendix and allowing the text to stand by itself serves to emphasize the primacy of the text and permits the reader to confront the literary work without the distraction of editorial comment and to read the work with ease. A second advantage of a clear text is that it is easier to quote from or to reprint. Although no device can insure accuracy of quotation, the insertion of symbols (or even footnote numbers) into a text places additional difficulties in the way of the quoter. Furthermore, most quotations appear in contexts where symbols are inappropriate; thus when it is necessary to quote from a text which has not been kept clear of apparatus, the burden of producing a clear text of the passage is placed on the quoter. Even footnotes at the bottom of the text pages are open to the same objection, when the question of a photographic reprint arises. [ 67 ] Some critics [ who? ] believe that a clear-text edition gives the edited text too great a prominence, relegating textual variants to appendices that are difficult to use, and suggesting a greater sense of certainty about the established text than it deserves. As Shillingsburg notes, "English scholarly editions have tended to use notes at the foot of the text page, indicating, tacitly, a greater modesty about the "established" text and drawing attention more forcibly to at least some of the alternative forms of the text". [ 68 ] In 1963, the Modern Language Association of America (MLA) established the Center for Editions of American Authors (CEAA). The CEAA's Statement of Editorial Principles and Procedures , first published in 1967, adopted the Greg–Bowers rationale in full. A CEAA examiner would inspect each edition, and only those meeting the requirements would receive a seal denoting "An Approved Text." [ citation needed ] Between 1966 and 1975, the Center allocated more than $1.5 million in funding from the National Endowment for the Humanities to various scholarly editing projects, which were required to follow the guidelines (including the structure of editorial apparatus) as Bowers had defined them. [ 69 ] According to Davis, the funds coordinated by the CEAA over the same period were more than $6 million, counting funding from universities, university presses, and other bodies. [ 70 ] The Center for Scholarly Editions (CSE) replaced the CEAA in 1976. The change of name indicated the shift to a broader agenda than just American authors. The center also ceased its role in the allocation of funds. The center's latest guidelines (2003) no longer prescribe a particular editorial procedure. [ 71 ] The Church of Jesus Christ of Latter-day Saints ( LDS Church ) includes the Book of Mormon as a foundational reference. LDS members typically believe the book to be a literal historical record. [ citation needed ] Although some earlier unpublished studies had been prepared, [ citation needed ] not until the early 1970s was true textual criticism applied to the Book of Mormon. At that time BYU Professor Ellis Rasmussen and his associates were asked by the LDS Church to begin preparation for a new edition of the Holy Scriptures. 
One aspect of that effort entailed digitizing the text and preparing appropriate footnotes; another aspect required establishing the most dependable text. To that latter end, Stanley R. Larson (a Rasmussen graduate student) set about applying modern text-critical standards to the manuscripts and early editions of the Book of Mormon as his thesis project—which he completed in 1974. To that end, Larson carefully examined the Original Manuscript (the one dictated by Joseph Smith to his scribes) and the Printer's Manuscript (the copy Oliver Cowdery prepared for the printer in 1829–1830), and compared them with the first, second, and third editions of the Book of Mormon to determine what sort of changes had occurred over time and to make judgments as to which readings were the most original. [ 72 ] Larson proceeded to publish a useful set of well-argued articles on the phenomena which he had discovered. [ 73 ] Many of his observations were included as improvements in the 1981 LDS edition of the Book of Mormon. [ citation needed ]

By 1979, with the establishment of the Foundation for Ancient Research and Mormon Studies (FARMS) as a California non-profit research institution, an effort led by Robert F. Smith began to take full account of Larson's work and to publish a critical text of the Book of Mormon. Thus was born the FARMS Critical Text Project, which published the first volume of the three-volume Book of Mormon Critical Text in 1984. The third volume of that first edition was published in 1987, but was already being superseded by a second, revised edition of the entire work, [ 74 ] greatly aided through the advice and assistance of then Yale doctoral candidate Grant Hardy, Dr. Gordon C. Thomasson, Professor John W. Welch (the head of FARMS), Professor Royal Skousen, and others too numerous to mention here. However, these were merely preliminary steps to a far more exacting and all-encompassing project. [ citation needed ]

In 1988, with that preliminary phase of the project completed, Professor Skousen took over as editor and head of the FARMS Critical Text of the Book of Mormon Project and proceeded to gather still-scattered fragments of the Original Manuscript of the Book of Mormon and to have advanced photographic techniques applied to obtain fine readings from otherwise unreadable pages and fragments. He also closely examined the Printer's Manuscript (owned by the Community of Christ—RLDS Church in Independence, Missouri) for differences in types of ink or pencil, in order to determine when and by whom changes were made. He also collated the various editions of the Book of Mormon down to the present to see what sorts of changes had been made through time. [ citation needed ]

Thus far, Professor Skousen has published complete transcripts of the Original and Printer's Manuscripts, [ 75 ] as well as a six-volume analysis of textual variants. [ 76 ] Still in preparation are a history of the text, and a complete electronic collation of editions and manuscripts (volumes 3 and 5 of the Project, respectively). Yale University has in the meantime published an edition of the Book of Mormon which incorporates all aspects of Skousen's research. [ 77 ]

Textual criticism of the Hebrew Bible compares manuscript versions of sources including the Dead Sea Scrolls, the Septuagint, the Samaritan Pentateuch, and the Masoretic Text. As in the New Testament, changes, corruptions, and erasures have been found, particularly in the Masoretic texts.
This is ascribed to the fact that early soferim (scribes) did not treat copying errors with the same care as later scribes did. [ 78 ] There are three separate new editions of the Hebrew Bible currently in development: Biblia Hebraica Quinta, the Hebrew University Bible, and the Hebrew Bible: A Critical Edition (formerly known as the Oxford Hebrew Bible). Biblia Hebraica Quinta is a diplomatic edition based on the Leningrad Codex. The Hebrew University Bible is also diplomatic, but based on the Aleppo Codex. The Hebrew Bible: A Critical Edition is an eclectic edition. [ 79 ]

Early New Testament texts include more than 5,800 Greek manuscripts, 10,000 Latin manuscripts and 9,300 manuscripts in various other ancient languages (including Syriac, Slavic, Ethiopic and Armenian). The manuscripts contain approximately 300,000 textual variants, most of them involving changes of word order and other comparative trivialities. [ 80 ] [ 81 ] According to Westcott and Hort: With regard to the great bulk of the words of the New Testament, as of most other ancient writings, there is no variation or other ground of doubt, and therefore no room for textual criticism... The proportion of words virtually accepted on all hands as raised above doubt is very great, not less, on a rough computation, than seven eighths of the whole. The remaining eighth therefore, formed in great part by changes of order and other comparative trivialities, constitutes the whole area of criticism. [ 81 ]

Since the 18th century, Protestant New Testament scholars have argued that textual variants by themselves have not affected doctrine. Evangelical theologian D. A. Carson has claimed: "nothing we believe to be doctrinally true, and nothing we are commanded to do, is in any way jeopardized by the variants. This is true for any textual tradition. The interpretation of individual passages may well be called in question; but never is a doctrine affected." [ 80 ] [ 82 ]

Historically, attempts have been made to sort New Testament manuscripts into one of three or four theorized text-types (also styled unhyphenated: text types), or into looser clusters. However, the sheer number of witnesses presents unique difficulties, chiefly in that it makes stemmatics in many cases impossible, because many copyists used two or more different manuscripts as sources. Consequently, New Testament textual critics have adopted eclecticism. As of 2017 [update], the most common division distinguishes the Alexandrian, Western, and Byzantine text-types.

Textual criticism of the Quran is a nascent area of study, [ 90 ] [ 91 ] as Muslims have historically disapproved of higher criticism being applied to the Quran. [ 92 ] In some countries textual criticism can be seen as apostasy. [ 93 ] Amongst Muslims, the original Arabic text is commonly considered to be the final revelation, revealed to Muhammad from AD 610 to his death in 632. In Islamic tradition, the Quran was memorised and written down by Muhammad's companions and copied as needed. [ citation needed ] The Quran is also believed to have been transmitted orally at some stage. Differences that affected the meaning were noted, and around AD 650 Uthman began a process of standardization, presumably to rid the Quran of these differences. Uthman's standardization did not eliminate the textual variants. [ 94 ]

In the 1970s, 14,000 Quran fragments were discovered in the Great Mosque of Sana'a, the Sana'a manuscripts. About 12,000 fragments belonged to 926 copies of the Quran; the other 2,000 were loose fragments.
The oldest known copy of the Quran so far belongs to this collection: it dates to the late seventh or eighth century. [ citation needed ]

The German scholar Gerd R. Puin has been investigating these Quran fragments for years. His research team made 35,000 microfilm photographs of the manuscripts, which he dated to the early part of the eighth century. Puin has not published the entirety of his work, but noted unconventional verse orderings, minor textual variations, and rare styles of orthography. He also suggested that some of the parchments were palimpsests which had been reused. Puin believed that this implied an evolving text as opposed to a fixed one. [ 89 ]

In an article in the 1999 Atlantic Monthly, [ 89 ] Gerd Puin is quoted as saying: My idea is that the Koran is a kind of cocktail of texts that were not all understood even at the time of Muhammad. Many of them may even be a hundred years older than Islam itself. Even within the Islamic traditions there is a huge body of contradictory information, including a significant Christian substrate; one can derive a whole Islamic anti-history from them if one wants. The Koran claims for itself that it is "mubeen", or "clear", but if you look at it, you will notice that every fifth sentence or so simply doesn't make sense. Many Muslims—and Orientalists—will tell you otherwise, of course, but the fact is that a fifth of the Koranic text is just incomprehensible. This is what has caused the traditional anxiety regarding translation. If the Koran is not comprehensible—if it can't even be understood in Arabic—then it's not translatable. People fear that. And since the Koran claims repeatedly to be clear but obviously is not—as even speakers of Arabic will tell you—there is a contradiction. Something else must be going on. [ 89 ]

The Canadian Islamic scholar Andrew Rippin has likewise stated: The impact of the Yemeni manuscripts is still to be felt. Their variant readings and verse orders are all very significant. Everybody agrees on that. These manuscripts say that the early history of the Koranic text is much more of an open question than many have suspected: the text was less stable, and therefore had less authority, than has always been claimed. [ 89 ]

For these reasons, some scholars, especially those associated with the Revisionist school of Islamic studies, have proposed that the traditional account of the Quran's composition needs to be discarded and a new perspective on the Quran developed. Puin, comparing Quranic studies with Biblical studies, has stated: So many Muslims have this belief that everything between the two covers of the Koran is just God's unaltered word. They like to quote the textual work that shows that the Bible has a history and did not fall straight out of the sky, but until now the Koran has been out of this discussion. The only way to break through this wall is to prove that the Koran has a history too. The Sana'a fragments will help us to do this. [ 89 ]

In 2015, some of the earliest known Quranic fragments, containing 62 out of 6,236 verses of the Quran and with a proposed dating of between approximately AD 568 and 645, were identified at the University of Birmingham. David Thomas, Professor of Christianity and Islam, commented: These portions must have been in a form that is very close to the form of the Koran read today, supporting the view that the text has undergone little or no alteration and that it can be dated to a point very close to the time it was believed to be revealed.
[ 95 ] David Thomas pointed out that the radiocarbon testing found the death date of the animal whose skin was made into the parchment, not the date when the Quran was written. Since blank parchment was often stored for years after being produced, he said the Quran could have been written as late as 650–655, during the Quranic codification under Uthman. [ 96 ] Marijn van Putten, who has published work on the idiosyncratic orthography common to all early manuscripts of the Uthmanic text type, [ 97 ] has stated and demonstrated with examples that, owing to a number of these same idiosyncratic spellings in the Birmingham fragment (Mingana 1572a + Arabe 328c), it is "clearly a descendant of the Uthmanic text type" and that it is "impossible" that it is a pre-Uthmanic copy, despite its early radiocarbon dating. [ 98 ] Similarly, Stephen J. Shoemaker has argued that it is extremely unlikely that the Birmingham manuscript was a pre-Uthmanic manuscript. [ 99 ]

Textual criticism of the Talmud has a long pre-history but has become a discipline separate from Talmudic study only recently. [ 100 ] Much of the research is in Hebrew- and German-language periodicals. [ 101 ]

Textual criticism originated in the classical era, and its development in modern times began with classics scholars, in an effort to determine the original content of texts like Plato's Republic. [ 102 ] There are far fewer witnesses to classical texts than to the Bible, so scholars can use stemmatics and, in some cases, copy-text editing. However, unlike the New Testament, where the earliest witnesses are within 200 years of the original, the earliest existing manuscripts of most classical texts were written about a millennium after their composition. All things being equal, textual scholars expect that a larger time gap between an original and a manuscript means more changes in the text. [ citation needed ]

Scientific and critical editions can be protected by copyright as works of authorship if they display sufficient creativity/originality. The mere addition of a word, or the substitution of a term with another one believed to be more correct, usually does not reach that level of originality/creativity. The notes accounting for the analysis, and for why and how such changes have been made, represent a different work that is autonomously copyrightable if the other requirements are satisfied. In the European Union, critical and scientific editions may also be protected by the relevant neighboring right that protects critical and scientific publications of public-domain works, as made possible by art. 5 of the Copyright Term Directive. As of 2011, not all EU member states had transposed art. 5 into national law. [ 109 ]

Digital textual criticism is a relatively new branch of textual criticism working with digital tools to establish a critical edition. The development of digital editing tools has allowed editors to transcribe, archive and process documents much faster than before. Some scholars claim digital editing has radically changed the nature of textual criticism; others believe the editing process has remained fundamentally the same, and digital tools have simply made aspects of it more efficient. [ citation needed ]

From its beginnings, digital scholarly editing involved developing a system for displaying both a newly "typeset" text and a history of variations in the text under review. Until about halfway through the first decade of the twenty-first century, digital archives relied almost entirely on manual transcriptions of texts.
Notable exceptions are the earliest digital scholarly editions, published in Budapest in the 1990s. These editions contained high-resolution images next to the diplomatic transcription of the texts, as well as a newly typeset text with annotations. [ 110 ] These old websites are still available at their original locations. Over the course of the early twenty-first century, producing and serving image files became much faster and cheaper, and storage space and upload times ceased to be significant issues. The next step in digital scholarly editing was the wholesale introduction of images of historical texts, particularly high-definition images of manuscripts, formerly offered only in samples. [ 111 ]

Because historical texts needed to be represented primarily through transcription, and because transcription required recording every aspect of a text that could not be captured by a single keystroke on the QWERTY keyboard, markup encoding was developed. The Text Encoding Initiative (TEI) uses encoding for the same purpose, although its particulars were designed for scholarly uses, in the hope that scholarly work on digital texts would be able to migrate from aging operating systems and digital platforms to new ones, and that standardization would lead to easy interchange of data among different projects. [ 111 ] Several computer programs and standards exist to support the work of the editors of critical editions.
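TEI, mentioned above, includes conventions for encoding a critical apparatus, among much else. A minimal sketch using Python's standard library (the element names app, lem, and rdg and the wit attribute follow TEI conventions; the readings and witness sigla are invented):

```python
import xml.etree.ElementTree as ET

# One variation unit: the lemma (accepted reading) and a rejected variant,
# each tagged with the sigla of the witnesses that support it.
app = ET.Element("app")
lem = ET.SubElement(app, "lem", wit="#A #C")
lem.text = "brown"
rdg = ET.SubElement(app, "rdg", wit="#B")
rdg.text = "browne"

print(ET.tostring(app, encoding="unicode"))
# <app><lem wit="#A #C">brown</lem><rdg wit="#B">browne</rdg></app>
```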
https://en.wikipedia.org/wiki/Stemmatics
A stenotherm (from Greek στενός stenos "narrow" and θέρμη therme "heat") is a species or living organism capable of surviving only within a narrow temperature range. This specialization is often found in organisms that inhabit environments with relatively stable temperatures, such as the deep sea or polar regions. The opposite of a stenotherm is a eurytherm , an organism that can function across a wide range of body temperatures. Eurythermic organisms are typically found in environments with significant temperature variations, such as temperate or tropical regions. The size, shape, and composition of an organism's body can influence its temperature regulation, with larger organisms generally maintaining a more stable internal temperature than smaller ones. Chionoecetes opilio is a stenothermic organism, and temperature significantly affects its biology throughout its life history, from embryo to adult. Small changes in temperature (< 2 °C) can increase the duration of egg incubation for C. opilio by a full year. [ 1 ]
https://en.wikipedia.org/wiki/Stenotherm
Stentrode ( Stent-electrode recording array ) is a small stent -mounted electrode array permanently implanted into a blood vessel in the brain, without the need for open brain surgery. It is in clinical trials as a brain–computer interface (BCI) for people with paralyzed or missing limbs, [ 1 ] who will use their neural signals or thoughts to control external devices, which currently include computer operating systems. The device may ultimately be used to control powered exoskeletons , robotic prostheses , computers or other devices. [ 2 ] The device was conceived by Australian neurologist Thomas Oxley and built by Australian biomedical engineer Nicholas Opie , who have been developing the medical implant since 2010, using sheep for testing. Human trials started in August 2019 [ 3 ] with participants who suffer from amyotrophic lateral sclerosis , a type of motor neuron disease . [ 1 ] [ 4 ] Graeme Felstead was the first person to receive the implant. [ 5 ] To date, eight patients have been implanted and are able to wirelessly control an operating system to text, email, shop and bank using direct thought through the Stentrode brain–computer interface, marking the first time a brain–computer interface has been implanted via the patient's blood vessels, eliminating the need for open brain surgery. The FDA granted breakthrough designation to the device in August 2020. [ 6 ] In January 2023, researchers demonstrated that it can record brain activity from a nearby blood vessel and be used to operate a computer, with no serious adverse events during the first year in all four patients. [ 7 ] [ 8 ] Opie began designing the implant in 2010, through Synchron, a company he founded with Oxley and cardiologist Rahul Sharma. [ 9 ] The small implant is an electrode array made of platinum electrodes embedded within a nitinol endovascular stent . The device measures about 5 cm long and a maximum of 8 mm in diameter. [ 10 ] The implant is capable of two-way communication, meaning it can both sense thoughts and stimulate movement, essentially acting as a feedback loop within the brain, which offers potential applications for helping people with spinal cord injuries control robotic prosthetic limbs with their thoughts. [ 11 ] [ 12 ] [ 13 ] The Stentrode device, developed by Opie and a team at the Vascular Bionics Laboratory within the Department of Medicine at the University of Melbourne , [ 14 ] is implanted via the jugular vein into a blood vessel next to cortical tissue near the motor cortex and sensory cortex , so open brain surgery is avoided. [ 15 ] Insertion via the blood vessel avoids directly penetrating and damaging the brain tissue. As for blood clotting concerns, Oxley says neurologists routinely use permanent stents in patients' brains to keep blood vessels open. [ 15 ] Once in place, it expands to press the electrodes against the vessel wall close to the brain, where it can record neural information and deliver currents directly to targeted areas. [ 10 ] The signals are captured and sent to a wireless antenna unit implanted in the chest, which sends them to an external receiver. The patient would need to learn how to control a computer operating system that interacts with assistive technologies. The Stentrode technology has been tested on sheep and humans, with human trials approved by the St Vincent's Hospital, Melbourne Human Research Ethics Committee, Australia, in November 2018.
[ 16 ] [ 4 ] Oxley originally expressed that he expected human clinical trials to help paralyzed people regain movement to operate a motorized wheelchair or even a powered exoskeleton. [ 10 ] However, he switched focus before beginning clinical trials. Opie and colleagues began evaluating the Stentrode for its ability to restore functional independence in patients with paralysis, by enabling them to engage in activities of daily living. [ 17 ] Clinical study results demonstrated the capability of two ALS patients, surgically fitted with a Stentrode, to learn to control texting and typing through direct thought, with the assistance of eye-tracking technology for cursor navigation. [ 18 ] They achieved this with at least 92% accuracy within 3 months of use, and continued to maintain that ability up to 9 months (as of November 2020). [ 18 ] This study helped to dispel some criticism that data rates may not be as high as those of systems requiring open brain surgery, and also pointed out the benefits of using well-established neuro-interventional techniques, which do not require any automated assistance, dedicated surgical space or expensive machinery. [ citation needed ] Selected patients are people with paralyzed or missing limbs, including people who have suffered strokes , spinal cord injuries, ALS, muscular dystrophy , and amputations. [ 15 ] [ 10 ]
https://en.wikipedia.org/wiki/Stent-electrode_recording_array
A stenter (sometimes called a tenter) [ 1 ] is a machine used in textile finishing . It serves multiple purposes, including heat setting , drying, and applying various chemical treatments . This may be achieved through the use of certain attachments such as padding or coating . [ 2 ] [ 3 ] The machine works by holding the fabric's edges while it is fed from rollers, allowing it to advance gradually while maintaining its dimensions. Eventually, the stretched sheet is pulled off at a specific speed by a second set of rollers. At the delivery end, the edges are released by the stenter pins or clamps that were holding it. [ 4 ] The earlier non-mechanized equivalent was the tenter frame . "Stenter" is derived from "tenter", which has its origins in the Latin word tendere , meaning "to stretch", passing through an intermediate French stage. The primary purpose of this machine is to stretch and dry fabric. In the past, frames used for this purpose were called "tenters", and the metal hooks employed to hold the fabric to the frame were known as "tentering hooks". [ 5 ] Tenters were primarily utilized to process woolen fabric. [ 5 ] During the cleaning process, after squeezing out excess water, crumpled woolen cloth needed to be straightened and dried under tension; otherwise, it would shrink. The wet cloth was stretched on a large wooden frame, referred to as a "tenter", and left to dry. To accomplish this, lengths of wet cloth were fastened to the tenter's perimeter using hooks (nails driven through the wood) all around the frame. This ensured that, as the cloth dried, it would maintain its shape and size. [ 5 ] Initially, the tentering process was conducted in the open air; when Higher Mill was constructed, the tenter frames were erected on the hillside to the east of the mill. However, toward the end of World War I, the process was brought indoors and utilized steam heating for drying. Over time, this technique evolved into the modern-day stenter machine. [ 5 ] The process of drying textiles is known to consume a significant amount of energy, and the stenter is a commonly used machine within the textile finishing section. [ 6 ] A variety of stenters with multiple functionalities are commercially available. [ 3 ] The stenter machine consists of heated chambers, adjustable to the width of the fabric being treated. The fabric is fed into the heated chamber and supported at either selvedge by a series of stenter pins or clamps, which help maintain its position as it is moved through the drying chambers. (Note: stenter pins are the modern equivalent of tenterhooks .) The input and output speeds of the fabric are closely controlled, as is the output width, which determines the moisture content of the fabric after drying and its dimensional stability. [ 3 ] [ 2 ] A stenter is a very useful machine in a textile process house, and the machine plays a vital role in finishing . The machine may be equipped with a padding mangle , which is useful for squeezing out excess moisture and applying various finishes [ 7 ] such as wrinkle-free, water-repellent, waterproof , anti-static , or flame-retardant treatments. Coating and dyeing applications are also possible on a stenter machine with suitable padders and coating attachments. There are various optional attachments, such as a tendamatic, weft straightener, bowing and skew cameras, or ones that provide over-feeding, edge gumming and trimming, or residual moisture control, which help increase its functionality and usage.
A stenter is primarily used for the following:
https://en.wikipedia.org/wiki/Stenter
The step response of a system in a given initial state consists of the time evolution of its outputs when its control inputs are Heaviside step functions . In electronic engineering and control theory , step response is the time behaviour of the outputs of a general system when its inputs change from zero to one in a very short time. The concept can be extended to the abstract mathematical notion of a dynamical system using an evolution parameter . From a practical standpoint, knowing how the system responds to a sudden input is important because large and possibly fast deviations from the long-term steady state may have extreme effects on the component itself and on other portions of the overall system dependent on this component. In addition, the overall system cannot act until the component's output settles down to some vicinity of its final state, delaying the overall system response. Formally, knowing the step response of a dynamical system gives information on the stability of such a system, and on its ability to reach one stationary state when starting from another. This section provides a formal mathematical definition of step response in terms of the abstract mathematical concept of a dynamical system $\mathfrak{S}$: all notations and assumptions required for the following description are listed here. For a general dynamical system, the step response is defined as follows: it is the evolution function when the control inputs (or source term , or forcing inputs ) are Heaviside functions; the notation emphasizes this concept by showing H ( t ) as a subscript. For a linear time-invariant (LTI) black box, let $\mathfrak{S} \equiv S$ for notational convenience: the step response can be obtained by convolution of the Heaviside step function control and the impulse response h ( t ) of the system itself, which for an LTI system is equivalent to just integrating the latter. Conversely, for an LTI system, the derivative of the step response yields the impulse response. However, these simple relations are not true for a non-linear or time-variant system . [ 1 ] Instead of frequency response, system performance may be specified in terms of parameters describing the time-dependence of the response. The step response can be described by quantities related to its time behavior , such as rise time, overshoot, settling time and ringing. In the case of linear dynamic systems, much can be inferred about the system from these characteristics. Below the step response of a simple two-pole amplifier is presented, and some of these terms are illustrated. In LTI systems, the function that has the steepest slew rate that doesn't create overshoot or ringing is the Gaussian function. This is because it is the only function whose Fourier transform has the same shape. This section describes the step response of a simple negative feedback amplifier shown in Figure 1. The feedback amplifier consists of a main open-loop amplifier of gain A OL and a feedback loop governed by a feedback factor β. This feedback amplifier is analyzed to determine how its step response depends upon the time constants governing the response of the main amplifier, and upon the amount of feedback used. A negative-feedback amplifier has gain given by (see negative feedback amplifier ):
$$A_{FB} = \frac{A_{OL}}{1 + \beta A_{OL}},$$
where A OL = open-loop gain, A FB = closed-loop gain (the gain with negative feedback present) and β = feedback factor .
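As a quick numerical illustration of the closed-loop gain formula above, the following sketch (all component values are made-up) shows how feedback desensitizes the gain: a 10% change in the open-loop gain moves the closed-loop gain by only about 0.01%.

```python
def closed_loop_gain(A_OL, beta):
    # A_FB = A_OL / (1 + beta * A_OL), the formula quoted above
    return A_OL / (1 + beta * A_OL)

beta = 0.01                       # feedback factor (illustrative value)
for A_OL in (1e5, 1.1e5):         # open-loop gain, then the same gain 10% higher
    print(closed_loop_gain(A_OL, beta))
# ~99.900 and ~99.909: a 10% open-loop change moves A_FB by only ~0.01%.
```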
In many cases, the forward amplifier can be sufficiently well modeled in terms of a single dominant pole of time constant τ; that is, as an open-loop gain given by
$$A_{OL}(j\omega) = \frac{A_0}{1 + j\omega\tau},$$
with zero-frequency gain $A_0$ and angular frequency ω = 2π f . This forward amplifier has as its unit step response an exponential approach from 0 toward the new equilibrium value $A_0$. The one-pole amplifier's transfer function leads to the closed-loop gain
$$A_{FB}(j\omega) = \frac{A_0}{1+\beta A_0} \cdot \frac{1}{1 + j\omega\,\dfrac{\tau}{1+\beta A_0}}.$$
This closed-loop gain is of the same form as the open-loop gain: a one-pole filter. Its step response is of the same form: an exponential approach toward the new equilibrium value. But the time constant of the closed-loop step function is τ / (1 + β A 0 ), so it is faster than the forward amplifier's response by a factor of 1 + β A 0 :
$$S(t) = \frac{A_0}{1+\beta A_0}\left(1 - e^{-t(1+\beta A_0)/\tau}\right).$$
As the feedback factor β is increased, the step response will get faster, until the original assumption of one dominant pole is no longer accurate. If there is a second pole, then as the closed-loop time constant approaches the time constant of the second pole, a two-pole analysis is needed. In the case that the open-loop gain has two poles (two time constants , τ 1 , τ 2 ), the step response is a bit more complicated. The open-loop gain is given by
$$A_{OL}(j\omega) = \frac{A_0}{(1 + j\omega\tau_1)(1 + j\omega\tau_2)},$$
with zero-frequency gain $A_0$ and angular frequency ω = 2π f . The two-pole amplifier's transfer function leads to the closed-loop gain
$$A_{FB}(j\omega) = \frac{A_0}{(1 + j\omega\tau_1)(1 + j\omega\tau_2) + \beta A_0}.$$
The time dependence of the amplifier is easy to discover by switching variables to s = j ω, whereupon the gain becomes
$$A_{FB}(s) = \frac{A_0}{s^2\,\tau_1\tau_2 + s\,(\tau_1 + \tau_2) + 1 + \beta A_0}.$$
The poles of this expression (that is, the zeros of the denominator) occur at
$$2s = -\left(\frac{1}{\tau_1} + \frac{1}{\tau_2}\right) \pm \sqrt{\left(\frac{1}{\tau_1} - \frac{1}{\tau_2}\right)^2 - \frac{4\beta A_0}{\tau_1\tau_2}}\,,$$
which shows that for large enough values of βA 0 the square root becomes the square root of a negative number, that is, the pole positions are complex conjugate numbers, either s + or s − ; see Figure 2:
$$s_{\pm} = -\rho \pm j\mu,$$
with
$$\rho = \frac{1}{2}\left(\frac{1}{\tau_1} + \frac{1}{\tau_2}\right)$$
and
$$\mu = \frac{1}{2}\sqrt{\frac{4\beta A_0}{\tau_1\tau_2} - \left(\frac{1}{\tau_1} - \frac{1}{\tau_2}\right)^2}\,.$$
Using polar coordinates, the magnitude of the radius to the roots is (Figure 2)
$$|s| = |s_{\pm}| = \sqrt{\rho^2 + \mu^2},$$
and the angular coordinate φ is given by
$$\cos\varphi = \frac{\rho}{|s|}, \qquad \sin\varphi = \frac{\mu}{|s|}.$$
Tables of Laplace transforms show that the time response of such a system is composed of combinations of the two functions $e^{-\rho t}\sin(\mu t)$ and $e^{-\rho t}\cos(\mu t)$, which is to say, the solutions are damped oscillations in time. In particular, the unit step response of the system is: [ 2 ]
$$S(t) = \frac{A_0}{1+\beta A_0}\left(1 - e^{-\rho t}\,\frac{\sin(\mu t + \varphi)}{\sin\varphi}\right),$$
which simplifies to
$$S(t) = 1 - e^{-\rho t}\,\frac{\sin(\mu t + \varphi)}{\sin\varphi}$$
when A 0 tends to infinity and the feedback factor β is one. Notice that the damping of the response is set by ρ, that is, by the time constants of the open-loop amplifier. In contrast, the frequency of oscillation is set by μ, that is, by the feedback parameter through β A 0 . Because ρ is half the sum of the reciprocals of the time constants, it is interesting to notice that ρ is dominated by the shorter of the two. Figure 3 shows the time response to a unit step input for three values of the parameter μ. It can be seen that the frequency of oscillation increases with μ, but the oscillations are contained between the two asymptotes set by the exponentials [ 1 − exp(− ρt ) ] and [ 1 + exp(− ρt ) ]. These asymptotes are determined by ρ and therefore by the time constants of the open-loop amplifier, independent of feedback. The phenomenon of oscillation about the final value is called ringing . The overshoot is the maximum swing above final value, and clearly increases with μ. Likewise, the undershoot is the minimum swing below final value, again increasing with μ. The settling time is the time for departures from final value to sink below some specified level, say 10% of final value.
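The closed form above can be checked numerically. This sketch (the time constants and loop gain are made-up values) simulates the normalized two-pole unit step response and compares the measured overshoot with exp(−πρ/μ), the prediction discussed in the next paragraph.

```python
import numpy as np

# Illustrative (made-up) open-loop time constants and loop gain.
tau1, tau2, beta, A0 = 1e-3, 1e-5, 1.0, 100.0

rho = 0.5 * (1 / tau1 + 1 / tau2)                 # damping rate of the pole pair
mu = 0.5 * np.sqrt(4 * beta * A0 / (tau1 * tau2)
                   - (1 / tau1 - 1 / tau2) ** 2)  # ringing (angular) frequency

t = np.linspace(0, 10 / rho, 200_000)
# Normalized unit step response; algebraically equal to
# 1 - exp(-rho*t) * sin(mu*t + phi) / sin(phi).
S = 1 - np.exp(-rho * t) * (np.cos(mu * t) + (rho / mu) * np.sin(mu * t))

print("measured overshoot       :", S.max() - 1)
print("predicted exp(-pi*rho/mu):", np.exp(-np.pi * rho / mu))
# Both print ~0.16 for these values; the ringing grows as beta*A0 grows.
```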
The dependence of settling time upon μ is not obvious, and the approximation of a two-pole system probably is not accurate enough to make any real-world conclusions about feedback dependence of settling time. However, the asymptotes [ 1 − exp(− ρt ) ] and [ 1 + exp(− ρt ) ] clearly impact settling time, and they are controlled by the time constants of the open-loop amplifier, particularly the shorter of the two time constants. That suggests that a specification on settling time must be met by appropriate design of the open-loop amplifier. The two major conclusions from this analysis are that the feedback sets the amplitude of the ringing about the final value (through μ), while the open-loop amplifier's time constants set the time scale of settling (through ρ). As an aside, it may be noted that real-world departures from this linear two-pole model occur due to two major complications: first, real amplifiers have more than two poles, as well as zeros; and second, real amplifiers are nonlinear, so their step response changes with signal amplitude. How overshoot may be controlled by appropriate parameter choices is discussed next. Using the equations above, the amount of overshoot can be found by differentiating the step response and finding its maximum value. The result for the maximum step response S max is: [ 3 ]
$$S_{\max} = 1 + \exp\left(-\pi\,\frac{\rho}{\mu}\right).$$
The final value of the step response is 1, so the exponential is the actual overshoot itself. It is clear the overshoot is zero if μ = 0, which is the condition
$$\frac{4\beta A_0}{\tau_1\tau_2} = \left(\frac{1}{\tau_1} - \frac{1}{\tau_2}\right)^2.$$
This quadratic is solved for the ratio of time constants by setting x = ( τ 1 / τ 2 ) 1/2 with the result
$$x = \sqrt{\beta A_0} + \sqrt{\beta A_0 + 1}.$$
Because β A 0 ≫ 1, the 1 in the square root can be dropped, and the result is
$$\frac{\tau_1}{\tau_2} \approx 4\beta A_0.$$
In words, the first time constant must be much larger than the second. To be more adventurous than a design allowing for no overshoot, we can introduce a factor α in the above relation,
$$\frac{\tau_1}{\tau_2} = \alpha\,\beta A_0,$$
and let α be set by the amount of overshoot that is acceptable. Figure 4 illustrates the procedure. Comparing the top panel (α = 4) with the lower panel (α = 0.5) shows lower values for α increase the rate of response, but increase overshoot. The case α = 2 (center panel) is the maximally flat design that shows no peaking in the Bode gain vs. frequency plot . That design has the rule-of-thumb built-in safety margin to deal with non-ideal realities like multiple poles (or zeros), nonlinearity (signal amplitude dependence) and manufacturing variations, any of which can lead to too much overshoot. The adjustment of the pole separation (that is, setting α) is the subject of frequency compensation , and one such method is pole splitting . The amplitude of ringing in the step response in Figure 3 is governed by the damping factor exp(− ρt ). That is, if we specify some acceptable step response deviation from final value, say Δ, that is,
$$|S(t) - 1| \leq \Delta,$$
this condition is satisfied regardless of the value of β A OL provided the time is longer than the settling time, say t S , given by: [ 4 ]
$$t_S = \frac{1}{\rho}\,\ln\!\left(\frac{1}{\Delta}\right) \approx 2\tau_2 \ln\!\left(\frac{1}{\Delta}\right),$$
where the approximation τ 1 ≫ τ 2 is applicable because of the overshoot control condition, which makes τ 1 = αβA OL τ 2 . Often the settling time condition is referred to by saying the settling period is inversely proportional to the unity gain bandwidth, because 1/(2 π τ 2 ) is close to this bandwidth for an amplifier with typical dominant pole compensation . However, this result is more precise than this rule of thumb . As an example of this formula, if Δ = 1/e 4 ≈ 1.8%, the settling time condition is t S = 8 τ 2 . [ 5 ] [ 6 ] [ 7 ] This method uses significant points of the step response. There is no need to guess tangents to the measured signal.
The equations are derived using numerical simulations, determining some significant ratios and fitting parameters of nonlinear equations; see also [ 8 ]. Next, the choice of pole ratio τ 1 / τ 2 is related to the phase margin of the feedback amplifier. [ 9 ] The procedure outlined in the Bode plot article is followed. Figure 5 is the Bode gain plot for the two-pole amplifier in the range of frequencies up to the second pole position. The assumption behind Figure 5 is that the frequency f 0 dB lies between the lowest pole at f 1 = 1/(2πτ 1 ) and the second pole at f 2 = 1/(2πτ 2 ). As indicated in Figure 5, this condition is satisfied for values of α ≥ 1. Using Figure 5, the frequency (denoted by f 0 dB ) is found where the loop gain β A 0 satisfies the unity gain or 0 dB condition, as defined by
$$|\beta A_{OL}(f_{0\,\mathrm{dB}})| = 1.$$
The slope of the downward leg of the gain plot is 20 dB/decade; for every factor of ten increase in frequency, the gain drops by the same factor, so
$$f_{0\,\mathrm{dB}} = \beta A_0\, f_1.$$
The phase margin is the departure of the phase at f 0 dB from −180°. Thus, the margin is
$$\varphi_m = 180^\circ - \arctan\left(\frac{f_{0\,\mathrm{dB}}}{f_1}\right) - \arctan\left(\frac{f_{0\,\mathrm{dB}}}{f_2}\right).$$
Because f 0 dB / f 1 = βA 0 ≫ 1, the term in f 1 is 90°. That makes the phase margin
$$\varphi_m = 90^\circ - \arctan\left(\frac{f_{0\,\mathrm{dB}}}{f_2}\right) = 90^\circ - \arctan\left(\frac{1}{\alpha}\right),$$
where the last step uses f 2 = α f 0 dB , which is equivalent to the overshoot condition τ 1 = αβA 0 τ 2 . In particular, for case α = 1, φ m = 45°, and for α = 2, φ m = 63.4°. Sansen [ 10 ] recommends α = 3, φ m = 71.6° as a "good safety position to start with". If α is increased by shortening τ 2 , the settling time t S also is shortened. If α is increased by lengthening τ 1 , the settling time t S is little altered. More commonly, both τ 1 and τ 2 change, for example if the technique of pole splitting is used. As an aside, for an amplifier with more than two poles, the diagram of Figure 5 still may be made to fit the Bode plots by making f 2 a fitting parameter, referred to as an "equivalent second pole" position. [ 11 ]
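A small numerical check of the phase-margin expression above, reproducing the α values quoted in the text:

```python
import math

def phase_margin_deg(alpha):
    # With f2 = alpha * f_0dB and the first pole contributing ~90 degrees:
    # phi_m = 90 - arctan(f_0dB / f2) = 90 - arctan(1 / alpha)
    return 90.0 - math.degrees(math.atan(1.0 / alpha))

for alpha in (1, 2, 3):
    print(alpha, round(phase_margin_deg(alpha), 1))   # 45.0, 63.4, 71.6
```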
https://en.wikipedia.org/wiki/Step_response
Stephan Körner , FBA (26 September 1913 – 17 August 2000 [ a ] ) was a British philosopher who specialised in the work of Kant , the study of concepts , and in the philosophy of mathematics . Born to a Jewish family in what would soon become Czechoslovakia , Körner left that country to avoid certain death at the hands of the Nazis after the German occupation in 1939, and came to the United Kingdom as a refugee, where he began his study of philosophy; by 1952 he was a professor of philosophy at the University of Bristol , taking up a second professorship at Yale in 1970. He was married to Edith Körner , and was the father of the mathematician Thomas Körner and the biochemist, writer and translator Ann M. Körner. Körner was born in Ostrava , then part of Austria-Hungary , on 26 September 1913. [ 2 ] [ 3 ] [ 4 ] He was the only son of a teacher of classics and his wife. His father had studied classics in Vienna, while at the same time winning prizes in mathematics to supplement his meagre income (a fellow student was a certain Leon Trotsky , who was frequently asked, "When is that great revolution that you are always talking about going to happen?"). Despite an early wish to study philosophy, Stephan was dissuaded by his father, who feared that his son would become a penniless academic; he was persuaded to study something more practical, and took his degree in law at Charles University in Prague, completing it in 1935. (He practised law only briefly but retained a strong interest, attending seminars at Yale Law School after his appointment as a visiting professor at Yale in the 1970s.) From 1936 to 1939 he carried out his military service , serving as an officer in the cavalry . After German troops moved into the country in March 1939, a schoolmate of his, an officer in the SS, warned the Jewish family that life in German-occupied Moravia was no longer safe. His parents refused to leave, believing that they had nothing to fear since they were not communists. His father died in 1939, most likely by his own hand, during deportation to Nisko , and his mother was murdered in 1941 after deportation to Minsk Ghetto , Belarus, on Transport F. His first cousin Ruth Maier was one of many other family members who were murdered at Auschwitz, after her arrest in and deportation from Norway in 1944. She is remembered as "Norway's Anne Frank ". Stephan travelled with two friends, Otto Eisner and Willi Haas, through Poland to the United Kingdom, arriving as a refugee just as the Second World War began. In Britain, he rejoined the army of the émigré Czechoslovak government; he saw service with them during the Battle of France in 1940 before returning to Britain. He received a small grant to continue his education at the University of Cambridge , where he studied philosophy under R. B. Braithwaite at Trinity Hall ; among others, he was taught by Ludwig Wittgenstein . Professor Braithwaite was exceedingly kind to his refugee student. On one occasion, Braithwaite invited him to his home saying, "Someone has given me a Hungarian salami; would you come to my house and show me how to eat it?" Such invitations were welcome since Stephan made little money as a waiter in a Greek restaurant and survived on "one fourpenny meat pie per day." In 1943 he was recalled to the Czechoslovak army, serving as a sergeant in the infantry during the push through France and into Germany.
He would later say that he survived the fighting outside Dunkirk due to Dickens : recuperating in hospital from a minor wound, he was refused discharge by a doctor until he had had another day to finish his novel. As a result, he missed the heavy fighting the next day, when many of his close friends were killed. He was awarded his PhD in 1944; shortly afterwards, he married Edith Laner ("Diti"; born Edita Leah Löwy; in 1938/39, her father changed the family name to Laner in a vain attempt to deceive the Nazis into thinking that he and his family were not Jewish), a fellow Czech refugee, whom he had met in London in 1941. He remained in the Czechoslovak army until 1946. After his army service, he worked at Cardiff University , tutoring students in German. He took up his first academic post in 1947, lecturing in philosophy at the University of Bristol . In 1952, he was appointed to the sole professorship and chairmanship of his department, which he would hold until 1979. In 1965 and 1966 he was Dean of the Faculty of Arts, and from 1968 to 1971 a Pro-Vice-Chancellor. During this time he worked as a visiting professor of philosophy at Brown University in 1957, Yale University in 1960, the University of North Carolina, Chapel Hill in 1963, the University of Texas at Austin in 1964 and Indiana University in 1967. In 1970 he returned to Yale with a tenured visiting professorship in philosophy, holding it jointly with the Bristol post for nine years, and then as his sole post from 1979 to 1984. [ 3 ] Bristol appointed him a professor emeritus on his retirement, and he subsequently held a visiting professorship at the University of Graz from 1980 to 1989. He received honorary doctorates from the Queen's University Belfast in 1981, and Graz in 1984, where he was appointed to an honorary professorship in 1986. Bristol appointed him an honorary fellow in 1987. [ 5 ] Trinity Hall bestowed upon him the same honour in 1991. He was President of the British Society for the Philosophy of Science in 1965, the Aristotelian Society in 1967, the International Union of History and Philosophy of Science in 1969, and the Mind Association in 1973. He edited the journal Ratio from 1961 to 1980. He also served on the editorial board of Erkenntnis from 1974 to 1999. In 1967 he was elected a Fellow of the British Academy . In 1955 he published his first two major works. Kant , an introduction for non-specialists to Immanuel Kant 's work, went through several impressions over the next three decades and is still regarded as a minor classic in the field; it was one of the first post-war books to reintroduce Kant to the English-speaking world. The fact that, in this and later works, Körner put forward the controversial view that Kant's categories apply directly to ordinary empirical science was little noticed by a public grateful for any short work covering all of Kant's philosophy. [ 6 ] The second, Conceptual Thinking , was a more specialised work, examining the way in which people deal with "exact" and "inexact" concepts – exact concepts, like logical constructs or mathematical ideas, could be clearly defined, whilst inexact concepts, like 'colour', would always have unclear boundaries. In 1957 he expanded on this, editing Observation and Interpretation , a collection of papers arising from a seminar which brought together both philosophers and physicists to discuss these questions.
His work led him into the philosophy of mathematics , on which he published a textbook in 1960, Philosophy of Mathematics, which took as its central theme the question of how applied mathematics can be metaphysically possible. He also wrote on the philosophy of science in Experience and Theory (1966), including work on theoretical incommensurability , the concept that two directly contradictory theories – such as classical mechanics and relativity – can coexist, without either being specifically "wrong". In 1969 he published What is Philosophy? , and in 1970 Categorial Frameworks , attempts to put forward his views to a general audience. Experience and Conduct , published in 1979, discussed how we evaluate and develop our own preferences and value systems; his final work, Metaphysics: Its Structure and Function (1984), was a wide-ranging study of metaphysics . Körner was remembered by colleagues and pupils as "extraordinarily handsome with an astonishing Czech accent ... [with] a certain sense of grandeur about him". [ 7 ] He retained an old-fashioned sense of manners, formal but courteous, as well as a formal appearance. Even on the hottest days, he was never seen without a tie and jacket. He lived a happy and contented home life; he and Edith were remembered by friends as exceptionally close and devoted to one another. In their early married life they fitted the conventional academic mould – whilst he worked incessantly at his studies, she raised the family, looked after the house, managed the finances – but after the children had grown and left she worked at her own career, eventually becoming the chairman of the magistrates' court in Bristol and overseeing the redevelopment of the National Health Service 's information-management system. Edith managed their lives, as with everything else, in a practical, organised and forceful way, ensuring that he could work as freely as possible; he was fond of saying that "Diti does everything, but leaves the philosophy to me". [ 2 ] The couple had two children – Thomas , a professor of mathematics, and Ann, a biochemist, writer and translator, [ 8 ] who married Sidney Altman (a joint winner of the Nobel Prize in Chemistry in 1989). Following Edith's diagnosis with advanced cancer in the summer of 2000, they chose to die together in August of that year, on the 17th. [ 9 ] [ 10 ] [ 11 ] [ 12 ] They were survived by both children and by four grandchildren. [ 13 ] For more complete publication details, see Körner's PhilPapers entry or the 1987 bibliography.
https://en.wikipedia.org/wiki/Stephan_Körner
Space Operations and Astronaut Training, German Space Agency (DLR) Stephan Ulamec is an Austrian geophysicist , born in Salzburg on January 27, 1966, who has published more than 100 articles in peer-reviewed journals and participated in several space missions and payloads operated by various space agencies. [ 1 ] [ 2 ] He works at the German Aerospace Center ( Deutsches Zentrum für Luft- und Raumfahrt , DLR) in Cologne . [ 3 ] [ 4 ] He regularly lectures on aerospace engineering at the University of Applied Sciences FH Aachen . [ 5 ] [ 6 ] His work focuses on the exploration of small bodies in the Solar System ( asteroids and comets). [ 7 ] [ 8 ] Ulamec studied geophysics at the Karl-Franzens University in Graz (Austria) as a student of Prof. Siegfried J. Bauer . [ 9 ] He finished his PhD on " Acoustic and Electrical Methods for the Exploration of Atmospheres and Surfaces, with Application to Saturn's Moon Titan " in 1991. [ 10 ] From 1991 to 1993 he worked as a research fellow at the European Space Agency (ESA), specifically at the European Space Research and Technology Centre (ESTEC) in Noordwijk , in the Netherlands . Since 1994 he has been at the Microgravity User Support Center (MUSC), which is part of DLR Space Operations and Astronaut Training (SOAT). [ 11 ] [ 12 ] He has made several presentations at the International Astronautical Congress (IAC). [ 13 ] [ 14 ] Stephan Ulamec was the project manager of the Rosetta lander Philae , which successfully landed on comet 67P/Churyumov-Gerasimenko in 2014. [ 15 ] [ 16 ] He was also Payload Manager of MASCOT , a lander built jointly by the French space agency ( CNES ) and DLR, which was delivered by the JAXA Hayabusa2 spacecraft to asteroid (162173) Ryugu in 2018. [ 17 ] He is one of the two lead scientists (co-principal investigators) of the French-German MMX rover IDEFIX, together with Dr Patrick Michel . [ 18 ] The rover is to be carried to the Martian moon Phobos by Martian Moons eXploration (MMX), a JAXA (Japan Aerospace eXploration Agency) mission to be launched in 2026. [ 19 ] [ 20 ] [ 21 ] He is also part of the Science Management Board for the ESA Hera mission, to be launched in 2024 on a SpaceX Falcon 9 rocket, aimed at a rendezvous with, and detailed characterisation of, the asteroid (65803) Didymos and its natural satellite Dimorphos , and at analysing the artificial impact created by the NASA probe DART in September 2022. [ 22 ] [ 23 ] He is involved in NEO-MAPP , a European Union Horizon 2020 project to study mitigation and characterisation techniques for potentially hazardous asteroids . [ 24 ] [ 25 ] From January 2020 to December 2023 he chaired the ESA Solar System and Exploration Working Group (SSEWG) and was a member of the Space Science Advisory Committee (SSAC). [ 26 ] [ 27 ]
https://en.wikipedia.org/wiki/Stephan_Ulamec
Stephanie Lee Brock is an American chemist who is professor of inorganic chemistry at Wayne State University . Her research considers transition metal pnictide and chalcogenide nanomaterials. She is a Fellow of the American Association for the Advancement of Science and the American Chemical Society . Brock completed her undergraduate degree in chemistry at the University of Washington . She was a graduate student at the University of California, Davis , where she investigated structure-property relationships in pnictide oxide compounds under the supervision of Susan M. Kauzlarich . [ 1 ] [ 2 ] During her doctorate she made use of powder diffraction and magnetic susceptibility measurements. [ 1 ] Brock was a postdoctoral research associate at the University of Connecticut, where she worked with Steven Suib on manganese oxide nanocrystalline materials. [ 3 ] In 1999, Brock joined Wayne State University as an assistant professor in the department of chemistry and was promoted to full professor in 2009. [ 4 ] Her research considers pnictides, pnictide oxides and chalcogenides . In particular, Brock is interested in the controlled growth of functional nanoparticles and nanostructures. She demonstrated that manganese arsenide nanoparticles have magnetic properties that depend on their dopant concentration, and offer hope for magnetic refrigeration. [ 5 ] [ 6 ] Brock has also realised sol–gel processes that allow the formation of functional chalcogenide self-assemblies. The gel-like cadmium selenide (CdSe) and zinc sulfide (ZnS) nanoparticle assemblies are akin to a cross-linked polymer network, and can be supercritically dried to form porous aerogels. The aerogels have high surface areas and form a conductive network with the optical properties of the nanoparticles themselves. [ 5 ] Brock is responsible for the development of electron microscopy at Wayne State University . [ 7 ] She serves as Deputy Editor of the American Chemical Society journal ACS Materials . [ 4 ]
https://en.wikipedia.org/wiki/Stephanie_Brock
Stephen C. Harrison is professor of biological chemistry and molecular pharmacology, professor of pediatrics, and director of the Center for Molecular and Cellular Dynamics at Harvard Medical School , head of the Laboratory of Molecular Medicine at Boston Children's Hospital , and an investigator of the Howard Hughes Medical Institute . [ 1 ] He received his B.A. in chemistry and physics from Harvard in 1963, and was then a Henry fellow at the MRC Laboratory of Molecular Biology at Cambridge . In 1967, he received his Ph.D. in biophysics from Harvard, was a research fellow there as well as a junior fellow in the Society of Fellows, and joined the Harvard faculty in 1971. His wide-ranging studies of protein structure have contributed insights to viral architecture, DNA–protein recognition, and cellular signaling . Harrison has made important contributions to structural biology , most notably by determining and analyzing the structures of viruses and viral proteins, by crystallographic analysis of protein–DNA complexes, [ 2 ] [ 3 ] and by structural studies of protein-kinase switching mechanisms. [ 4 ] The initiator of high-resolution virus crystallography, he has moved from his early work on tomato bushy stunt virus [ 5 ] (1978) to the study of more complex human pathogens, including the capsid of human papillomavirus , [ 6 ] the envelope of dengue virus , [ 7 ] and several components of HIV . [ 8 ] He has also turned some of his research attention to even more complex assemblies, such as clathrin-coated vesicles . [ 9 ] He led the Structural Biology team at the Center for HIV/AIDS Vaccine Immunology (CHAVI) when it received National Institute of Allergy and Infectious Diseases (NIAID) funding [ 10 ] of around $300 million to address key immunological roadblocks to HIV vaccine development and to design, develop and test novel HIV vaccine candidates. He is a member of the American Academy of Arts and Sciences , [ 11 ] the National Academy of Sciences , [ 12 ] the American Philosophical Society , [ 13 ] the European Molecular Biology Organization , the American Crystallographic Association and the American Association for the Advancement of Science . Harrison has been married since 2013 to Tomas Kirchhausen , who is currently a professor at Harvard Medical School . [ 18 ] [ 19 ] [ 20 ] They first met in 1978 at a small dinner hosted by Ada Yonath . In the fall of 1979, Tom moved to Cambridge, MA, to work with Harrison, and the two have been in a relationship ever since. [ 21 ]
https://en.wikipedia.org/wiki/Stephen_C._Harrison
Stephen Lee ( Chinese : 李中漢 ; pinyin : Lǐ Zhōnghàn ; born 25 October 1955) is an American chemist. He is the son of Tsung-Dao Lee , the winner of the 1957 Nobel Prize in Physics . He is currently a professor at Cornell University . [ 1 ] Lee attended the International School of Geneva , Switzerland and Yale University , from which he graduated with a BA in 1978. He later received his PhD from the University of Chicago in 1985. In 1993, Lee received the MacArthur Award for his work in the field of physics and chemistry. [ citation needed ] In addition, he has received an award from the Alfred P. Sloan Foundation for his continued research. In 1999, Lee joined Cornell University as a professor of solid state chemistry in the chemistry and chemical biology department from the University of Michigan , where he had been associate professor of chemistry since 1993. He currently continues his teaching career at Cornell, where he instructs students in (honors) general chemistry and introduction to chemistry courses. During the past 10 years, [ clarification needed ] Lee has devoted his summers to helping incoming freshmen learn basic chemistry to prepare them for the academic year. This has been considered part of Lee's philanthropic work, as he teaches these summer courses pro bono . [ citation needed ] His current research involves developing stronger porous solids in which all the bonds of the porous host are covalent in character. Lee is also researching ways to introduce cross-linkable guests (such as di-isocyanides or disilyltriflates) which will react with nucleophilic groups, leading to a fully covalent organic porous solid. He also hopes to develop long-range order in intermetallic phases, examining noble metal alloys whose unit cell dimensions range from just a few to almost 10⁴ Å. [ 1 ] Stephen Lee was born to 1957 Nobel Prize winner in Physics Tsung-Dao Lee and Hui-Chun Jeannette Chin ( Chinese : 秦惠莙 ; pinyin : Qín Huìjūn ), who died in 1996. Lee has one brother, James Lee ( Chinese : 李中清 ; pinyin : Lǐ Zhōngqīng ; born 1952), who is the dean of the School of Humanities and Social Science at the Hong Kong University of Science and Technology and chair professor of the Division of Social Science at the same university.
https://en.wikipedia.org/wiki/Stephen_Lee_(chemist)
Stephen O'Brien (born 27 February 1991) is an Irish Gaelic footballer who plays for the Kenmare Shamrocks club and, since 2014, at senior level for the Kerry county team. [ 1 ] He joined the county's under-21 team in 2011. Kerry sustained a 22-point loss to Cork in the Munster final. He was still underage in 2012 and, for the second year in a row, he was on the losing side in a Munster final to Cork, this time after extra time. He made his debut for Kerry in the 2014 Munster Senior Football Championship semi-final against Clare and scored two points. [ 2 ] He then lined out in a first Munster Senior Football Championship final, against Cork ; he scored a point in a 0-24 to 0-12 win that brought a first Munster title. He missed out on Kerry's win over Galway in the All-Ireland quarter-final. He was back in the starting line-up for the semi-final with Mayo , scoring a point in a 1-16 each draw. An injury sustained while out racing on his quad caused him to miss the 2014 All-Ireland Senior Football Championship semi-final replay win over Mayo . [ 2 ] However, he started the 2014 All-Ireland Senior Football Championship Final at right half forward. [ 3 ] Retirements brought new opportunities for O'Brien in 2019. [ 2 ] He scored Kerry's first goal against Dublin in five games during a 2019 National Football League fixture in Tralee . [ 2 ] In the 56th minute of the 2019 All-Ireland Senior Football Championship semi-final against Tyrone , O'Brien scored his fifth championship goal after intercepting a pass from Kieran McGeary , distributing the ball and running the length of the field. [ 2 ]
https://en.wikipedia.org/wiki/Stephen_O'Brien_(Kerry_Gaelic_footballer)
Stephen Webb (born February 25, 1963) is a physicist and author of numerous popular science and math books, as well as academic publications. Webb was educated at Bristol University (BSc (Hons) Physics – First Class) and, as a graduate student, attended Manchester University (PhD – Theoretical Particle Physics). Webb is currently on the academic staff at the University of Portsmouth , and presents numerous science-related public talks and academic lectures. [ 1 ] [ 2 ] In 2018, Webb was a featured science speaker at the annual TED conference . [ 3 ] Webb has worked at the University of Cardiff (Physics Department; 1993–95), the University of Sheffield (Math & Statistics; 1995–98), Loughborough University (Math & Sciences; 1998–99), Northumbria University (Information Sciences; 1999–2000), The Open University (2000–2006) and the University of Portsmouth (2006–current). [ 4 ] In addition, Webb has been a Member of the Institute of Physics (M. Inst. P.), a Chartered Physicist (C. Phys.), a Senior Fellow of the Higher Education Academy (SFHEA), a member of an international editorial board (Springer S&F Series), a member of the UK SETI Research Network, and a project lead for the UK Advance HE Collaborative Award in Teaching Excellence (CATE 2022). [ 4 ] Listings of many more books and publications by Stephen Webb are available on GoodReads , Webb's website and elsewhere.
https://en.wikipedia.org/wiki/Stephen_Webb_(scientist)
Stephen Yablo ( / ˈ j æ b l oʊ / ; [ 1 ] born 1957) is a Canadian-born American philosopher. He is the Emeritus David W. Skinner Professor of Philosophy at the Massachusetts Institute of Technology (MIT) and taught previously at the University of Michigan, Ann Arbor . [ 2 ] He specializes in the philosophy of logic , philosophy of mind , metaphysics , philosophy of language , and philosophy of mathematics . He was born in Toronto , on 30 September 1957, to a Polish father Saul Yablo and Romanian-Canadian mother Gloria Yablo (née Herman), both Jewish . [ 3 ] He is married to fellow MIT philosopher Sally Haslanger . His Ph.D. is from the University of California, Berkeley , where he worked with Donald Davidson and George Myro . In 2012, he was elected a Fellow of the American Academy of Arts and Sciences . Yablo has published a number of influential papers in philosophy of mind, philosophy of language, and metaphysics, and gave the John Locke Lectures at Oxford in 2012, which formed the basis for his book Aboutness , which one reviewer described as "an important and far-reaching book that philosophers will be discussing for a long time." [ 4 ] In papers published in 1985 [ 5 ] and 1993, [ 6 ] Yablo showed how to create a paradox similar to the liar paradox , but without self-reference . Unlike the liar paradox, which uses a single sentence, Yablo's paradox uses an infinite list of sentences, each referring to sentences occurring later in the list. Analysis of the list shows that there is no consistent way to assign truth values to any of its members. Since everything on the list refers only to later sentences, Yablo claims that his paradox is "not in any way circular". However, Graham Priest disputes this. [ 7 ] [ 8 ] Consider the following infinite set of sentences: each $S_n$ says that every sentence $S_m$ with $m > n$ is not true. For any n , the proposition $S_n$ is of universally quantified form, expressing an unending number of claims (each the negation of a statement with a larger index). As a proposition, any $S_n$ also expresses, for example, that $S_{n+1}$ is not true. For any pair of numbers n and m with n < m , the claims made by the later proposition $S_m$ are a subset of the claims made by $S_n$. As this holds for all such pairs of numbers, each $S_n$ implies the content of every later $S_m$, and hence implies $S_m$ itself. But each $S_n$ also directly claims that every later $S_m$ is not true. So assuming any $S_n$ leads to a contradiction: it implies both $S_m$ and the untruth of $S_m$. This just means that all $S_n$ are proven false. But all $S_n$ being false also exactly validates the very claims made by them, since each says only that the later sentences are untrue. So we have the paradox that each sentence in Yablo's list is both not true and true. For any $P$, the negation introduction principle of propositional logic negates $P \leftrightarrow \neg P$. So no consistent theory proves that one of its propositions is equivalent to its own negation. Metalogically, it means any axiom of the form of such an equivalence is inconsistent. This is one formal pendant of the liar paradox. Similarly, for any unary predicate $Q$, if $R$ is an entire (serial) transitive relation , then by a formal analysis as above, predicate logic negates the universal closure of
$$Q(x) \leftrightarrow \forall y\,\bigl(R(x,y) \to \neg Q(y)\bigr).$$
On the natural numbers, for $R$ taken to be equality "$=$", this also follows from the analysis of the liar paradox.
For $R$ taken to be the standard order "$>$", it is still possible to obtain a non-standard model of arithmetic for the omega-inconsistent theory defined by adjoining all the equivalences individually. [ 9 ]
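The role of the list's infinitude can be seen by brute force. The following sketch checks every truth-value assignment for finite truncations S_1, ..., S_n of Yablo's list: each truncation turns out to be satisfiable (only the last sentence is true), so the contradiction genuinely requires infinitely many sentences.

```python
from itertools import product

def models(n):
    """Truth assignments for S_1..S_n, where S_k says: all later S_m are untrue."""
    found = []
    for vals in product([False, True], repeat=n):
        # S_k is true exactly when every later sentence is false.
        if all(vals[k] == all(not vals[m] for m in range(k + 1, n))
               for k in range(n)):
            found.append(vals)
    return found

for n in range(1, 6):
    print(n, models(n))
# Each truncation has exactly one model: (False, ..., False, True).
# The infinite list has no "last sentence", so no such escape exists.
```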
https://en.wikipedia.org/wiki/Stephen_Yablo
Stephen aldehyde synthesis , a named reaction in chemistry, was invented by Henry Stephen ( OBE / MBE ). This reaction involves the preparation of aldehydes (R-CHO) from nitriles (R-CN) using tin(II) chloride (SnCl 2 ) and hydrochloric acid (HCl), then quenching the resulting iminium salt ([R-CH=NH 2 ] + Cl − ) with water (H 2 O). [ 1 ] [ 2 ] During the synthesis, ammonium chloride is also produced. It is a type of nucleophilic addition reaction. The following scheme shows the reaction mechanism: by addition of hydrogen chloride, the nitrile ( 1 ) reacts to give its corresponding salt ( 2 ). It is believed that this salt is reduced by a single electron transfer by the tin(II) chloride ( 3a and 3b ). [ 3 ] The resulting salt ( 4 ) precipitates after some time as the aldimine tin chloride ( 5 ). Hydrolysis of 5 produces a hemiaminal ( 6 ), from which an aldehyde ( 7 ) is formed. Substituents that increase the electron density promote the formation of the aldimine-tin chloride adduct. With electron-withdrawing substituents, the formation of an amide chloride is facilitated. [ 4 ] In the past, the reaction was carried out by precipitating the aldimine-tin chloride, washing it with ether and then hydrolyzing it. However, it has been found that this step is unnecessary and the aldimine tin chloride can be hydrolysed directly in solution. [ 5 ] This reaction is more efficient when aromatic nitriles are used instead of aliphatic ones. However, even for some aromatic nitriles (e.g. 2-cyanobenzoic acid ethyl ester) the yield can be low. [ 5 ] In the Sonn-Müller method, [ 6 ] [ 7 ] the intermediate iminium salt is obtained from the reaction of an amide PhCONHPh with phosphorus pentachloride .
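The net transformation (not the mechanism) can be sketched with RDKit, assuming that library is available; the reaction SMARTS and the benzonitrile example are illustrative choices, not part of Stephen's original procedure:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Net result of the Stephen synthesis: the nitrile carbon becomes an aldehyde carbon.
rxn = AllChem.ReactionFromSmarts("[C:1]#[N:2]>>[C:1]=[O:2]")

nitrile = Chem.MolFromSmiles("N#Cc1ccccc1")   # benzonitrile, an aromatic nitrile
products = rxn.RunReactants((nitrile,))
aldehyde = products[0][0]
Chem.SanitizeMol(aldehyde)
print(Chem.MolToSmiles(aldehyde))             # O=Cc1ccccc1, i.e. benzaldehyde
```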
https://en.wikipedia.org/wiki/Stephen_aldehyde_synthesis
A steppe belt is a contiguous phytogeographic region of predominantly grassland ( steppe ), which has common characteristics in soil , climate , vegetation and fauna . [ 1 ] A forest-steppe belt is a region of forest steppe . The largest steppe and forest-steppe belt is the Eurasian steppe belt, which stretches from Central Europe via Ukraine , southern Russia , northern Central Asia and southern Siberia into Mongolia and China , [ 1 ] and is often called the Great Steppe . The term "steppe belt" may also be applied to some grassland zones in the biogeographical zoning of mountains .
https://en.wikipedia.org/wiki/Steppe_belt
The Steppic Biogeographic Region is a biogeographic region of Europe, as defined by the European Environment Agency . The Steppic region encompasses parts of Romania, Moldova, Ukraine, Russia, and western Kazakhstan, and extends further east into Asia. This vast region is characterized by low-lying plains, as well as rolling hills or plateaus. On average, the elevation in this area ranges from 200–300 metres (660–980 ft). The natural vegetation is mostly grasses such as Elymus repens (couch grass), Stipa (feather grass) and Festuca (fescue), among which are scattered herbaceous plants such as Potentilla (cinquefoil), Verbascum (mullein) and Artemisia (wormwood). The humus-rich soils are very fertile, and much of the region has been converted to cultivated land, with few remaining pockets of the original vegetation. [ 1 ] Romania has the only part of the Steppic Region in the European Union. This is a small, intensively farmed area. The list of Natura 2000 sites in the region was adopted in December 2008, with 34 Sites of Community Importance under the Habitats Directive and 40 Special Protection Areas under the Birds Directive . Some sites are in both categories. Together they cover about 20% of the land in the Romanian part of the region. [ 2 ]
https://en.wikipedia.org/wiki/Steppic_Biogeographic_Region
In floating-point arithmetic , the Sterbenz lemma or Sterbenz's lemma [ 1 ] is a theorem giving conditions under which floating-point differences are computed exactly. It is named after Pat H. Sterbenz, who published a variant of it in 1974. [ 2 ] Sterbenz lemma — In a floating-point number system with subnormal numbers , if $x$ and $y$ are floating-point numbers such that
$$\frac{y}{2} \leq x \leq 2y,$$
then $x - y$ is also a floating-point number. Thus, a correctly rounded floating-point subtraction $x \ominus y = \operatorname{fl}(x - y) = x - y$ is computed exactly. The Sterbenz lemma applies to IEEE 754 , the most widely used floating-point number system in computers. Let $\beta$ be the radix of the floating-point system and $p$ the precision, and consider first the easy cases: the hypothesis forces $x$ and $y$ to have the same sign, and if $x = y$ the difference is 0, a floating-point number. For the rest of the proof, assume $0 < y < x \leq 2y$ without loss of generality. Write $x, y > 0$ in terms of their positive integral significands $s_x, s_y \leq \beta^p - 1$ and minimal exponents $e_x, e_y$:
$$x = s_x \cdot \beta^{e_x - p + 1}, \qquad y = s_y \cdot \beta^{e_y - p + 1}.$$
Note that $x$ and $y$ may be subnormal; we do not assume $s_x, s_y \geq \beta^{p - 1}$. The subtraction gives:
$$x - y = s_x \cdot \beta^{e_x - p + 1} - s_y \cdot \beta^{e_y - p + 1} = s_x \beta^{e_x - e_y} \cdot \beta^{e_y - p + 1} - s_y \cdot \beta^{e_y - p + 1} = \left(s_x \beta^{e_x - e_y} - s_y\right) \cdot \beta^{e_y - p + 1}.$$
Let $s' = s_x \beta^{e_x - e_y} - s_y$. Since $0 < y < x$ and the exponents are minimal, we have $e_y \leq e_x$, so $s'$ is an integer, and $s' > 0$ because $x > y$. Further, since $x \leq 2y$, we have $x - y \leq y$, so that
$$s' \cdot \beta^{e_y - p + 1} = x - y \leq y = s_y \cdot \beta^{e_y - p + 1},$$
which implies that $0 < s' \leq s_y \leq \beta^p - 1$. Hence
$$x - y = s' \cdot \beta^{e_y - p + 1}, \quad \text{for} \quad 0 < s' \leq \beta^p - 1,$$
so $x - y$ is a floating-point number. ∎ Note: Even if $x$ and $y$ are normal, i.e., $s_x, s_y \geq \beta^{p - 1}$, we cannot prove that $s' \geq \beta^{p - 1}$ and therefore cannot prove that $x - y$ is also normal. For example, the difference of the two smallest positive normal floating-point numbers $x = (\beta^{p-1} + 1) \cdot \beta^{e_{\min} - p + 1}$ and $y = \beta^{p-1} \cdot \beta^{e_{\min} - p + 1}$ is $x - y = 1 \cdot \beta^{e_{\min} - p + 1}$, which is necessarily subnormal.
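A concrete check in Python (IEEE 754 double precision, which has subnormals, so the lemma applies): the doubles below satisfy y/2 ≤ x ≤ 2y, and exact rational arithmetic confirms that the computed difference carries no rounding error at all.

```python
from fractions import Fraction

x = 0.1 + 0.2   # the double closest to 0.30000000000000004...
y = 0.3         # the double closest to 0.3
assert y / 2 <= x <= 2 * y            # Sterbenz's hypothesis holds

diff = x - y                          # floating-point subtraction
exact = Fraction(x) - Fraction(y)     # exact difference of the two doubles
assert Fraction(diff) == exact        # ...so the subtraction was exact
print(diff)                           # 5.551115123125783e-17 (= 2**-54)
```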
In floating-point number systems without subnormal numbers , such as CPUs in nonstandard flush-to-zero mode instead of the standard gradual underflow, the Sterbenz lemma does not apply. The Sterbenz lemma may be contrasted with the phenomenon of catastrophic cancellation : the Sterbenz lemma shows that subtracting nearby floating-point numbers is exact, but if the numbers one has are approximations, then even their exact difference may be far off from the difference of the numbers one wanted to subtract. The Sterbenz lemma is instrumental in proving theorems on error bounds in numerical analysis of floating-point algorithms. For example, Heron's formula
$$A = \sqrt{s(s-a)(s-b)(s-c)}$$
for the area of a triangle with side lengths $a$, $b$, and $c$, where $s = (a+b+c)/2$ is the semi-perimeter, may give poor accuracy for long narrow triangles if evaluated directly in floating-point arithmetic. However, for $a \geq b \geq c$, the alternative formula
$$A = \frac{1}{4}\sqrt{\bigl(a+(b+c)\bigr)\bigl(c-(a-b)\bigr)\bigl(c+(a-b)\bigr)\bigl(a+(b-c)\bigr)}$$
can be proven, with the help of the Sterbenz lemma, to have low forward error for all inputs. [ 3 ] [ 4 ] [ 5 ]
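A sketch of the contrast for a long, narrow triangle (the side lengths are made-up values; the parenthesization in heron_stable must be kept exactly as written, since those are the differences the Sterbenz lemma makes exact):

```python
import math

def heron_naive(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_stable(a, b, c):
    # Requires a >= b >= c. Keep the parentheses: the Sterbenz lemma makes
    # the inner differences exact, which is what the error analysis relies on.
    return 0.25 * math.sqrt(
        (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c)))

# Long narrow isosceles triangle: base 1, legs barely longer than 1/2.
a, b, c = 1.0, 0.5 + 1e-10, 0.5 + 1e-10

# Reference: area = base/2 * height, where the height uses the difference
# b - a/2, itself exact by the Sterbenz lemma.
reference = 0.5 * a * math.sqrt((b - a / 2) * (b + a / 2))

print(heron_naive(a, b, c))   # typically loses several digits in s - a
print(heron_stable(a, b, c))  # agrees with the reference in essentially all digits
print(reference)
```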
https://en.wikipedia.org/wiki/Sterbenz_lemma
Stercobilin is a tetrapyrrolic bile pigment and is one end-product of heme catabolism . [ 1 ] [ 2 ] It is the chemical responsible for the brown color of human feces and was originally isolated from feces in 1932. Stercobilin (and the related urobilin ) can be used as a marker for biochemical identification of fecal pollution levels in rivers. [ 3 ] Stercobilin results from breakdown of the heme moiety of hemoglobin found in erythrocytes (red blood cells). Macrophages break down senescent erythrocytes and break the heme down into biliverdin , which is rapidly reduced to free bilirubin . Bilirubin binds tightly to plasma proteins (especially albumin ) in the blood stream and is transported to the liver, where it is conjugated with one or two glucuronic acid residues into bilirubin diglucuronide , and secreted into the small intestine as bile . In the small intestine, some bilirubin glucuronide is converted back to bilirubin via bacterial enzymes in the terminal ileum. This bilirubin is further converted to colorless urobilinogen by the bacterial enzyme bilirubin reductase. [ 4 ] Urobilinogen that remains in the colon can either be reduced to stercobilinogen and finally oxidized to stercobilin, or it can be directly reduced to stercobilin. Stercobilin is then excreted in the feces, giving them their characteristic brown color. [ 5 ] In obstructive jaundice , no bilirubin reaches the small intestine, meaning that there is no formation of stercobilinogen. The lack of stercobilin and other bile pigments causes feces to become clay-colored. [ 5 ] An analysis of two infants suffering from cholelithiasis observed that a substantial amount of stercobilin was present in brown pigment gallstones. This study suggested that brown pigment gallstones could form spontaneously in infants suffering from bacterial infections of the biliary tract. [ 6 ] A 1996 study by McPhee et al. suggested that stercobilin and other related pyrrolic pigments, including urobilin, biliverdin, and xanthobilirubic acid, have the potential to function as a new class of HIV-1 protease inhibitors when delivered at low micromolar concentrations. These pigments were selected due to their similarity in shape to the successful HIV-1 protease inhibitor Merck L-700,417 ( N , N -bis(2-hydroxy-1-indanyl)-2,6-diphenylmethyl-4-hydroxy-1,7-heptandiamide). The authors suggested further research to study the pharmacological efficacy of these pigments. [ 7 ]
https://en.wikipedia.org/wiki/Stercobilin
The stereo cameras approach is a method of distilling a noisy video signal into a coherent data set that a computer can begin to process into actionable symbolic objects, or abstractions. It is one of many approaches used in the broader fields of computer vision and machine vision . [ 1 ] In this approach, two cameras with a known physical relationship (i.e. a common field of view the cameras can see, and a known distance between their focal points in physical space) are correlated via software. By finding mappings of common pixel values and calculating how far apart these common areas reside in pixel space, a rough depth map can be created. This is very similar to how the human brain uses stereoscopic information from the eyes to gain depth-cue information, i.e. how far away any given object in the scene is from the viewer. The camera attributes (focal length, distance apart, etc.) must be known, and a calibration performed. Once this is completed, the system can be used to sense the distances of objects by triangulation. Finding the same singular physical point in the left and right images is known as the correspondence problem . Correctly locating the point gives the computer the capability to calculate the distance of the robot or camera from the object. On the BH2 Lunar Rover the cameras use five steps: a Bayer array filter, a photometric-consistency dense matching algorithm, a Laplacian of Gaussian (LoG) edge detection algorithm, a stereo matching algorithm and finally a uniqueness constraint. [ 2 ] This type of stereoscopic image processing technique is used in applications such as 3D reconstruction , [ 3 ] robotic control and sensing, crowd dynamics monitoring and off-planet terrestrial rovers; for example, in mobile robot navigation, tracking , gesture recognition , targeting, 3D surface visualization, and immersive and interactive gaming. [ 4 ] Although the Xbox Kinect sensor is also able to create a depth map of an image, it uses an infrared camera for this purpose, and does not use the dual-camera technique. Other approaches to stereoscopic sensing include time-of-flight sensors and ultrasound .
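As a minimal illustration of the triangulation step just described (my own sketch, not from the article; the focal length, baseline, and disparity are hypothetical values for a calibrated, rectified rig), depth follows directly from the pixel disparity once the correspondence problem is solved:

```python
def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Depth (metres) of a scene point seen by a rectified stereo pair.

    depth = focal_length * baseline / disparity is the standard pinhole-camera
    triangulation relation; zero disparity corresponds to a point at infinity.
    """
    if disparity_px <= 0:
        return float("inf")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, cameras 12 cm apart.
# A matched pixel pair offset by 35 px lies 2.4 m from the cameras.
print(depth_from_disparity(35.0, 700.0, 0.12))   # -> 2.4
```

Repeating this computation for every matched pixel pair yields the rough depth map mentioned above.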
https://en.wikipedia.org/wiki/Stereo_cameras
The stereoautograph is a complex opto-mechanical measurement instrument for the evaluation of analog or digital photograms . It is based on the stereoscopy effect, using two aerial photos or two photograms of the topography or of buildings taken from different standpoints. It was invented by Eduard von Orel in 1907. [ 1 ] The photograms or photographic plates are oriented by means of passpoints measured in the field or on the building. This procedure can be carried out digitally (by methods of triangulation and projective geometry ) or iteratively (by repeated angle corrections using congruent rays). The accuracy of modern autographs is about 0.001 mm. Well known are the instruments of the company Wild Heerbrugg (Leica), e.g. the analog A7 and B8 of the 1980s and the digital autographs beginning in the 1990s, as well as special instruments of Zeiss and Contraves.
https://en.wikipedia.org/wiki/Stereoautograph
In stereochemistry , a stereocenter of a molecule is an atom (center), axis or plane that is the focus of stereoisomerism ; that is, when at least three different groups are bound to the stereocenter, interchanging any two of them creates a new stereoisomer . [ 1 ] [ 2 ] Stereocenters are also referred to as stereogenic centers . A stereocenter is geometrically defined as a point (location) in a molecule; a stereocenter is usually but not always a specific atom, often carbon. [ 2 ] [ 3 ] Stereocenters can exist on chiral or achiral molecules; stereocenters can contain single bonds or double bonds. [ 1 ] The number of hypothetical stereoisomers can be predicted by using 2 n , with n being the number of tetrahedral stereocenters; however, exceptions such as meso compounds can reduce the prediction to below the expected 2 n . [ 4 ] Chirality centers are a type of stereocenter with four different substituent groups; chirality centers are a specific subset of stereocenters because they can only have sp 3 hybridization, meaning that they can only have single bonds . [ 5 ] Stereocenters can exist on chiral or achiral molecules. They are defined as a location (point) within a molecule, rather than a particular atom, in which the interchanging of two groups creates a stereoisomer. [ 3 ] A stereocenter can have either four different attachment groups, or three different attachment groups where one group is connected by a double bond. [ 1 ] Since stereocenters can exist on achiral molecules, stereocenters can have either sp 3 or sp 2 hybridization . Stereoisomers are compounds that are identical in composition and connectivity but have a different spatial arrangement of atoms around the central atom. [ 6 ] A molecule having multiple stereocenters will produce many possible stereoisomers. In compounds whose stereoisomerism is due to tetrahedral (sp 3 ) stereogenic centers, the total number of hypothetically possible stereoisomers will not exceed 2 n , where n is the number of tetrahedral stereocenters. However, this is an upper bound because molecules with symmetry frequently have fewer stereoisomers. The stereoisomers produced by the presence of multiple stereocenters can be classified as enantiomers (non-superposable mirror images) and diastereomers (non-superposable, non-identical, non-mirror-image molecules). [ 6 ] Enantiomers and diastereomers arise from differing stereochemical configurations of molecules with the same composition and connectivity (bonding); a single stereocenter suffices to produce a pair of enantiomers, while diastereomers require two or more stereocenters. However, the stereoisomers produced may also include a meso compound , which is an achiral compound that is superposable on its mirror image; the presence of a meso compound will reduce the number of possible stereoisomers. [ 4 ] Since a meso compound is superposable on its mirror image, the two "stereoisomers" are actually identical. As a result, a meso compound will reduce the number of stereoisomers to below the hypothetical 2 n amount due to symmetry. [ 6 ] Additionally, certain configurations may not exist due to steric reasons. Cyclic compounds with chiral centers may not exhibit chirality due to the presence of a two-fold rotation axis. Planar chirality may also provide for chirality without having an actual chiral center present.
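A short worked example (mine, not from the article) illustrates the 2 n bound: tartaric acid has n = 2 tetrahedral stereocenters, so at most 2 2 = 4 stereoisomers are predicted. Only three actually exist: the (R,R) and (S,S) enantiomers, and the meso (R,S) form, whose mirror image (S,R) is superposable on it, so the apparent fourth stereoisomer is identical to the third.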
Configuration is defined as the arrangement of atoms around a stereocenter. [ 6 ] The Cahn-Ingold-Prelog (CIP) system uses R and S designations to define the configuration of atoms about any stereocenter. [ 7 ] A designation of R denotes a clockwise direction of substituent priority around the stereocenter, while a designation of S denotes a counter-clockwise direction of substituent priority. [ 7 ] A chirality center (chiral center) is a type of stereocenter. A chirality center is defined as an atom holding a set of four different ligands (atoms or groups of atoms) in a spatial arrangement which is non-superposable on its mirror image. Chirality centers must be sp 3 hybridized, meaning that a chirality center can only have single bonds . [ 5 ] In organic chemistry , a chirality center usually refers to a carbon , phosphorus , or sulfur atom, though it is also possible for other atoms to be chirality centers, especially in areas of organometallic and inorganic chemistry . The concept of a chirality center generalizes the concept of an asymmetric carbon atom (a carbon atom bonded to four different entities) to a broader definition of any atom with four different attachment groups in which an interchanging of any two attachment groups gives rise to an enantiomer . [ 8 ] A carbon atom that is attached to four different substituent groups is called an asymmetric carbon atom or chiral carbon . Chiral carbons are the most common type of chirality center. [ 6 ] Chirality is not limited to carbon atoms, though carbon atoms are often centers of chirality due to their ubiquity in organic chemistry. Nitrogen and phosphorus atoms can also form bonds in a tetrahedral configuration. A nitrogen in an amine may be a stereocenter if all three groups attached are different, because the electron pair of the amine functions as a fourth group. [ 9 ] However, nitrogen inversion , a form of pyramidal inversion , causes racemization, which means that both epimers at that nitrogen are present under normal circumstances. [ 9 ] Racemization by nitrogen inversion may be restricted (as in quaternary ammonium or phosphonium cations) or slow, which allows the existence of chirality. [ 9 ] Metal atoms with tetrahedral or octahedral geometries may also be chiral due to having different ligands. For the octahedral case, several chiralities are possible. With three ligands of each of two types, the ligands may be lined up along a meridian, giving the mer -isomer, or may form a face, giving the fac -isomer. Having three bidentate ligands of only one type gives a propeller-type structure, with two different enantiomers denoted Λ and Δ. As mentioned earlier, the requirement for an atom to be a chirality center is that the atom must be sp 3 hybridized with four different attachments. [ 5 ] Because of this, all chirality centers are stereocenters. However, the reverse is true only under some conditions. Recall that a point can be considered a stereocenter with a minimum of three attachment points; stereocenters can be either sp 3 or sp 2 hybridized, as long as interchanging any two different groups creates a new stereoisomer . This means that although all chirality centers are stereocenters, not every stereocenter is a chirality center. Stereocenters are important identifiers for chiral or achiral molecules. As a general rule, if a molecule has no stereocenters, it is considered achiral. If it has at least one stereocenter, the molecule has the potential for chirality.
However, there are exceptions, such as meso compounds , in which a molecule with multiple stereocenters is nevertheless achiral. [ 6 ]
https://en.wikipedia.org/wiki/Stereocenter
Stereochemistry , a subdiscipline of chemistry , studies the spatial arrangement of atoms that form the structure of molecules, and the manipulation of that arrangement. [ 1 ] The study of stereochemistry focuses on the relationships between stereoisomers , which are defined as having the same molecular formula and sequence of bonded atoms (constitution) but differing in the geometric positioning of the atoms in space. For this reason, it is also known as 3D chemistry; the prefix "stereo-" means "three-dimensionality". [ 2 ] Stereochemistry applies to all kinds of compounds and ions, organic and inorganic species alike. Stereochemistry spans biological , physical , and supramolecular chemistry . Stereochemistry also concerns the way in which the spatial arrangement of atoms influences the reactivity of the molecules in question ( dynamic stereochemistry ). In 1815, Jean-Baptiste Biot 's observation of optical activity marked the beginning of the history of organic stereochemistry. He observed that organic molecules were able to rotate the plane of polarized light in a solution or in the gaseous phase. [ 3 ] Despite Biot's discoveries, Louis Pasteur is commonly described as the first stereochemist, having observed in 1848 that salts of tartaric acid collected from wine production vessels could rotate the plane of polarized light , but that salts from other sources did not. This was the only physical property that differed between the two types of tartrate salts, which is due to optical isomerism . In 1874, Jacobus Henricus van 't Hoff and Joseph Le Bel explained optical activity in terms of the tetrahedral arrangement of the atoms bound to carbon. Kekulé explored tetrahedral models earlier, in 1862, but never published his work; Emanuele Paternò probably knew of these but was the first to draw and discuss three-dimensional structures, such as that of 1,2-dibromoethane, in the Giornale di Scienze Naturali ed Economiche in 1869. [ 4 ] The term "chiral" was introduced by Lord Kelvin in 1904. Arthur Robertson Cushny , a Scottish pharmacologist, in 1908 first provided a clear example of a bioactivity difference between enantiomers of a chiral molecule, viz. that (−)-adrenaline is twice as potent a vasoconstrictor as the (±)-form, and in 1926 laid the foundation for chiral pharmacology/stereo-pharmacology [ 5 ] [ 6 ] (biological relations of optically isomeric substances). Later, in 1966, the Cahn–Ingold–Prelog nomenclature or Sequence Rule was devised to assign absolute configuration to stereogenic /chiral centers (R- and S- notation) [ 7 ] and extended to apply across olefinic bonds (E- and Z- notation). Cahn–Ingold–Prelog priority rules are part of a system for describing a molecule's stereochemistry. They rank the atoms around a stereocenter in a standard way, allowing unambiguous descriptions of their relative positions in the molecule. A Fischer projection is a simplified way to depict the stereochemistry around a stereocenter. Stereochemistry has important applications in the field of medicine, particularly pharmaceuticals. An often cited example of the importance of stereochemistry relates to the thalidomide disaster. Thalidomide is a pharmaceutical drug , first prepared in 1957 in Germany, prescribed for treating morning sickness in pregnant women. The drug was discovered to be teratogenic , causing serious damage to early embryonic growth and development, leading to limb deformation in babies. Several proposed mechanisms of teratogenicity involve different biological functions for the (R)- and (S)-thalidomide enantiomers.
[ 8 ] In the human body, however, thalidomide undergoes racemization : even if only one of the two enantiomers is administered as a drug, the other enantiomer is produced as a result of metabolism. [ 9 ] Accordingly, it is incorrect to state that one stereoisomer is safe while the other is teratogenic. [ 10 ] Thalidomide is currently used for the treatment of other diseases, notably cancer and leprosy . Strict regulations and controls have been implemented to avoid its use by pregnant women and prevent developmental deformities. This disaster was a driving force behind requiring strict testing of drugs before making them available to the public. In yet another example, the drug ibuprofen can exist as (R)- and (S)-isomers. Only the (S)-ibuprofen is active in reducing inflammation and pain. [ 11 ] [ 12 ] Atropisomerism derives from the inability to rotate about a bond, such as due to steric hindrance between functional groups on two sp 2 -hybridized carbon atoms. Usually atropisomers are chiral, and as such they exhibit a form of axial chirality . Atropisomerism can be described as a form of conformational isomerism. Cis–trans isomers are often associated with alkene double bonds. The more general E / Z nomenclature refers to the concept of cis / trans isomerism, and is especially useful for more complex compounds. Diastereomers are non-superposable, non-identical stereoisomers. A common example of diastereomerism is when two compounds differ from each other by the ( R )/( S ) absolute configuration at some, but not all, corresponding stereocenters. Epimers are diastereomers that differ at exactly one such position. cis / trans isomerism is another type of diastereomeric relationship. Enantiomers are pairs of non-superposable mirror images. Each member of the pair has the opposite ( R )/( S ) configuration at every corresponding stereocenter. Epimers are a subcategory of diastereomers that differ in absolute configuration at only one corresponding stereocenter. They are commonly found in sugar chemistry , where two sugars can differ by the configuration of a single carbon atom.
https://en.wikipedia.org/wiki/Stereochemistry
In chemistry , primarily organic and computational chemistry , a stereoelectronic effect [ 1 ] is an effect on molecular geometry , reactivity , or physical properties due to spatial relationships in the molecules ' electronic structure , in particular the interaction between atomic and/or molecular orbitals . [ 2 ] Phrased differently, stereoelectronic effects can also be defined as the geometric constraints placed on the ground and/or transition states of molecules that arise from considerations of orbital overlap. [ 3 ] Thus, a stereoelectronic effect explains a particular molecular property or reactivity by invoking stabilizing or destabilizing interactions that depend on the relative orientations of electrons (bonding or non-bonding) in space. [ 4 ] Stereoelectronic effects present themselves in other well-known interactions. These include important phenomena such as the anomeric effect and hyperconjugation . It is important to note that stereoelectronic effects should not be misunderstood as a simple combination of steric effects and electronic effects . Founded on a few general principles that govern how orbitals interact, the stereoelectronic effect, along with the steric effect, inductive effect , solvent effect , mesomeric effect , and aromaticity , is an important type of explanation for observed patterns of selectivity, reactivity, and stability in organic chemistry . In spite of the relatively straightforward premises, stereoelectronic effects often provide explanations for counterintuitive or surprising observations. As a result, stereoelectronic factors are now commonly considered and exploited in the development of new organic methodology and in the synthesis of complex targets. The scrutiny of stereoelectronic effects has also entered the realms of biochemistry and pharmaceutical chemistry in recent years. A stereoelectronic effect generally involves a stabilizing donor-acceptor (i.e., filled bonding-empty antibonding, 2-electron 2-orbital) interaction. The donor is usually a higher bonding or nonbonding orbital and the acceptor is often a low-lying antibonding orbital, as shown in the scheme below. If this stereoelectronic effect is to be favored, the donor and acceptor orbitals should, whenever possible, (1) have a small energy gap and (2) be geometrically well disposed for interaction. In particular, this means that the shapes of the donor and acceptor orbitals (including π or σ symmetry and size of the interacting lobes) must be well-matched for interaction; an antiperiplanar orientation is especially favorable. Some authors require stereoelectronic effects to be stabilizing. [ 1 ] However, destabilizing donor-donor (i.e., filled-filled, 4-electron 2-orbital) interactions are occasionally invoked and are also sometimes referred to as stereoelectronic effects, although such effects are difficult to distinguish from generic steric repulsion. [ 3 ] [ 5 ] Take the simplest CH 2 X–CH 3 system as an example; the donor orbital is the σ(C–H) orbital and the acceptor is σ*(C–X). When moving from fluorine to chlorine , then to bromine , the electronegativity of the halogen and the energy level of the σ*(C–X) orbital decrease. [ 6 ] Consequently, the general trend of acceptors can be summarized as: π*(C=O)>σ*(C–Hal)>σ*(C–O)>σ*(C–N)>σ*(C–C), σ*(C–H). For donating orbitals, the nonbonding orbitals, or lone pairs, are generally more effective than bonding orbitals due to their higher energy levels.
Also, unlike acceptors, donor orbitals require less polarized bonds. Thus, the general trend for donor orbitals is: n(N)>n(O)>σ(C–C), σ(C–H)>σ(C–N)>σ(C–O)>σ(C–S)>σ(C–Hal). [ 5 ] The stereoelectronic effect can be directional in specific cases. The radius of sulfur is much larger than the radii of carbon and oxygen . Thus the difference in C–S bond distances amplifies the stereoelectronic effect in 1,3- dithiane (σ(C–H) → σ*(C–S)) relative to that in 1,3- dioxane (σ(C–H) → σ*(C–O)). [ 6 ] The difference between the C–C and C–S bonds shown below causes a significant difference in the distances between the C–S bond and the two C–H bonds; the shorter the distance, the better the interaction and the stronger the stereoelectronic effect. [ 6 ] If there is an electropositive substituent (e.g. –SiR 3 , –SnR 3 , –HgR, etc.) at the β-position of a carbocation , the positive charge can be stabilized, which is also due largely to the stereoelectronic effect (illustrated below using –SiR 3 as an example). The orientation of the two interacting orbitals can have a significant effect on the stabilization (σ(C–Si) → empty p orbital), where antiperiplanar (180°) > perpendicular (90°) > syn (0°). [ 7 ] One structural consequence of the stereoelectronic effect in acyclic systems is the gauche effect . [ 8 ] In 1,2-difluoroethane , despite the steric clash, the preferred conformation is the gauche one, because σ(C–H) is a good donor and σ*(C–F) is a good acceptor, and the stereoelectronic effect (σ(C–H) → σ*(C–F)) places the energy minimum at the gauche conformation instead of the anti one. [ 9 ] This gauche effect and its impact on conformation are important in biochemistry. For example, in HIF-α subunit fragments containing ( 2S,4R )-4-hydroxyproline, the gauche interaction favors the conformer that can bind to the active site of pVHL. [ 10 ] pVHL mediates the proteasomal degradation of HIF1A and with that the physiological response to hypoxia. Stereoelectronic effects can have a significant influence in pharmaceutical research . Generally, the substitution of hydrogen by fluorine can be regarded as a way to tune both the hydrophobicity and the metabolic stability of a drug candidate. Moreover, it can have a profound influence on conformations, often due to stereoelectronic effects, in addition to normal steric effects resulting from the larger size of the fluorine atom. For instance, the ground state geometries of anisole (methoxybenzene) and (trifluoromethoxy)benzene differ dramatically. In anisole , the methyl group prefers to be coplanar with the phenyl group , while (trifluoromethoxy)benzene favors a geometry in which the [C(aryl)–C(aryl)–O–C(F 3 )] dihedral angle is around 90°. In other words, the O–CF 3 bond is perpendicular to the plane of the phenyl group. [ 11 ] Further studies illustrate that even when only one or two hydrogen atoms of the methyl group are replaced by fluorine, the structural distortion can be significant, with the [C(aryl)–C(aryl)–O–C(H 2 F)] dihedral angle in the energy-minimized structure being around 24° and the [C(aryl)–C(aryl)–O–C(HF 2 )] dihedral angle around 33°. [ 11 ] Although the energy difference between coplanar anisole and its rotated isomer is quite large, rotation about the O–CH 3 bond becomes favorable when the electronic properties of the methoxy group on an aromatic ring need to be altered to stabilize an unusual intermediate or a transition state.
In the following reaction, the regioselectivity can be rationalized by the out-of-plane rotation of the O–C bond, which changes the methoxy group from an in-plane donor group to an out-of-plane acceptor group. [ 12 ] The intermediate of the above reaction is the di-anion, and the stereoelectronic effect that stabilizes this intermediate over the alternative is that the anionic charge at the para position can delocalize onto the oxygen atom via the orbital interaction π(benzene) → σ*(O–CH 3 ). [ 12 ] Even remote substituents on the benzene ring can affect the electron density on the aromatic ring and in turn influence the selectivity. In the reduction of ketones using CBS catalysts , the ketone coordinates to the boron atom with a lone pair on the oxygen atom. In the following example, the inductive influence of the substituents can lead to differentiation of the two sp 2 lone pairs on the oxygen atom. [ 13 ] The relevant stereoelectronic interaction in the starting material is the n O → σ*(C carbonyl –C aryl ) interaction. The electron-withdrawing substituent on the benzene ring depletes the electron density on the aromatic ring and thus makes the σ*(C carbonyl –C aryl(nitro) ) orbital a better acceptor than σ*(C carbonyl –C aryl(methoxy) ). These two stereoelectronic interactions use different lone pairs on the oxygen atom (the one antiperiplanar to the σ* in question for each), leading to lone pairs with different electron densities. In particular, the enhanced depletion of electron density from the lone pair antiperiplanar to the 4-nitrophenyl group leads to a weakened ability of that lone pair to coordinate to boron. This in turn results in the lone pair antiperiplanar to the 4-methoxyphenyl group binding preferentially to the catalyst, leading to well-defined facial selectivity. Under optimized conditions, the product is formed with excellent enantioselectivity (95% ee). [ 13 ] The stereoelectronic effect also influences the thermodynamics of equilibria. For example, the following equilibrium can be achieved via a cascade of pericyclic reactions. Despite very similar structures, one of the two isomers is strongly favored over the other because of a stereoelectronic effect. Since the σ* C–C orbital adjacent to the electron-withdrawing carbonyl group is lower in energy, and is therefore a better acceptor, than the σ* C–C orbital adjacent to the methoxy group, the isomer in which the n O (σ) lone pair is able to donate into this lower-energy antibonding orbital will be stabilized (orbital interaction illustrated). [ 14 ] Another example of such an equilibrium preference in pericyclic reactions is shown below. The stereoelectronic effect that affects this equilibrium is the interaction between the delocalized "banana bonds" and the empty p orbital on the boron atom. [ 15 ] In another case, the stereoelectronic effect can result in an increased contribution of one resonance structure over another, which leads to further consequences in reactivity . For 1,4- benzoquinone monoxime, there are significant differences in physical properties and reactivity between the C2–C3 double bond and the C5–C6 double bond. For instance, in the 1 H NMR spectrum, 3 J 23 is higher than 3 J 56 . [ 16 ] The C2–C3 double bond also selectively undergoes the Diels–Alder reaction with cyclopentadiene , despite the increased steric hindrance on that side of the molecule. [ 17 ] These data illustrate an increased contribution of resonance structure B over structure A .
The authors argue that the donation from n N into the σ* C4–C3 orbital lengthens the C4–C3 bond (C4 is the carbon bearing the nitrogen substituent), which reduces the p-p overlap between these two atoms. This in turn decreases the relative importance of structure A , which has a double bond between C4 and C3. [ 18 ] In asymmetric Diels–Alder reactions, instead of using chiral ligands or chiral auxiliaries to control the side selectivity of the dienophiles, differentiation of the facial selectivity of the dienes (especially cyclopentadiene derivatives) using stereoelectronic effects has been reported since Woodward in 1955. [ 19 ] Systematic studies of facial selectivity using substituted cyclopentadiene or permethylcyclopentadiene derivatives have been conducted, and the results can be summarized as below. [ 20 ] The stereoelectronic effect determining the facial selectivity of the diene in the Diels–Alder reaction is the interaction between the σ(C(sp 2 )–CH 3 ) orbital (when σ(C(sp 2 )–X) is a better acceptor than donor) or the σ(C(sp 2 )–X) orbital (when σ(C(sp 2 )–X) is a better donor than acceptor) and the σ* orbital of the forming bond between the diene and the dienophile. [ 20 ] If the two geminal substituents are both aromatic rings, with different substituents tuning the electron density, differentiation of the facial selectivity is also facile: the dienophile approaches the diene anti to the more electron-rich C–C bond, the stereoelectronic effect in this case being similar to the previous one. [ 21 ] The thermal ring opening of cyclobutene can proceed with two senses of rotation, inward and outward, giving two possible products. The inward-rotation transition state of the structure shown below is relatively favored for acceptor R substituents (e.g. NO 2 ) but is especially disfavored for donor R substituents (e.g. NMe 2 ). [ 22 ] Sometimes, stereoelectronic effects can win out over extreme steric clash. In a similar cyclobutene ring-opening reaction, the trimethylsilyl group , which is very bulky, still favors the inward rotation. The stereoelectronic effect, which is the interaction shown above with σ*(Si–CH 3 ) as the acceptor orbital, appears to be the predominant factor in determining the selectivity, outweighing the steric hindrance and even the penalty of the product's disrupted conjugated system caused by the steric clash. [ 23 ] Furthermore, the acceptor orbitals are not limited to the antibonding orbitals of carbon-heteroatom bonds or to empty orbitals; in the following case, the acceptor orbital is the σ*(B–O) orbital. In the six-membered-ring transition state, the stereoelectronic interaction is σ(C–X) → σ*(B–O). [ 24 ] Molecular recognition events mediated through orbital interactions are critical in a number of biological processes such as enzyme catalysis. [ 25 ] Stabilizing interactions between proteins and carbohydrates in glycosylated proteins also exemplify the role of stereoelectronic effects in biomolecules. [ 26 ]
https://en.wikipedia.org/wiki/Stereoelectronic_effect
In mathematics , a stereographic projection is a perspective projection of the sphere , through a specific point on the sphere (the pole or center of projection ), onto a plane (the projection plane ) perpendicular to the diameter through the point. It is a smooth , bijective function from the entire sphere except the center of projection to the entire plane. It maps circles on the sphere to circles or lines on the plane, and is conformal , meaning that it preserves angles at which curves meet and thus locally approximately preserves shapes . It is neither isometric (distance preserving) nor equiareal (area preserving). [ 1 ] The stereographic projection gives a way to represent a sphere by a plane. The metric induced by the inverse stereographic projection from the plane to the sphere defines a geodesic distance between points in the plane equal to the spherical distance between the spherical points they represent. A two-dimensional coordinate system on the stereographic plane is an alternative setting for spherical analytic geometry instead of spherical polar coordinates or three-dimensional cartesian coordinates . This is the spherical analog of the Poincaré disk model of the hyperbolic plane . Intuitively, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including complex analysis , cartography , geology , and photography . Sometimes stereographic computations are done graphically using a special kind of graph paper called a stereographic net , shortened to stereonet , or Wulff net . The origin of the stereographic projection is not known, but it is believed to have been discovered by Ancient Greek astronomers and used for projecting the celestial sphere to the plane so that the motions of stars and planets could be analyzed using plane geometry . Its earliest extant description is found in Ptolemy 's Planisphere (2nd century AD), but it was ambiguously attributed to Hipparchus (2nd century BC) by Synesius ( c. 400 AD ), [ 2 ] and Apollonius 's Conics ( c. 200 BC ) contains a theorem which is crucial in proving the property that the stereographic projection maps circles to circles. Hipparchus, Apollonius, Archimedes , and even Eudoxus (4th century BC) have sometimes been speculatively credited with inventing or knowing of the stereographic projection, [ 3 ] but some experts consider these attributions unjustified. [ 2 ] Ptolemy refers to the use of the stereographic projection in a "horoscopic instrument", perhaps the anaphoric clock described by Vitruvius (1st century BC). [ 4 ] [ 5 ] By the time of Theon of Alexandria (4th century), the planisphere had been combined with a dioptra to form the planispheric astrolabe ("star taker"), [ 3 ] a capable portable device which could be used for measuring star positions and performing a wide variety of astronomical calculations. The astrolabe was in continuous use by Byzantine astronomers, and was significantly further developed by medieval Islamic astronomers . It was transmitted to Western Europe during the 11th–12th century, with Arabic texts translated into Latin. In the 16th and 17th century, the equatorial aspect of the stereographic projection was commonly used for maps of the Eastern and Western Hemispheres .
It is believed that the map created in 1507 by Gualterius Lud [ 6 ] was already in stereographic projection, as were the later maps of Jean Rotz (1542), Rumold Mercator (1595), and many others. [ 7 ] In star charts, this equatorial aspect had already been utilised by ancient astronomers like Ptolemy . [ 8 ] François d'Aguilon gave the stereographic projection its current name in his 1613 work Opticorum libri sex philosophis juxta ac mathematicis utiles (Six Books of Optics, useful for philosophers and mathematicians alike). [ 9 ] In the late 16th century, Thomas Harriot proved that the stereographic projection is conformal ; however, this proof was never published and sat among his papers in a box for more than three centuries. [ 10 ] In 1695, Edmond Halley , motivated by his interest in star charts , was the first to publish a proof. [ 11 ] He used the recently established tools of calculus , invented by his friend Isaac Newton . The unit sphere S 2 in three-dimensional space R 3 is the set of points ( x , y , z ) such that x 2 + y 2 + z 2 = 1 . Let N = (0, 0, 1) be the "north pole", and let M be the rest of the sphere. The plane z = 0 runs through the center of the sphere; the "equator" is the intersection of the sphere with this plane. For any point P on M , there is a unique line through N and P , and this line intersects the plane z = 0 in exactly one point P ′ , known as the stereographic projection of P onto the plane. In Cartesian coordinates ( x , y , z ) on the sphere and ( X , Y ) on the plane, the projection and its inverse are given by the formulas {\displaystyle (X,Y)=\left({\frac {x}{1-z}},{\frac {y}{1-z}}\right)} and {\displaystyle (x,y,z)=\left({\frac {2X}{1+X^{2}+Y^{2}}},{\frac {2Y}{1+X^{2}+Y^{2}}},{\frac {-1+X^{2}+Y^{2}}{1+X^{2}+Y^{2}}}\right).} In spherical coordinates ( φ , θ ) on the sphere (with φ the zenith angle , 0 ≤ φ ≤ π , and θ the azimuth , 0 ≤ θ ≤ 2π ) and polar coordinates ( R , Θ ) on the plane, the projection and its inverse are {\displaystyle (R,\Theta )=\left({\frac {\sin \varphi }{1-\cos \varphi }},\theta \right)=\left(\cot {\frac {\varphi }{2}},\theta \right)} and {\displaystyle (\varphi ,\theta )=\left(2\arctan {\frac {1}{R}},\Theta \right).} Here, φ is understood to have value π when R = 0. Also, there are many ways to rewrite these formulas using trigonometric identities . In cylindrical coordinates ( r , θ , z ) on the sphere and polar coordinates ( R , Θ ) on the plane, the projection and its inverse are {\displaystyle (R,\Theta )=\left({\frac {r}{1-z}},\theta \right)} and {\displaystyle (r,\theta ,z)=\left({\frac {2R}{1+R^{2}}},\Theta ,{\frac {R^{2}-1}{R^{2}+1}}\right).} Some authors [ 12 ] define stereographic projection from the north pole (0, 0, 1) onto the plane z = −1 , which is tangent to the unit sphere at the south pole (0, 0, −1). This can be described as a composition of a projection onto the equatorial plane described above, and a homothety from it to the polar plane. The homothety scales the image by a factor of 2 (a ratio of a diameter to a radius of the sphere), hence the values X and Y produced by this projection are exactly twice those produced by the equatorial projection described in the preceding section. For example, this projection sends the equator to the circle of radius 2 centered at the origin. While the equatorial projection produces no infinitesimal area distortion along the equator, this pole-tangent projection instead produces no infinitesimal area distortion at the south pole. Other authors [ 13 ] use a sphere of radius 1/2 and the plane z = −1/2 . In this case the formulae become {\displaystyle (X,Y)=\left({\frac {x}{{\tfrac {1}{2}}-z}},{\frac {y}{{\tfrac {1}{2}}-z}}\right)} and {\displaystyle (x,y,z)=\left({\frac {X}{1+X^{2}+Y^{2}}},{\frac {Y}{1+X^{2}+Y^{2}}},{\frac {X^{2}+Y^{2}-1}{2(1+X^{2}+Y^{2})}}\right).} In general, one can define a stereographic projection from any point Q on the sphere onto any plane E such that E is perpendicular to the diameter through Q and E does not contain Q . As long as E meets these conditions, then for any point P other than Q the line through P and Q meets E in exactly one point P ′ , which is defined to be the stereographic projection of P onto E . [ 14 ]
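The following short Python sketch (my own check, not part of the article) implements the equatorial projection and its inverse from the Cartesian formulas above and verifies on an arbitrary sample point that they are mutually inverse:

```python
import math

def project(x, y, z):
    """Stereographic projection from the north pole onto the plane z = 0."""
    return x / (1 - z), y / (1 - z)

def unproject(X, Y):
    """Inverse map from the plane back to the unit sphere."""
    s2 = X * X + Y * Y
    return 2 * X / (1 + s2), 2 * Y / (1 + s2), (s2 - 1) / (1 + s2)

p = (0.48, 0.60, 0.64)   # arbitrary point of the unit sphere: 0.48^2 + 0.6^2 + 0.64^2 = 1
X, Y = project(*p)        # -> (4/3, 5/3)
assert all(math.isclose(a, b) for a, b in zip(unproject(X, Y), p))
print(X, Y)
```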
More generally, stereographic projection may be applied to the unit n -sphere S n in ( n + 1 )-dimensional Euclidean space E n +1 . If Q is a point of S n and E a hyperplane in E n +1 , then the stereographic projection of a point P ∈ S n − { Q } is the point P ′ of intersection of the line QP with E . In Cartesian coordinates ( x i , i from 0 to n ) on S n and ( X i , i from 1 to n ) on E , the projection from Q = (1, 0, 0, ..., 0) ∈ S n is given by {\displaystyle X_{i}={\frac {x_{i}}{1-x_{0}}}\quad (i=1,\dots ,n).} Defining {\displaystyle s^{2}=\sum _{j=1}^{n}X_{j}^{2}={\frac {1+x_{0}}{1-x_{0}}},} the inverse is given by {\displaystyle x_{0}={\frac {s^{2}-1}{s^{2}+1}}\quad {\text{and}}\quad x_{i}={\frac {2X_{i}}{s^{2}+1}}\quad (i=1,\dots ,n).} Still more generally, suppose that S is a (nonsingular) quadric hypersurface in the projective space P n +1 . In other words, S is the locus of zeros of a non-singular quadratic form f ( x 0 , ..., x n +1 ) in the homogeneous coordinates x i . Fix any point Q on S and a hyperplane E in P n +1 not containing Q . Then the stereographic projection of a point P in S − { Q } is the unique point of intersection of QP with E . As before, the stereographic projection is conformal and invertible on a non-empty Zariski open set. The stereographic projection presents the quadric hypersurface as a rational hypersurface . [ 15 ] This construction plays a role in algebraic geometry and conformal geometry . The first stereographic projection defined in the preceding section sends the "south pole" (0, 0, −1) of the unit sphere to (0, 0), the equator to the unit circle , the southern hemisphere to the region inside the circle, and the northern hemisphere to the region outside the circle. The projection is not defined at the projection point N = (0, 0, 1). Small neighborhoods of this point are sent to subsets of the plane far away from (0, 0). The closer P is to (0, 0, 1), the more distant its image is from (0, 0) in the plane. For this reason it is common to speak of (0, 0, 1) as mapping to "infinity" in the plane, and of the sphere as completing the plane by adding a point at infinity . This notion finds utility in projective geometry and complex analysis. On a merely topological level, it illustrates how the sphere is homeomorphic to the one-point compactification of the plane. In Cartesian coordinates, a point P ( x , y , z ) on the sphere and its image P ′ ( X , Y ) on the plane are either both rational points or neither of them is. Stereographic projection is conformal, meaning that it preserves the angles at which curves cross each other (see figures). On the other hand, stereographic projection does not preserve area; in general, the area of a region of the sphere does not equal the area of its projection onto the plane. The area element is given in ( X , Y ) coordinates by {\displaystyle dA={\frac {4}{(1+X^{2}+Y^{2})^{2}}}\,dX\,dY.} Along the unit circle, where X 2 + Y 2 = 1 , there is no inflation of area in the limit, giving a scale factor of 1. Near (0, 0) areas are inflated by a factor of 4, and near infinity areas are inflated by arbitrarily small factors. The metric is given in ( X , Y ) coordinates by {\displaystyle ds^{2}={\frac {4}{(1+X^{2}+Y^{2})^{2}}}\,(dX^{2}+dY^{2}),} and is the formula found in Bernhard Riemann 's Habilitationsschrift on the foundations of geometry, delivered at Göttingen in 1854, and entitled Über die Hypothesen welche der Geometrie zu Grunde liegen .
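The conformality and area-element claims above can be checked numerically; the sketch below (mine, not from the article) computes the Jacobian of the inverse projection by central finite differences and confirms that the pulled-back metric is a multiple of the identity with the stated scale factor:

```python
import numpy as np

def unproject(X, Y):
    """Inverse stereographic projection onto the unit sphere."""
    s2 = X * X + Y * Y
    return np.array([2 * X, 2 * Y, s2 - 1]) / (1 + s2)

def jacobian(X, Y, h=1e-6):
    # Columns are the partial derivatives of the sphere point w.r.t. X and Y.
    dX = (unproject(X + h, Y) - unproject(X - h, Y)) / (2 * h)
    dY = (unproject(X, Y + h) - unproject(X, Y - h)) / (2 * h)
    return np.column_stack([dX, dY])

X, Y = 0.7, -0.3                       # arbitrary point of the plane
J = jacobian(X, Y)
G = J.T @ J                            # first fundamental form pulled back to the plane
scale = 4 / (1 + X**2 + Y**2) ** 2     # ds^2 = scale * (dX^2 + dY^2)
print(np.allclose(G, scale * np.eye(2), atol=1e-6))   # True: conformal, correct scale
```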
No map from the sphere to the plane can be both conformal and area-preserving. If it were, then it would be a local isometry and would preserve Gaussian curvature . The sphere and the plane have different Gaussian curvatures, so this is impossible. Circles on the sphere that do not pass through the point of projection are projected to circles on the plane. [ 16 ] [ 17 ] Circles on the sphere that do pass through the point of projection are projected to straight lines on the plane. These lines are sometimes thought of as circles through the point at infinity, or circles of infinite radius. These properties can be verified by using the expressions of x , y , z {\displaystyle x,y,z} in terms of X {\displaystyle X} and Y {\displaystyle Y} given in § First formulation : using these expressions for a substitution in the equation a x + b y + c z − d = 0 {\displaystyle ax+by+cz-d=0} of the plane containing a circle on the sphere, and clearing denominators, one gets the equation of a circle, that is, a second-degree equation with ( c − d ) ( X 2 + Y 2 ) {\displaystyle (c-d)(X^{2}+Y^{2})} as its quadratic part. The equation becomes linear if c = d , {\displaystyle c=d,} that is, if the plane passes through the point of projection. All lines in the plane, when transformed to circles on the sphere by the inverse of stereographic projection, meet at the projection point. Parallel lines, which do not intersect in the plane, are transformed to circles tangent at the projection point. Intersecting lines are transformed to circles that intersect transversally at two points in the sphere, one of which is the projection point. (Similar remarks hold about the real projective plane , but the intersection relationships are different there.) The loxodromes of the sphere map to curves on the plane of the form {\displaystyle R=e^{\Theta /a},} where the parameter a measures the "tightness" of the loxodrome. Thus loxodromes correspond to logarithmic spirals . These spirals intersect radial lines in the plane at equal angles, just as the loxodromes intersect meridians on the sphere at equal angles. The stereographic projection relates to the plane inversion in a simple way. Let P and Q be two points on the sphere with projections P ′ and Q ′ on the plane. Then P ′ and Q ′ are inversive images of each other in the image of the equatorial circle if and only if P and Q are reflections of each other in the equatorial plane. In other words, if P is a point on the sphere with projection P ′ , and P″ is the projection of the reflection of P in the equatorial plane, then P ′ and P″ are inversive images of each other in the unit circle. Stereographic projection plots can be carried out by a computer using the explicit formulas given above. However, for graphing by hand these formulas are unwieldy. Instead, it is common to use graph paper designed specifically for the task. This special graph paper is called a stereonet or Wulff net , after the Russian mineralogist George (Yuri Viktorovich) Wulff . [ 18 ] The Wulff net shown here is the stereographic projection of the grid of parallels and meridians of a hemisphere centred at a point on the equator (such as the Eastern or Western hemisphere of a planet). In the figure, the area-distorting property of the stereographic projection can be seen by comparing a grid sector near the center of the net with one at the far right or left. The two sectors have equal areas on the sphere. On the disk, the latter has nearly four times the area of the former. If the grid is made finer, this ratio approaches exactly 4. On the Wulff net, the images of the parallels and meridians intersect at right angles. This orthogonality property is a consequence of the angle-preserving property of the stereographic projection. (However, the angle-preserving property is stronger than this property.
Not all projections that preserve the orthogonality of parallels and meridians are angle-preserving.) For an example of the use of the Wulff net, imagine two copies of it on thin paper, one atop the other, aligned and tacked at their mutual center. Let P be the point on the lower unit hemisphere whose spherical coordinates are (140°, 60°) and whose Cartesian coordinates are (0.321, 0.557, −0.766). This point lies on a line oriented 60° counterclockwise from the positive x -axis (or 30° clockwise from the positive y -axis) and 50° below the horizontal plane z = 0 . Once these angles are known, P can be plotted on the net in four steps. To plot other points, whose angles are not such round numbers as 60° and 50°, one must visually interpolate between the nearest grid lines. It is helpful to have a net with finer spacing than 10°. Spacings of 2° are common. To find the central angle between two points on the sphere based on their stereographic plot, overlay the plot on a Wulff net and rotate the plot about the center until the two points lie on or near a meridian. Then measure the angle between them by counting grid lines along that meridian. Although any stereographic projection misses one point on the sphere (the projection point), the entire sphere can be mapped using two projections from distinct projection points. In other words, the sphere can be covered by two stereographic parametrizations (the inverses of the projections) from the plane. The parametrizations can be chosen to induce the same orientation on the sphere. Together, they describe the sphere as an oriented surface (or two-dimensional manifold ). This construction has special significance in complex analysis. The point ( X , Y ) in the real plane can be identified with the complex number ζ = X + i Y . The stereographic projection from the north pole onto the equatorial plane is then {\displaystyle \zeta ={\frac {x+iy}{1-z}},\qquad (x,y,z)=\left({\frac {2\operatorname {Re} \zeta }{1+|\zeta |^{2}}},{\frac {2\operatorname {Im} \zeta }{1+|\zeta |^{2}}},{\frac {|\zeta |^{2}-1}{1+|\zeta |^{2}}}\right).} Similarly, letting ξ = X − i Y be another complex coordinate, the functions {\displaystyle \xi ={\frac {x-iy}{1+z}},\qquad (x,y,z)=\left({\frac {2\operatorname {Re} \xi }{1+|\xi |^{2}}},{\frac {-2\operatorname {Im} \xi }{1+|\xi |^{2}}},{\frac {1-|\xi |^{2}}{1+|\xi |^{2}}}\right)} define a stereographic projection from the south pole onto the equatorial plane. The transition maps between the ζ - and ξ -coordinates are then ζ = 1/ ξ and ξ = 1/ ζ , with ζ approaching 0 as ξ goes to infinity, and vice versa . This facilitates an elegant and useful notion of infinity for the complex numbers and indeed an entire theory of meromorphic functions mapping to the Riemann sphere . The standard metric on the unit sphere agrees with the Fubini–Study metric on the Riemann sphere. The set of all lines through the origin in three-dimensional space forms a space called the real projective plane . This plane is difficult to visualize, because it cannot be embedded in three-dimensional space. However, one can visualize it as a disk, as follows. Any line through the origin intersects the southern hemisphere z ≤ 0 in a point, which can then be stereographically projected to a point on a disk in the XY plane. Horizontal lines through the origin intersect the southern hemisphere in two antipodal points along the equator, which project to the boundary of the disk. Either of the two projected points can be considered part of the disk; it is understood that antipodal points on the equator represent a single line in 3 space and a single point on the boundary of the projected disk (see quotient topology ). So any set of lines through the origin can be pictured as a set of points in the projected disk.
But the boundary points behave differently from the boundary points of an ordinary 2-dimensional disk, in that any one of them is simultaneously close to interior points on opposite sides of the disk (just as two nearly horizontal lines through the origin can project to points on opposite sides of the disk). Also, every plane through the origin intersects the unit sphere in a great circle, called the trace of the plane. This circle maps to a circle under stereographic projection. So the projection lets us visualize planes as circular arcs in the disk. Prior to the availability of computers, stereographic projections with great circles often involved drawing large-radius arcs that required use of a beam compass . Computers now make this task much easier. Further associated with each plane is a unique line, called the plane's pole , that passes through the origin and is perpendicular to the plane. This line can be plotted as a point on the disk just as any line through the origin can. So the stereographic projection also lets us visualize planes as points in the disk. For plots involving many planes, plotting their poles produces a less-cluttered picture than plotting their traces. This construction is used to visualize directional data in crystallography and geology, as described below. Stereographic projection is also applied to the visualization of polytopes . In a Schlegel diagram , an n -dimensional polytope in R n +1 is projected onto an n -dimensional sphere, which is then stereographically projected onto R n . The reduction from R n +1 to R n can make the polytope easier to visualize and understand. In elementary arithmetic geometry , stereographic projection from the unit circle provides a means to describe all primitive Pythagorean triples . Specifically, stereographic projection from the north pole (0,1) onto the x -axis gives a one-to-one correspondence between the rational number points ( x , y ) on the unit circle (with y ≠ 1 ) and the rational points of the x -axis. If ( m / n , 0) is a rational point on the x -axis, then its inverse stereographic projection is the point {\displaystyle \left({\frac {2mn}{m^{2}+n^{2}}},{\frac {m^{2}-n^{2}}{m^{2}+n^{2}}}\right),} which gives Euclid's formula for a Pythagorean triple; a code sketch of this correspondence follows at the end of this section. The pair of trigonometric functions (sin x , cos x ) can be thought of as parametrizing the unit circle. The stereographic projection gives an alternative parametrization of the unit circle: {\displaystyle (\sin x,\cos x)=\left({\frac {2t}{1+t^{2}}},{\frac {1-t^{2}}{1+t^{2}}}\right),\qquad t=\tan {\frac {x}{2}}.} Under this reparametrization, the length element dx of the unit circle goes over to {\displaystyle dx={\frac {2\,dt}{1+t^{2}}}.} This substitution can sometimes simplify integrals involving trigonometric functions. The fundamental problem of cartography is that no map from the sphere to the plane can accurately represent both angles and areas. In general, area-preserving map projections are preferred for statistical applications, while angle-preserving (conformal) map projections are preferred for navigation . Stereographic projection falls into the second category. When the projection is centered at the Earth's north or south pole, it has additional desirable properties: It sends meridians to rays emanating from the origin and parallels to circles centered at the origin. The stereographic is the only projection that maps all circles on a sphere to circles on a plane . This property is valuable in planetary mapping where craters are typical features. Circles passing through the point of projection have unbounded radius, and therefore degenerate into lines.
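Here is the promised sketch (my own, not from the article) of the Pythagorean-triple correspondence: inverse-projecting the rational point m/n of the x-axis onto the unit circle and clearing the common denominator m² + n² yields Euclid's formula:

```python
from math import gcd

def triple_from_rational(m: int, n: int):
    """Euclid's formula, read off the inverse projection of (m/n, 0)."""
    # The projected point is (2mn/(m^2+n^2), (m^2-n^2)/(m^2+n^2)).
    return 2 * m * n, m * m - n * n, m * m + n * n

for m, n in [(2, 1), (3, 2), (4, 1), (4, 3)]:
    if gcd(m, n) == 1 and (m - n) % 2 == 1:   # conditions for a primitive triple
        a, b, c = triple_from_rational(m, n)
        assert a * a + b * b == c * c
        print((a, b, c))   # (4, 3, 5), (12, 5, 13), (8, 15, 17), (24, 7, 25)
```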
In crystallography , the orientations of crystal axes and faces in three-dimensional space are a central geometric concern, for example in the interpretation of X-ray and electron diffraction patterns. These orientations can be visualized as in the section Visualization of lines and planes above. That is, crystal axes and poles to crystal planes are intersected with the northern hemisphere and then plotted using stereographic projection. A plot of poles is called a pole figure . In electron diffraction , Kikuchi line pairs appear as bands decorating the intersection between lattice plane traces and the Ewald sphere thus providing experimental access to a crystal's stereographic projection. Model Kikuchi maps in reciprocal space, [ 19 ] and fringe visibility maps for use with bend contours in direct space, [ 20 ] thus act as road maps for exploring orientation space with crystals in the transmission electron microscope . Researchers in structural geology are concerned with the orientations of planes and lines for a number of reasons. The foliation of a rock is a planar feature that often contains a linear feature called lineation . Similarly, a fault plane is a planar feature that may contain linear features such as slickensides . These orientations of lines and planes at various scales can be plotted using the methods of the Visualization of lines and planes section above. As in crystallography, planes are typically plotted by their poles. Unlike crystallography, the southern hemisphere is used instead of the northern one (because the geological features in question lie below the Earth's surface). In this context the stereographic projection is often referred to as the equal-angle lower-hemisphere projection . The equal-area lower-hemisphere projection defined by the Lambert azimuthal equal-area projection is also used, especially when the plot is to be subjected to subsequent statistical analysis such as density contouring . [ 21 ] The stereographic projection is one of the most widely used methods for evaluating rock slope stability. It allows for the representation and analysis of three-dimensional orientation data in two dimensions. Kinematic analysis within stereographic projection is used to assess the potential for various modes of rock slope failures—such as plane, wedge, and toppling failures—which occur due to the presence of unfavorably oriented discontinuities. [ 22 ] [ 23 ] This technique is particularly useful for visualizing the orientation of rock slopes in relation to discontinuity sets, facilitating the assessment of the most likely failure type. [ 22 ] For instance, plane failure is more likely when the strike of a discontinuity set is parallel to the slope, and the discontinuities dip towards the slope at an angle steep enough to allow sliding, but not steeper than the slope itself. Additionally, some authors have developed graphical methods based on stereographic projection to easily calculate geometrical correction parameters—such as those related to the parallelism between the slope and discontinuities, the dip of the discontinuity, and the relative angle between the discontinuity and the slope—for rock mass classifications in slopes, including slope mass rating (SMR) [ 24 ] and rock mass rating . [ 25 ] Some fisheye lenses use a stereographic projection to capture a wide-angle view. [ 26 ] Compared to more traditional fisheye lenses which use an equal-area projection, areas close to the edge retain their shape, and straight lines are less curved. 
However, stereographic fisheye lenses are typically more expensive to manufacture. [ 27 ] Image remapping software, such as Panotools , allows the automatic remapping of photos from an equal-area fisheye to a stereographic projection. The stereographic projection has been used to map spherical panoramas , starting with Horace Bénédict de Saussure 's in 1779. This results in effects known as a little planet (when the center of projection is the nadir ) and a tube (when the center of projection is the zenith ). [ 28 ] The popularity of using stereographic projections to map panoramas over other azimuthal projections is attributed to the shape preservation that results from the conformality of the projection. [ 28 ]
https://en.wikipedia.org/wiki/Stereographic_projection
In stereochemistry , stereoisomerism , or spatial isomerism , is a form of isomerism in which molecules have the same molecular formula and sequence of bonded atoms (constitution), but differ in the three-dimensional orientations of their atoms in space. [ 1 ] [ 2 ] This contrasts with structural isomers , which share the same molecular formula, but in which the bond connections or their order differ. By definition, molecules that are stereoisomers of each other represent the same structural isomer. [ 3 ] Enantiomers , also known as optical isomers , are two stereoisomers that are related to each other by a reflection: they are mirror images of each other that are non-superposable. Human hands are a macroscopic analog of this. Every stereogenic center in one has the opposite configuration in the other. Two compounds that are enantiomers of each other have the same physical properties, except for the direction in which they rotate polarized light and how they interact with different enantiomers of other compounds. As a result, different enantiomers of a compound may have substantially different biological effects. Pure enantiomers also exhibit the phenomenon of optical activity and can be separated only with the use of a chiral agent. In nature, only one enantiomer of most chiral biological compounds, such as amino acids (except glycine , which is achiral), is present. Enantiomers differ by the direction in which they rotate polarized light: the amount of a chiral compound's optical rotation in the (+) direction is equal to the amount of its enantiomer's rotation in the (–) direction. Diastereomers are stereoisomers not related through a reflection operation. [ 4 ] They are not mirror images of each other. These include meso compounds , cis – trans isomers , E-Z isomers , and non-enantiomeric optical isomers . Diastereomers seldom have the same physical properties. In the example shown below, the meso form of tartaric acid forms a diastereomeric pair with both levo- and dextro-tartaric acids, which form an enantiomeric pair. The forms are named as follows: natural tartaric acid, also called L -tartaric acid, L -(+)-tartaric acid, or levo-tartaric acid; its enantiomer D -tartaric acid, also called D -(−)-tartaric acid or dextro-tartaric acid; the achiral diastereomer meso-tartaric acid; and the 1:1 mixture of the enantiomers, DL -tartaric acid or "racemic acid". The D - and L - labeling of the isomers above is not the same as the d - and l - labeling more commonly seen, explaining why these may appear reversed to those familiar with only the latter naming convention. A Fischer projection can be used to differentiate between L - and D - molecules (see Chirality (chemistry) ). For instance, by definition, in a Fischer projection the penultimate carbon of a D -sugar is depicted with hydrogen on the left and hydroxyl on the right, while an L -sugar is shown with the hydrogen on the right and the hydroxyl on the left. The d - and l - labeling, by contrast, refers to optical rotation : when looking toward the source of light, the rotation of the plane of polarization may be either to the right (dextrorotary or d-rotary, represented by (+), clockwise) or to the left (levorotary or l-rotary, represented by (−), counter-clockwise), depending on which stereoisomer is dominant. For instance, sucrose and camphor are d-rotary whereas cholesterol is l-rotary. Stereoisomerism about double bonds arises because rotation about the double bond is restricted, keeping the substituents fixed relative to each other. [ 5 ] If the two substituents on at least one end of a double bond are the same, then there is no stereoisomer and the double bond is not a stereocenter, e.g.
propene, CH 3 CH=CH 2 where the two substituents at one end are both H. [ 6 ] Traditionally, double bond stereochemistry was described as either cis (Latin, on this side) or trans (Latin, across), in reference to the relative position of substituents on either side of a double bond. A simple example of cis – trans isomerism is the 1,2-disubstituted ethenes, like the dichloroethene (C 2 H 2 Cl 2 ) isomers shown below. [ 7 ] Molecule I is cis -1,2-dichloroethene and molecule II is trans -1,2-dichloroethene. Due to occasional ambiguity, IUPAC adopted a more rigorous system wherein the substituents at each end of the double bond are assigned priority based on their atomic number . If the high-priority substituents are on the same side of the bond, it is assigned Z (Ger. zusammen , together). If they are on opposite sides, it is E (Ger. entgegen , opposite). [ 8 ] Since chlorine has a larger atomic number than hydrogen, it is the highest-priority group. [ 9 ] Using this notation to name the above pictured molecules, molecule I is ( Z )-1,2-dichloroethene and molecule II is ( E )-1,2-dichloroethene. It is not the case that Z and cis , or E and trans , are always interchangeable. Consider the following fluoromethylpentene: The proper name for this molecule is either trans -2-fluoro-3-methylpent-2-ene because the alkyl groups that form the backbone chain (i.e., methyl and ethyl) reside across the double bond from each other, or ( Z )-2-fluoro-3-methylpent-2-ene because the highest-priority groups on each side of the double bond are on the same side of the double bond. Fluoro is the highest-priority group on the left side of the double bond, and ethyl is the highest-priority group on the right side of the molecule. The terms cis and trans are also used to describe the relative position of two substituents on a ring; cis if on the same side, otherwise trans . [ 10 ] [ 11 ] Conformational isomerism is a form of isomerism that describes the phenomenon of molecules with the same structural formula but with different shapes due to rotations about one or more bonds. [ 12 ] [ 13 ] Different conformations can have different energies, can usually interconvert, and are very rarely isolatable. For example, cyclohexane (an essential intermediate in the synthesis of nylon-6,6) exists in a variety of conformations , including a chair conformation , where four of the carbon atoms form the "seat" of the chair, one carbon atom is the "back" of the chair, and one carbon atom is the "foot rest", and a boat conformation . The boat conformation represents the energy maximum on a conformational itinerary between the two equivalent chair forms; however, it does not represent the transition state for this process, because there are lower-energy pathways. The conformational inversion of substituted cyclohexanes is a very rapid process at room temperature, with a half-life of 0.00001 seconds. [ 14 ] There are some molecules that can be isolated in several conformations, due to the large energy barriers between different conformations. 2,2',6,6'-Tetrasubstituted biphenyls can fit into this latter category. Anomerism occurs in single-bonded ring structures when the "cis"/"Z" or "trans"/"E" descriptors of geometric isomerism must name substituents on a carbon atom that is also a center of chirality; anomers therefore have one or more ring carbons that display both geometric isomerism and optical isomerism ( enantiomerism ).
[ 15 ] [ 16 ] Anomers are named "alpha" (or "axial") and "beta" (or "equatorial") when substituents are placed on a cyclic ring structure that has single bonds between the ring carbon atoms; typical single-bond substituents include a hydroxyl group, a hydroxymethyl group, a methoxy group, or another pyranose or furanose group, though the possibilities are not limited to these. [ 17 ] An axial substituent is perpendicular (90 degrees) to the reference plane of the ring, while an equatorial substituent lies 120 degrees away from the axial bond, i.e. it deviates only about 30 degrees from the reference plane. [ 18 ] Atropisomers are stereoisomers resulting from hindered rotation about single bonds, where the steric strain barrier to rotation is high enough to allow for the isolation of the conformers. [ 19 ] The Le Bel–van't Hoff rule states that for a structure with n asymmetric carbon atoms, there is a maximum of 2^n different stereoisomers possible. As an example, D -glucose is an aldohexose and has the formula C 6 H 12 O 6 . Four of its six carbon atoms are stereogenic, which means D -glucose is one of 2^4 = 16 possible stereoisomers. [ 20 ] [ 21 ]
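The 2^n bound of the Le Bel–van't Hoff rule is easy to make concrete in code. The following minimal Python sketch (illustrative only; it deliberately ignores internal symmetry, so a meso compound would have fewer distinct stereoisomers than the bound suggests) enumerates the R/S descriptor combinations for n stereocenters:

from itertools import product

def stereoisomer_labels(n):
    """Enumerate the 2**n R/S descriptor combinations for n stereocenters
    (the Le Bel-van't Hoff upper bound; meso compounds reduce the count)."""
    return ["".join(combo) for combo in product("RS", repeat=n)]

# D-glucose has four stereogenic carbons -> at most 2**4 = 16 stereoisomers.
labels = stereoisomer_labels(4)
print(len(labels))    # 16
print(labels[:3])     # ['RRRR', 'RRRS', 'RRSR']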
https://en.wikipedia.org/wiki/Stereoisomerism
Stereology is the three-dimensional interpretation of two-dimensional cross sections of materials or tissues. It provides practical techniques for extracting quantitative information about a three-dimensional material from measurements made on two-dimensional planar sections of the material. Stereology is a method that utilizes random, systematic sampling to provide unbiased and quantitative data. It is an important and efficient tool in many applications of microscopy (such as petrography , materials science , and biosciences including histology , bone and neuroanatomy ). Stereology is a developing science with many important innovations being developed mainly in Europe. [ citation needed ] New innovations such as the proportionator continue to make important improvements in the efficiency of stereological procedures. In addition to two-dimensional plane sections, stereology also applies to three-dimensional slabs (e.g. 3D microscope images), one-dimensional probes (e.g. needle biopsy), projected images, and other kinds of 'sampling'. It is especially useful when the sample has a lower spatial dimension than the original material. Hence, stereology is often defined as the science of estimating higher-dimensional information from lower-dimensional samples. Stereology is based on fundamental principles of geometry (e.g. Cavalieri's principle ) and statistics (mainly survey sampling inference). It is a completely different approach from computed tomography . Classical applications of stereology include the estimation of relative densities such as volume fraction, surface area per unit volume, and length per unit volume. The popular science fact that the human lungs have a surface area (of gas exchange surface) equivalent to a tennis court (75 square meters) was obtained by stereological methods. Similarly for statements about the total length of nerve fibres, capillaries etc. in the human body. The word stereology was coined in 1961 and defined as 'the spatial interpretation of sections'. This reflects the founders' idea that stereology also offers insights and rules for the qualitative interpretation of sections. Stereologists have helped to detect many fundamental scientific errors arising from the misinterpretation of plane sections; such errors are surprisingly common. Stereology is a completely different enterprise from computed tomography . A computed tomography algorithm effectively reconstructs the complete internal three-dimensional geometry of an object, given a complete set of all plane sections through it (or equivalent X-ray data). On the contrary, stereological techniques require only a few 'representative' plane sections, from which they statistically extrapolate the three-dimensional material. Stereology exploits the fact that some 3-D quantities can be determined without 3-D reconstruction: for example, the 3-D volume of any object can be determined from the 2-D areas of its plane sections, without reconstructing the object. (This means that stereology only works for certain quantities like volume, and not for other quantities.) In addition to using geometrical facts, stereology applies statistical principles to extrapolate three-dimensional shapes from plane section(s) of a material. [ 1 ] The statistical principles are the same as those of survey sampling (used to draw inferences about a human population from an opinion poll, etc.). Statisticians regard stereology as a form of sampling theory for spatial populations.
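The claim that volume can be determined from section areas without 3-D reconstruction can be checked numerically. The following Python sketch is a minimal, self-contained illustration on a synthetic ball-in-a-cube example (not part of any stereology software package): by the Delesse principle, the average area fraction that a phase occupies on uniform random sections is an unbiased estimator of its volume fraction.

import math, random

def volume_fraction_by_sections(radius=0.3, n_sections=100_000, seed=0):
    """Estimate the volume fraction of a ball centered in a unit cube by
    averaging the area fraction it occupies on uniform random horizontal
    sections (Delesse principle: area fraction estimates volume fraction)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sections):
        z = rng.random()            # uniform random section height in [0, 1)
        d = abs(z - 0.5)            # distance from the ball's center plane
        if d < radius:              # the section cuts the ball in a disk
            total += math.pi * (radius ** 2 - d ** 2)   # disk area on the unit-square section
    return total / n_sections

estimate = volume_fraction_by_sections()
exact = 4.0 / 3.0 * math.pi * 0.3 ** 3      # exact ball volume within the unit cube
print(round(estimate, 4), round(exact, 4))  # both close to 0.1131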
To extrapolate from a few plane sections to the three-dimensional material, essentially the sections must be 'typical' or 'representative' of the entire material. There are basically two ways to ensure this: either by assuming that the material is homogeneous, or by randomizing the sampling design. The first approach is the one that was used in classical stereology. Extrapolation from the sample to the 3-D material depends on the assumption that the material is homogeneous. This effectively postulates a statistical model of the material. This method of sampling is referred to as model-based sampling inference. The second approach is the one typically used in modern stereology. Instead of relying on model assumptions about the three-dimensional material, we take our sample of plane sections by following a randomized sampling design, for example, choosing a random position at which to start cutting the material. Extrapolation from the sample to the 3-D material is valid because of the randomness of the sampling design, so this is called design-based sampling inference. Design-based stereological methods can be applied to materials which are inhomogeneous or cannot be assumed to be homogeneous. These methods have gained increasing popularity in the biomedical sciences, especially in lung-, kidney-, bone-, cancer- and neuro-science. Many of these applications are directed toward determining the number of elements in a particular structure, e.g. the total number of neurons in the brain. Many classical stereological techniques, in addition to assuming homogeneity, also involved mathematical modeling of the geometry of the structures under investigation. These methods are still popular in materials science, metallurgy and petrology, where shapes of e.g. crystals may be modelled as simple geometrical objects. Such geometrical models make it possible to extract additional information (including numbers of crystals). However, they are extremely sensitive to departures from the assumptions. In the classical examples listed above, the target quantities were relative densities: volume fraction, surface area per unit volume, and length per unit volume. Often we are more interested in total quantities such as the total surface area of the lung's gas exchange surface, or the total length of capillaries in the brain. Relative densities are also problematic because, unless the material is homogeneous, they depend on the unambiguous definition of the reference volume. Sampling principles also make it possible to estimate total quantities such as the total surface area of the lung. Using techniques such as systematic sampling and cluster sampling we can effectively sample a fixed fraction of the entire material (without the need to delineate a reference volume). This allows us to extrapolate from the sample to the entire material, to obtain estimates of total quantities such as the absolute surface area of the lung and the absolute number of cells in the brain. The primary scientific journals for stereology are Image Analysis & Stereology (formerly Acta Stereologica ) and Journal of Microscopy .
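A concrete design-based estimator of a total quantity is the Cavalieri estimator: cut parallel sections at a fixed spacing starting from a uniform random offset, sum the section areas, and multiply by the spacing. A minimal sketch on a synthetic example with a known answer (again illustrative, not a library API):

import math, random

def cavalieri_volume(section_area, spacing, z_min, z_max, seed=1):
    """Cavalieri estimator: sum the areas of systematically spaced parallel
    sections (uniform random start) and multiply by the spacing. Unbiased
    for the true volume by Cavalieri's principle plus design-based sampling."""
    rng = random.Random(seed)
    z = z_min + rng.random() * spacing    # uniform random offset in [0, spacing)
    total_area = 0.0
    while z < z_max:
        total_area += section_area(z)
        z += spacing
    return total_area * spacing

# Example: ball of radius 1 centered at z = 0; the section at height z is a disk.
area = lambda z: math.pi * max(0.0, 1.0 - z * z)
est = cavalieri_volume(area, spacing=0.1, z_min=-1.0, z_max=1.0)
print(round(est, 4))   # close to the exact volume 4*pi/3 = 4.1888

The uniform random start is what makes the estimator unbiased regardless of the object's shape, which is exactly the design-based idea described above.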
https://en.wikipedia.org/wiki/Stereology
Stereopsis recovery , also recovery from stereoblindness , is the phenomenon of a stereoblind person gaining partial or full ability of stereo vision ( stereopsis ). Recovering stereo vision as far as possible has long been established as an approach to the therapeutic treatment of stereoblind patients. Treatment aims to recover stereo vision in very young children, as well as in patients who had acquired but lost their ability for stereopsis due to a medical condition. In contrast, this aim has normally not been present in the treatment of those who missed out on learning stereopsis during their first few years of life. In fact, the acquisition of binocular and stereo vision was long thought to be impossible unless the person acquired this skill during a critical period in infancy and early childhood. [ 1 ] This hypothesis long went unquestioned and formed the basis for the therapeutic approaches to binocular disorders for decades. It has been put in doubt in recent years. In particular, since studies on stereopsis recovery began to appear in scientific journals and it became publicly known that neuroscientist Susan R. Barry achieved stereopsis well into adulthood, that assumption is in retrospect considered to have held the status of a scientific dogma . [ 2 ] [ 3 ] [ 4 ] [ 5 ] Very recently, there has been a rise in scientific investigations into stereopsis recovery in adults and youths who have had no stereo vision before. While it has now been shown that an adult may gain stereopsis, it is not yet possible to predict how likely a stereoblind person is to do so, nor is there general agreement on the best therapeutic procedure. The possible implications for the treatment of children with infantile esotropia are also still under study. In cases of acquired strabismus with double vision (diplopia), it is the long-established state of the art to aim at curing the double vision and at the same time recovering a patient's earlier ability for stereo vision. For example, a patient may have had full stereo vision but later had diplopia due to a medical condition, losing stereo vision. In this case, medical interventions, including vision therapy and strabismus surgery , may remove the double vision and recover the stereo vision which had temporarily been absent in the patient. Also, when children with congenital (infantile) strabismus (e.g. infantile esotropia ) receive strabismus surgery within the first few years of their life, this goes along with the hope that they may yet develop their full potential for binocular vision including stereopsis. In contrast, in a case where a child's eyes are straightened surgically after the age of about five or six years and the child had no opportunity to develop stereo vision in early childhood, normally the clinical expectation is that this intervention will lead to cosmetic improvements but not to stereo vision. Conventionally, no follow-up for stereopsis was performed in such cases. For instance, one author summarized the accepted scientific view of the time with the words: "Stereopsis will never be obtained unless amblyopia is treated, the eyes are aligned, and binocular fusion and function are achieved before the critical period for stereopsis ends. Clinical data suggest that this occurs before 24 months of age,[...] but we do not know exactly when it occurs, because crucial pieces of basic science information are missing."
[ 6 ] For illustration, a book of doctors' handouts for patients, written for the general public and published in 2002, summarizes the limitations as they were then fully accepted as the medical state of the art: "If an adult has a childhood strabismus that was never treated, it is too late to improve any amblyopia or depth perception , so the goal may be simply cosmetic – to make the eyes appear to be properly aligned – though sometimes treatment does enlarge the extent of side vision." [ 7 ] It has only been accepted very recently that the therapeutic approach was based on an unquestioned notion that has, since, been referred to as "myth" or "dogma". [ 5 ] Recently, however, stereopsis recovery is known to have occurred in a number of adults. While this has in some cases occurred after visual exercises or spontaneous visual experiences, recently the medical community's view of strabismus surgery has also become more optimistic with regard to outcomes in terms of binocular function and possibly stereopsis. [ 8 ] As one author states: [ 9 ] The majority of adults will experience some improvement in binocular function after strabismus surgery even if the strabismus has been longstanding. Most commonly this takes the form of an expansion of binocular visual fields; however, some patients may also regain stereopsis. Scientific investigations on residual neural plasticity in adulthood now also include studies on the recovery of stereopsis. It is now a matter of active scientific investigation under which conditions and to what degree binocular fusion and stereo vision can be acquired in adulthood, especially if the person is not known to have had any preceding experience of stereo vision, and how outcomes may depend on the patient's history of therapeutic interventions. Stereopsis recovery has been reported to have occurred in a few adults as a result of either medical treatments including strabismus surgery and vision therapy , or spontaneously after a stereoscopic 3D cinema experience. The most renowned case of regained stereopsis is that of neuroscientist Susan R. Barry , who had alternating infantile esotropia with diplopia but no amblyopia, had undergone three surgical corrections in childhood without achieving binocular vision at the time, and recovered from stereoblindness in adulthood after vision therapy with optometrist Theresa Ruggiero. Barry's case has been reported on by neurologist Oliver Sacks . [ 10 ] David H. Hubel , winner of the 1981 Nobel Prize in Physiology or Medicine with Torsten Wiesel for their discoveries concerning information processing in the visual system, also commented positively on her case. [ 11 ] In 2009, Barry published the book Fixing My Gaze: A Scientist's Journey into Seeing in Three Dimensions , reporting on her own and several other cases of stereopsis recovery. [ 2 ] In her book, Susan Barry gives a detailed description of her surprise, elation and subsequent experiences when her stereo vision suddenly set in. Hubel wrote of her book: "It has been widely thought that an adult, cross-eyed since infancy, could never acquire stereovision, but to everyone's surprise, Barry succeeded. In Fixing my Gaze , she describes how wonderful it was to have, step-by-step, this new 3D world revealed to her. And as a neurobiologist she is able to discuss the science as an expert, in simple language."
Her book includes reports of further persons who have had similar experiences with stereopsis recovery. Barry cites the personal experiences of several persons, including a man, an artist, who described his experience of seeing with stereopsis by saying that he could see "one hundred times more negative space ", [ 12 ] a woman who had been amblyopic before seeing in 3D and described how empty space now "looks and feels palpable, tangible—alive!", [ 13 ] a woman who had been strabismic since age two and saw in 3D after taking vision therapy and stated that "The coolest thing is the feeling you get being 'in the dimension ' ", [ 14 ] a woman who felt quite alarmed at the experience of suddenly seeing roadside trees and signs looming towards her, [ 15 ] and two women who experienced an abrupt onset of stereo vision with a wide-angled view of the world, the first stating: "I was able to take in so much more of the room than I did before" and the second: "It was very dramatic as my peripheral vision suddenly filled in on both sides". [ 16 ] Common to Barry and at least one person on whom she reported is the finding that their mental representation of space also changed after they acquired stereo vision: even with one eye closed, they feel that they see "more" than they did with one eye closed before recovering stereopsis. [ 16 ] Apart from Barry, another formerly stereoblind adult whose acquired ability for stereopsis has received media attention is neuroscientist Bruce Bridgeman, professor of psychology and psychobiology at University of California Santa Cruz , who had grown up nearly stereoblind and acquired stereo vision spontaneously in 2012 at the age of 67, when watching the 3D movie Hugo with polarizing 3D glasses. The scene suddenly appeared to him in depth, and the ability to see the world in stereo stayed with him after leaving the cinema. [ 17 ] [ 18 ] [ 19 ] Michael Thomas has described the experience of instantaneous onset of three-dimensional vision at the age of 69 in a public Facebook post. [ 20 ] There is a growing recent body of scientific literature on investigations into the recovery of stereopsis in adults, which started to appear shortly before Oliver Sacks' The New Yorker publication [ 10 ] drew public attention to Barry's discovery. A number of scientific publications have systematically assessed patients' post-surgical stereopsis, [ 21 ] [ 22 ] [ 23 ] whereas other studies have investigated the effects of eye training procedures. [ 24 ] [ 25 ] [ 26 ] Certain conditions are known to be a prerequisite for stereo vision, for instance, that the amount of horizontal deviation, if any is present, needs to be small. [ 27 ] In several studies it has been recognized that surgery to correct strabismus can have the effect of improving binocular function. [ 28 ] [ 29 ] One of these studies, published in 2003, explicitly concluded: "We found that improvement in binocularity, including stereopsis, can be obtained in a substantial portion of adults." [ 29 ] That article was published together with a discussion of the results among peers in which the scientific and social implications of the medical treatment were addressed, for example concerning the long-term relevancy of stereopsis, the importance of avoiding diplopia, the necessity of predictable outcomes, and psychosocial and socioeconomic relevance .
[ 29 ] Among the investigations into post-surgical stereopsis is a publication of 2005 that reported on a total of 43 adults over 18 years of age who had surgical correction after having lived with constant horizontal strabismus for more than 10 years, with no previous surgery or stereopsis, and with visual acuity of 20/40 or better also in the deviating eye; in this group, stereopsis was present in 80% of exotropes and 31% of esotropes, with the recovery of stereopsis and stereoacuity being uncorrelated with the number of years the deviation had persisted. [ 21 ] A study published in 2006 included, besides an extensive review of investigations on stereopsis recovery over the preceding decades, a re-evaluation of all those patients who had had congenital or early-onset strabismus with a large constant horizontal divergence and had undergone strabismus surgery in the years 1997–1999 in a given clinic, excluding those who had a history of neurologic or systemic diseases or organic retinal diseases. Among the resulting 36 subjects aged 6–30 years, many had regained binocular vision (56% according to an evaluation with Bagolini striated glasses , 39% with the Titmus test, 33% with the Worth 4-dot test, and 22% with the Random dot E test) and 57% had stereoacuity of 200 seconds of arc or better, leading to the conclusion that some degree of stereopsis can be achieved even in cases of infantile or early-childhood strabismus. [ 22 ] Another study found that some chronically strabismic adults with good vision could recover fusion and stereopsis by means of surgical alignment. [ 23 ] In contrast, in a study of a group of 17 adults and older children of at least 8 years of age, all of whom received strabismus surgery and post-operative evaluation after long-standing untreated infantile esotropia , most showed binocular fusion when tested with Bagolini lenses and an increased visual field , but none demonstrated stereo fusion or stereopsis. [ 30 ] Stereoacuity is limited by the visual acuity of the eyes, and in particular by the visual acuity of the weaker eye. That is, the more a patient's vision in either of the two eyes is degraded compared to the 20/20 vision standard, the lower the prospects of improving or re-gaining stereo vision, unless visual acuity itself were improved by other means. Strabismus surgery itself does not improve visual acuity. Orthoptic exercises have proven to be effective for reducing symptoms in patients with convergence insufficiency and decompensating exophoria by improving the near-point convergence of the eyes that is necessary for binocular fusion. [ 31 ] Experiments on monkeys, published in 2007, revealed improvements in stereoacuity in monkeys who, after having been raised with binocular deprivation through prisms for the first two years, were exposed to extensive psychophysical training. Their stereo vision recovered in part, but remained far more limited than that of normally raised monkeys. [ 32 ] Scientists at the University of California, Berkeley have stated that perceptual learning appears to play an important role. [ 33 ] One investigation, published in 2011, reported on a study on human stereopsis recovery using perceptual learning which was inspired by Barry's work. In this study, a small number of subjects who had initially been stereoblind or stereoanomalous recovered stereopsis using perceptual learning exercises.
Alongside the scientific assessment of the extent of recovery, the subjective outcomes are also described: [ 24 ] After achieving stereopsis, our observers reported that the depth "popped out", which they found very helpful and joyful in their everyday life. The anisometropic observer GD noticed "a surge in depth" one day when shopping in a supermarket. While playing table tennis, she feels that she is able to track a ping-pong ball more accurately and therefore can play better. Strabismic observer AB is more confident now when walking down stairs because she can judge the depth of the steps better. Strabismics AB, DP, and LR, are able to enjoy 3D movies for the first time, and strabismic GJ finds it easier to catch a fly ball while playing baseball. In a follow-up study, the authors pointed out that the stereopsis that was recovered following perceptual learning was more limited in resolution and precision compared to normal subjects' stereopsis. [ 25 ] Dennis M. Levi was awarded the 2011 Charles F. Prentice Medal of the American Academy of Optometry for this work. [ 34 ] [ 35 ] [ 36 ] There have been several attempts to make use of modern technology for enhanced binocular eye training, in particular for treating amblyopia and interocular suppression . In some cases these modern techniques have improved patients' stereoacuity. Very early technology-enhanced vision therapy efforts included the cheiroscope , a haploscope in which left- and right-eye images can be blended into view over a drawing pad, and the subject may be given a task such as to reproduce a line image presented to one eye. However, historically these approaches were not developed much further and they were not put to widespread use. Recent systems are based on dichoptic presentation of the elements of a video game or virtual reality such that each eye receives different signals of the virtual world that the player's brain must combine in order to play successfully. One of the earliest systems of this kind was proposed by a research group at the University of Nottingham with the aim of treating amblyopia, using virtual reality masks [ 37 ] [ 38 ] [ 39 ] or commercially available 3D shutter glasses . [ 40 ] The group has also worked to develop perceptual learning training protocols that specifically target the deficit in stereo acuity to allow the recovery of normal stereo function even in adulthood. [ 41 ] Another system of dichoptic presentation for binocular vision therapy has been proposed by researchers of the Research Institute of the McGill University Health Centre. [ 42 ] Using a modified version of the puzzle video game Tetris , [ 43 ] they successfully treated the interocular suppression of patients with amblyopia by dichoptic training in which certain parameters of the training material were systematically adapted during the course of four weeks. Clinical supervision of such procedures is required to ensure that double vision does not occur. Most of the patients who underwent this treatment gained improved visual acuity of the weaker eye, and some also showed increased stereoacuity. [ 44 ] Another study performed at the same institute showed that dichoptic training can be more effective in adults than the more conventional amblyopia treatment of an eye patch . For this investigation, 18 adults played Tetris for one hour each day, half of the group wearing eye patches and the other half playing a dichoptic version of the game.
After two weeks, the group who played dichoptically showed a significant improvement of vision in the weaker eye and in stereopsis acuity; the eye patch group had moderate improvements, which increased substantially after they, too, were given the dichoptic training. [ 45 ] [ 46 ] Dichoptic-based perceptual learning therapy, presented by means of a head-mounted display, is also amenable to amblyopic children, as it improves both the amblyopic eye's visual acuity and the stereo function. [ 47 ] The researchers at McGill University have shown that one to three weeks of playing a dichoptic video game for one to two hours on a hand-held device "can improve acuity and restore binocular function, including stereopsis in adults". [ 48 ] Furthermore, it has been suggested that these effects can be enhanced by anodal transcranial direct current stimulation (tDCS). [ 49 ] [ 50 ] Together with Levi of the University of California, Berkeley, scientists at the University of Rochester have made further developments in terms of virtual reality computer games [ 26 ] [ 51 ] which have shown some promise in improving both monocular and binocular vision in human subjects. Game developer James Blaha, who developed his own crowd-funded version of a dichoptic VR game for the Oculus Rift together with Manish Gupta and is continuing to experiment with the game, experienced stereopsis for the first time using his game. [ 52 ] [ 53 ] In 2011, two cases of adults with anisometropic amblyopia were reported whose visual acuity and stereoacuity improved due to learning-based therapies. [ 54 ] There are indications that the suppression of binocularity in amblyopic subjects is due to a suppression mechanism that prevents the amblyopic brain from learning to see. [ 45 ] [ 46 ] It has been suggested that desuppression and neuroplasticity may be favored by specific conditions that are commonly associated with perceptual learning tasks and video game playing, such as a heightened requirement of attention , a prospect of reward , a feeling of enjoyment and a sense of flow . [ 36 ] [ 55 ] [ 56 ] Health insurers review therapies in terms of clinical effectiveness in view of the existing scientific literature, benefit, risk and cost. Even if individual cases of recovery exist, a treatment is only considered effective from this point of view if there is sufficient likelihood that it will predictably improve outcomes. In this context, the medical coverage policy of the global health services organization Cigna "does not cover vision therapy , optometric training , eye exercises or orthoptics because they are considered experimental, investigational or unproven for any indication including the management of visual disorders and learning disabilities", based on a bibliographic review published by Cigna which concludes that "insufficient evidence exists in the published, peer-reviewed literature to conclude that vision therapy is effective for the treatment of any of the strabismic disorders except preoperative prism adaptation for acquired esotropia". [ 57 ] Similarly, the U.S. managed health care company Aetna offers vision therapy only in contracts with supplemental coverage and limits its prescriptions to a number of conditions that are explicitly specified in a list of vision disorders. [ 58 ]
https://en.wikipedia.org/wiki/Stereopsis_recovery
Stereoscopic spectroscopy is a type of imaging spectroscopy that can extract a few spectral parameters over a complete image plane simultaneously. A stereoscopic spectrograph is similar to a normal spectrograph except that (A) it has no slit, and (B) multiple spectral orders (often including the non-dispersed zero order) are collected simultaneously. [ 1 ] The individual images are blurred by the spectral information present in the original data. The images are recombined using stereoscopic algorithms similar to those used to find ground feature altitudes from parallax in aerial photography . Stereoscopic spectroscopy is a special case of the more general field of tomographic spectroscopy . Both types of imaging use an analogy between the (x, y, λ) data space of imaging spectrographs and the conventional (x, y, z) 3-space of the physical world. Each spectral order in the instrument produces an image plane analogous to the view from a camera with a particular look angle through the (x, y, λ) data space, and recombining the views allows recovery of (some aspects of) the spectrum at every location in the image.
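The parallax analogy can be illustrated with a toy calculation. In two images from opposite spectral orders, a monochromatic feature is displaced by equal and opposite amounts proportional to its wavelength offset, so the "disparity" between the images recovers the wavelength just as stereo disparity recovers depth. The following sketch is a synthetic one-dimensional example with an assumed constant dispersion k (an illustration of the principle, not an actual instrument pipeline):

import numpy as np

# Toy model: a point-like emission feature at pixel x0 whose wavelength offset
# dlam shifts it by +k*dlam in order +1 and by -k*dlam in order -1.
k, x0, dlam = 2.0, 50.0, 3.0
x = np.arange(100)
feature = lambda center: np.exp(-0.5 * (x - center) ** 2)   # Gaussian line profile
order_plus = feature(x0 + k * dlam)
order_minus = feature(x0 - k * dlam)

# Recover the displacement by locating the cross-correlation peak ("disparity").
corr = np.correlate(order_plus, order_minus, mode="full")
shift = np.argmax(corr) - (len(x) - 1)   # total displacement = 2*k*dlam pixels
print(shift / (2 * k))                   # ~3.0, the wavelength offset dlam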
https://en.wikipedia.org/wiki/Stereoscopic_spectroscopy
In chemistry , stereoselectivity [ 1 ] is the property of a chemical reaction in which a single reactant forms an unequal mixture of stereoisomers during a non-stereospecific creation of a new stereocenter or during a non-stereospecific transformation of a pre-existing one. [ 2 ] The selectivity arises from differences in steric and electronic effects in the mechanistic pathways leading to the different products. Stereoselectivity can vary in degree but it can never be total since the activation energy difference between the two pathways is finite: both products are at least possible and merely differ in amount. However, in favorable cases, the minor stereoisomer may not be detectable by the analytic methods used. An enantioselective reaction is one in which one enantiomer is formed in preference to the other, in a reaction that creates an optically active product from an achiral starting material, using either a chiral catalyst, an enzyme or a chiral reagent. The degree of selectivity is measured by the enantiomeric excess . An important variant is kinetic resolution , in which a pre-existing chiral center undergoes reaction with a chiral catalyst, an enzyme or a chiral reagent such that one enantiomer reacts faster than the other and leaves behind the less reactive enantiomer, or in which a pre-existing chiral center influences the reactivity of a reaction center elsewhere in the same molecule. A diastereoselective reaction is one in which one diastereomer is formed in preference to another (or in which a subset of all possible diastereomers dominates the product mixture), establishing a preferred relative stereochemistry. In this case, either two or more chiral centers are formed at once such that one relative stereochemistry is favored, [ 3 ] or a pre-existing chiral center (which need not be optically pure) biases the stereochemical outcome during the creation of another. The degree of relative selectivity is measured by the diastereomeric excess . Stereoconvergence can be considered an opposite of stereospecificity: the reactions of two different stereoisomers yield a single product stereoisomer. The quality of stereoselectivity is concerned solely with the products and their stereochemistry. Of a number of possible stereoisomeric products, the reaction selects one or two to be formed. Stereomutation is a general term for the conversion of one stereoisomer into another. For example, racemization (as in S N 1 reactions), epimerization (as in the interconversion of D-glucose and D-mannose in the Lobry de Bruyn–Van Ekenstein transformation ), or asymmetric transformation (conversion of a racemate into a pure enantiomer or into a mixture in which one enantiomer is present in excess, or of a diastereoisomeric mixture into a single diastereoisomer or into a mixture in which one diastereoisomer predominates). [ 4 ] An example of modest stereoselectivity is the dehydrohalogenation of 2-iodobutane, which yields 60% trans -2-butene and 20% cis -2-butene. [ 5 ] Since alkene geometric isomers are also classified as diastereomers, this reaction would also be called diastereoselective. Cram's rule predicts the major diastereomer resulting from the diastereoselective nucleophilic addition to a carbonyl group next to a chiral center. The chiral center need not be optically pure, as the relative stereochemistry will be the same for both enantiomers.
In the example below, the (S)-aldehyde reacts with a thiazole to form the (S,S) diastereomer but only a small amount of the (S,R) diastereomer: [ 6 ] The Sharpless epoxidation is an example of an enantioselective process, in which an achiral allylic alcohol substrate is transformed into an optically active epoxyalcohol. In the case of chiral allylic alcohols, kinetic resolution results. Another example is the Sharpless asymmetric dihydroxylation . In the example below, the achiral alkene yields only one of the four possible stereoisomers. [ 7 ] With a stereogenic center next to the carbocation, the substitution can be stereoselective in inter- [ 8 ] and intramolecular [ 9 ] [ 10 ] reactions. In the reaction depicted below, the nucleophile (furan) can approach the carbocation from the least shielded side, away from the bulky t-butyl group, resulting in high facial diastereoselectivity: Pinoresinol biosynthesis involves a protein called a dirigent protein . The first dirigent protein was discovered in Forsythia intermedia . This protein has been found to direct the stereoselective biosynthesis of (+)-pinoresinol from coniferyl alcohol monomers. [ 11 ] Recently, a second, enantiocomplementary dirigent protein was identified in Arabidopsis thaliana , which directs enantioselective synthesis of (−)-pinoresinol. [ 12 ]
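The enantiomeric and diastereomeric excesses used above as measures of selectivity reduce to simple arithmetic: excess = (major − minor) / (major + minor), usually quoted as a percentage. A minimal sketch (the helper name is illustrative, not a standard library function):

def excess_percent(major: float, minor: float) -> float:
    """Enantiomeric (or diastereomeric) excess in percent:
    (major - minor) / (major + minor) * 100."""
    return (major - minor) / (major + minor) * 100.0

# A 95:5 mixture of enantiomers has ee = 90%; the dehydrohalogenation above
# (60% trans, 20% cis, counting only the two alkene diastereomers) has de = 50%.
print(excess_percent(95, 5))    # 90.0
print(excess_percent(60, 20))   # 50.0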
https://en.wikipedia.org/wiki/Stereoselectivity
In chemistry , stereospecificity is the property of a reaction mechanism that leads to different stereoisomeric reaction products from different stereoisomeric reactants , or which operates on only one (or a subset) of the stereoisomers. [ 1 ] [ 2 ] [ 3 ] In contrast, stereoselectivity [ 1 ] [ 2 ] is the property of a reactant mixture where a non-stereospecific mechanism allows for the formation of multiple products, but where one (or a subset) of the products is favored by factors, such as steric access , that are independent of the mechanism. A stereospecific mechanism specifies the stereochemical outcome of a given reactant, whereas a stereoselective reaction selects products from those made available by the same, non-specific mechanism acting on a given reactant. Given a single, stereoisomerically pure starting material, a stereospecific mechanism will give 100% of a particular stereoisomer (or no reaction), although loss of stereochemical integrity can easily occur through competing mechanisms with different stereochemical outcomes. A stereoselective process will normally give multiple products even if only one mechanism is operating on an isomerically pure starting material. The term stereospecific reaction is ambiguous, since the term reaction itself can mean a single-mechanism transformation (such as the Diels–Alder reaction ), which could be stereospecific, or the outcome of a reactant mixture that may proceed through multiple competing mechanisms, specific and non-specific. In the latter sense, the term stereospecific reaction is commonly misused to mean highly stereoselective reaction . Chiral synthesis is built on a combination of stereospecific transformations (for the interconversion of existing stereocenters) and stereoselective ones (for the creation of new stereocenters), in which the optical activity of a chemical compound is also preserved. The quality of stereospecificity is focused on the reactants and their stereochemistry; it is concerned with the products too, but only as they provide evidence of a difference in behavior between reactants. Of stereoisomeric reactants, each behaves in its own specific way. Stereospecificity towards enantiomers is called enantiospecificity. Nucleophilic substitution at sp 3 centres can proceed by the stereospecific S N 2 mechanism, causing only inversion, or by the non-specific S N 1 mechanism, the outcome of which can show a modest selectivity for inversion, depending on the reactants and the reaction conditions to which the mechanism does not refer. The choice of mechanism adopted by a particular reactant combination depends on other factors (steric access to the reaction centre in the substrate, nucleophile , solvent, temperature). For example, tertiary centres react almost exclusively by the S N 1 mechanism whereas primary centres (except neopentyl centres) react almost exclusively by the S N 2 mechanism. When a nucleophilic substitution results in incomplete inversion, it is because of a competition between the two mechanisms, as often occurs at secondary centres, or because of double inversion (as when iodide is the nucleophile); the arithmetic of such competition is illustrated in the sketch at the end of this article. The addition of singlet carbenes to alkenes is stereospecific in that the geometry of the alkene is preserved in the product. For example, dibromocarbene and cis -2-butene yield cis -2,3-dimethyl-1,1-dibromocyclopropane, whereas the trans isomer exclusively yields the trans cyclopropane.
[ 4 ] This addition remains stereospecific even if the starting alkene is not isomerically pure, as the products' stereochemistry will match the reactants'. The disrotatory ring closing reaction of conjugated trienes is stereospecific in that isomeric reactants will give isomeric products. For example, trans,cis,trans -2,4,6-octatriene gives cis -dimethylcyclohexadiene, whereas the trans,cis,cis reactant isomer gives the trans product and the trans,trans,trans reactant isomer does not react in this manner.
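The incomplete inversion discussed above can be quantified with a deliberately simple two-pathway model (an illustrative assumption, not data from a specific study): if a fraction f of substitution events proceed by the S N 2 mechanism (complete inversion) and the remainder by an idealized S N 1 pathway through a planar carbocation (a 50:50 retention/inversion outcome), the inverted fraction of the product is f + (1 − f)/2:

def inversion_fraction(f_sn2: float) -> float:
    """Inverted-product fraction for competing substitution pathways:
    SN2 events invert completely; idealized SN1 events give 50:50."""
    return f_sn2 + (1.0 - f_sn2) * 0.5

for f in (0.0, 0.5, 0.9, 1.0):
    print(f, inversion_fraction(f))   # 0.5, 0.75, 0.95, 1.0 (up to float rounding)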
https://en.wikipedia.org/wiki/Stereospecificity
Stereotactic biopsy , also known as stereotactic core biopsy , is a biopsy procedure that uses a computer and imaging performed in at least two planes to localize a target lesion (such as a tumor or microcalcifications in the breast) in three-dimensional space and guide the removal of tissue for examination by a pathologist under a microscope . Stereotactic core biopsy makes use of the underlying principle of parallax to determine the depth or "Z-dimension" of the target lesion. Stereotactic core biopsy is extensively used by radiologists specializing in breast imaging to obtain tissue samples containing microcalcifications , which can be an early sign of breast cancer . X-ray-guided stereotactic biopsy is used for impalpable lesions (cannot be felt manually) that are also not visible on ultrasound. [ 1 ] A stereotactic biopsy may be used, with x-ray guidance, for performing a fine needle aspiration for cytology and needle core biopsy to evaluate a breast lesion. However, that type of biopsy is also sometimes performed without any imaging guidance, [ 2 ] and typically, stereotactic guidance is used for core biopsies or vacuum-assisted mammotomy. [ 3 ] Stereotactic core biopsy is necessary for evaluating atypical appearing calcifications found on mammogram of the breast. If the calcifications exhibit the classic "teacup" appearance of benign fibrocystic changes, then a biopsy is usually not necessary. [ 4 ] This article incorporates public domain material from Dictionary of Cancer Terms . U.S. National Cancer Institute .
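The parallax calculation of the "Z-dimension" can be sketched as follows: two X-ray views are acquired with the tube swung to equal and opposite angles (commonly ±15° in stereotactic mammography, taken here as an assumption), and the apparent shift of the lesion between the views gives its depth as shift / (2 tan θ):

import math

def lesion_depth(shift_mm: float, tube_angle_deg: float = 15.0) -> float:
    """Depth of a lesion above the detector from its apparent horizontal
    shift between two stereo X-ray views taken at +/- tube_angle_deg:
    depth = shift / (2 * tan(angle))."""
    return shift_mm / (2.0 * math.tan(math.radians(tube_angle_deg)))

# A lesion that shifts 8 mm between the +15 and -15 degree views
# lies about 14.9 mm above the reference (detector) plane.
print(round(lesion_depth(8.0), 1))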
https://en.wikipedia.org/wiki/Stereotactic_biopsy
Steric effects arise from the spatial arrangement of atoms. When atoms come close together, there is generally a rise in the energy of the molecule. Steric effects are nonbonding interactions that influence the shape ( conformation ) and reactivity of ions and molecules. Steric effects complement electronic effects , which dictate the shape and reactivity of molecules. Steric repulsive forces between overlapping electron clouds result in structured groupings of molecules stabilized by the way that opposites attract and like charges repel. Steric hindrance is a consequence of steric effects. Steric hindrance is the slowing of chemical reactions due to steric bulk. It is usually manifested in intermolecular reactions , whereas discussion of steric effects often focuses on intramolecular interactions . Steric hindrance is often exploited to control selectivity, such as slowing unwanted side-reactions. Steric hindrance between adjacent groups can also affect torsional bond angles . Steric hindrance is responsible for the observed shape of rotaxanes and the low rates of racemization of 2,2'-disubstituted biphenyl and binaphthyl derivatives. Because steric effects have profound impact on properties, the steric properties of substituents have been assessed by numerous methods. Relative rates of chemical reactions provide useful insights into the effects of the steric bulk of substituents. Under standard conditions, methyl bromide solvolyzes 10^7 times faster than does neopentyl bromide . The difference reflects the inhibition of attack on the compound with the sterically bulky (CH 3 ) 3 C group. [ 3 ] A-values provide another measure of the bulk of substituents. A-values are derived from equilibrium measurements of monosubstituted cyclohexanes . [ 4 ] [ 5 ] [ 6 ] [ 7 ] The extent to which a substituent favors the equatorial position gives a measure of its bulk, as the sketch below illustrates. Ceiling temperature ( T c ) is a measure of the steric properties of the monomers that comprise a polymer. T c is the temperature at which the rates of polymerization and depolymerization are equal. Sterically hindered monomers give polymers with low T c 's, which are usually not useful. Ligand cone angles are measures of the size of ligands in coordination chemistry . The cone angle is defined as the solid angle formed with the metal at the vertex and the hydrogen atoms at the perimeter of the cone. [ 9 ] Steric effects are critical to chemistry , biochemistry , and pharmacology . In organic chemistry, steric effects are nearly universal and affect the rates and activation energies of most chemical reactions to varying degrees. In biochemistry, steric effects are often exploited in naturally occurring molecules such as enzymes , where the catalytic site may be buried within a large protein structure. In pharmacology, steric effects determine how and at what rate a drug will interact with its target bio-molecules.
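The connection between an A-value and the measured equilibrium can be made explicit: the A-value is the free-energy preference (in kcal/mol) for the equatorial conformer, so the equilibrium constant is K = exp(A/RT) and the equatorial fraction is K / (1 + K). A short sketch; the methyl A-value of roughly 1.70 kcal/mol is a commonly quoted figure, used here as an assumption:

import math

R_KCAL = 1.987e-3   # gas constant in kcal/(mol*K)

def equatorial_fraction(a_value_kcal: float, temp_k: float = 298.15) -> float:
    """Fraction of a monosubstituted cyclohexane in the equatorial conformer.

    The A-value is the free-energy difference (kcal/mol) favoring equatorial,
    so the axial/equatorial equilibrium constant is K = exp(A / (R*T))."""
    k_eq = math.exp(a_value_kcal / (R_KCAL * temp_k))
    return k_eq / (1.0 + k_eq)

# Methylcyclohexane, A ~ 1.70 kcal/mol: about 95% equatorial at 25 C.
print(round(equatorial_fraction(1.70), 3))   # ~0.946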
https://en.wikipedia.org/wiki/Steric_effects
The steric factor , usually denoted ρ , [ 1 ] is a quantity used in collision theory . Also called the probability factor , the steric factor is defined as the ratio between the experimental value of the rate constant and the one predicted by collision theory. It can also be defined as the ratio between the pre-exponential factor and the collision frequency , and it is most often less than unity. (The pre-exponential factor itself is also commonly known as the frequency factor.) Physically, the steric factor can be interpreted as the ratio of the cross section for reactive collisions to the total collision cross section. Usually, the more complex the reactant molecules , the lower the steric factors. Nevertheless, some reactions exhibit steric factors greater than unity: the harpoon reactions , which involve atoms that exchange electrons, producing ions. The deviation from unity can have different causes: the molecules are not spherical, so different geometries are possible; not all the kinetic energy is delivered into the right spot; the presence of a solvent (when applied to solutions); and so on. When collision theory is applied to reactions in solution, the solvent cage has an effect on the reactant molecules, as several collisions can take place in a single encounter, which leads to predicted pre-exponential factors being too large. ρ values greater than unity can be attributed to favorable entropic contributions. Usually there is no simple way to accurately estimate steric factors without performing trajectory or scattering calculations.
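The definition can be turned into a worked example. Collision theory predicts a rate constant k = N_A σ (8 k_B T / π μ)^(1/2) exp(−E_a/RT), so ρ is the experimental pre-exponential factor divided by the predicted frequency factor N_A σ (8 k_B T / π μ)^(1/2). A minimal sketch with purely illustrative numbers (not measured values for any real reaction):

import math

N_A = 6.022e23    # Avogadro constant, 1/mol
K_B = 1.381e-23   # Boltzmann constant, J/K

def collision_frequency_factor(sigma_m2: float, mu_kg: float, temp_k: float) -> float:
    """Collision-theory frequency factor N_A * sigma * sqrt(8 k_B T / (pi mu)),
    in m^3 mol^-1 s^-1 (multiply by 1000 for dm^3 mol^-1 s^-1)."""
    mean_rel_speed = math.sqrt(8.0 * K_B * temp_k / (math.pi * mu_kg))
    return N_A * sigma_m2 * mean_rel_speed

# Illustrative inputs: cross-section 0.4 nm^2, reduced mass 3.32e-27 kg.
z = collision_frequency_factor(0.4e-18, 3.32e-27, 300.0)
a_experimental = 1.0e8   # assumed experimental pre-exponential factor, m^3 mol^-1 s^-1
print(a_experimental / z)   # steric factor rho, here ~0.2 (typically < 1)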
https://en.wikipedia.org/wiki/Steric_factor
Valence shell electron pair repulsion ( VSEPR ) theory ( /ˈvɛspər/ VESP -ər [ 1 ] : 410 or /vəˈsɛpər/ və- SEP -ər [ 2 ] ) is a model used in chemistry to predict the geometry of individual molecules from the number of electron pairs surrounding their central atoms. [ 3 ] It is also named the Gillespie-Nyholm theory after its two main developers, Ronald Gillespie and Ronald Nyholm . The premise of VSEPR is that the valence electron pairs surrounding an atom tend to repel each other. The greater the repulsion, the higher in energy (less stable) the molecule is. Therefore, the VSEPR-predicted molecular geometry of a molecule is the one that has as little of this repulsion as possible. Gillespie has emphasized that the electron-electron repulsion due to the Pauli exclusion principle is more important in determining molecular geometry than the electrostatic repulsion . [ 4 ] The insights of VSEPR theory are derived from topological analysis of the electron density of molecules. Such quantum chemical topology (QCT) methods include the electron localization function (ELF) and the quantum theory of atoms in molecules (AIM or QTAIM). [ 4 ] [ 5 ] The idea of a correlation between molecular geometry and number of valence electron pairs (both shared and unshared pairs) was originally proposed in 1939 by Ryutaro Tsuchida in Japan, [ 6 ] and was independently presented in a Bakerian Lecture in 1940 by Nevil Sidgwick and Herbert Powell of the University of Oxford . [ 7 ] In 1957, Ronald Gillespie and Ronald Sydney Nyholm of University College London refined this concept into a more detailed theory, capable of choosing between various alternative geometries. [ 8 ] [ 9 ] VSEPR theory is used to predict the arrangement of electron pairs around central atoms in molecules, especially simple and symmetric molecules. A central atom is defined in this theory as an atom which is bonded to two or more other atoms, while a terminal atom is bonded to only one other atom. [ 1 ] : 398 For example, in the molecule methyl isocyanate (H 3 C-N=C=O), the two carbons and one nitrogen are central atoms, and the three hydrogens and one oxygen are terminal atoms. [ 1 ] : 416 The geometry of the central atoms and their non-bonding electron pairs in turn determine the geometry of the larger whole molecule. The number of electron pairs in the valence shell of a central atom is determined after drawing the Lewis structure of the molecule, and expanding it to show all bonding groups and lone pairs of electrons. [ 1 ] : 410–417 In VSEPR theory, a double bond or triple bond is treated as a single bonding group. [ 1 ] The sum of the number of atoms bonded to a central atom and the number of lone pairs formed by its nonbonding valence electrons is known as the central atom's steric number. The electron pairs (or groups if multiple bonds are present) are assumed to lie on the surface of a sphere centered on the central atom and tend to occupy positions that minimize their mutual repulsions by maximizing the distance between them. [ 1 ] : 410–417 [ 10 ] The number of electron pairs (or groups), therefore, determines the overall geometry that they will adopt. For example, when there are two electron pairs surrounding the central atom, their mutual repulsion is minimal when they lie at opposite poles of the sphere. Therefore, the central atom is predicted to adopt a linear geometry.
If there are 3 electron pairs surrounding the central atom, their repulsion is minimized by placing them at the vertices of an equilateral triangle centered on the atom. Therefore, the predicted geometry is trigonal . Likewise, for 4 electron pairs, the optimal arrangement is tetrahedral . [ 1 ] : 410–417 As a tool in predicting the geometry adopted with a given number of electron pairs, an often used physical demonstration of the principle of minimal electron pair repulsion utilizes inflated balloons. Through handling, balloons acquire a slight surface electrostatic charge that results in the adoption of roughly the same geometries when they are tied together at their stems as the corresponding number of electron pairs. For example, five balloons tied together adopt the trigonal bipyramidal geometry, just as do the five bonding pairs of a PCl 5 molecule. The steric number of a central atom in a molecule is the number of atoms bonded to that central atom, called its coordination number , plus the number of lone pairs of valence electrons on the central atom. [ 11 ] In the molecule SF 4 , for example, the central sulfur atom has four ligands ; the coordination number of sulfur is four. In addition to the four ligands, sulfur also has one lone pair in this molecule. Thus, the steric number is 4 + 1 = 5. The overall geometry is further refined by distinguishing between bonding and nonbonding electron pairs. The bonding electron pair shared in a sigma bond with an adjacent atom lies further from the central atom than a nonbonding (lone) pair of that atom, which is held close to its positively charged nucleus. VSEPR theory therefore views repulsion by the lone pair to be greater than the repulsion by a bonding pair. As such, when a molecule has 2 interactions with different degrees of repulsion, VSEPR theory predicts the structure where lone pairs occupy positions that allow them to experience less repulsion. Lone pair–lone pair (lp–lp) repulsions are considered stronger than lone pair–bonding pair (lp–bp) repulsions, which in turn are considered stronger than bonding pair–bonding pair (bp–bp) repulsions, distinctions that then guide decisions about overall geometry when 2 or more non-equivalent positions are possible. [ 1 ] : 410–417 For instance, when 5 valence electron pairs surround a central atom, they adopt a trigonal bipyramidal molecular geometry with two collinear axial positions and three equatorial positions. An electron pair in an axial position has three close equatorial neighbors only 90° away and a fourth much farther at 180°, while an equatorial electron pair has only two adjacent pairs at 90° and two at 120°. The repulsion from the close neighbors at 90° is more important, so that the axial positions experience more repulsion than the equatorial positions; hence, when there are lone pairs, they tend to occupy equatorial positions as shown in the diagrams of the next section for steric number five. [ 10 ] The difference between lone pairs and bonding pairs may also be used to rationalize deviations from idealized geometries. For example, the H 2 O molecule has four electron pairs in its valence shell: two lone pairs and two bond pairs. The four electron pairs are spread so as to point roughly towards the apices of a tetrahedron. 
However, the bond angle between the two O–H bonds is only 104.5°, rather than the 109.5° of a regular tetrahedron, because the two lone pairs (whose density or probability envelopes lie closer to the oxygen nucleus) exert a greater mutual repulsion than the two bond pairs. [ 1 ] : 410–417 [ 10 ] A bond of higher bond order also exerts greater repulsion since the pi bond electrons contribute. [ 10 ] For example, in isobutylene , (H 3 C) 2 C=CH 2 , the H 3 C−C=C angle (124°) is larger than the H 3 C−C−CH 3 angle (111.5°). However, in the carbonate ion, CO 2− 3 , all three C−O bonds are equivalent with angles of 120° due to resonance . The "AXE method" of electron counting is commonly used when applying the VSEPR theory. The electron pairs around a central atom are represented by a formula AX m E n , where A represents the central atom and always has an implied subscript one. Each X represents a ligand (an atom bonded to A). Each E represents a lone pair of electrons on the central atom. [ 1 ] : 410–417 The total number of X and E is known as the steric number. For example, in a molecule AX 3 E 2 , the atom A has a steric number of 5. When the substituent (X) atoms are not all the same, the geometry is still approximately valid, but the bond angles may be slightly different from the ones where all the outside atoms are the same. For example, the double-bond carbons in alkenes like C 2 H 4 are AX 3 E 0 , but the bond angles are not all exactly 120°. Likewise, SOCl 2 is AX 3 E 1 , but because the X substituents are not identical, the X–A–X angles are not all equal. Based on the steric number and the distribution of Xs and Es, VSEPR theory makes the predictions in the following tables. For main-group elements , there are stereochemically active lone pairs E whose number can vary from 0 to 3. Note that the geometries are named according to the atomic positions only and not the electron arrangement. For example, the description of AX 2 E 1 as a bent molecule means that the three atoms AX 2 are not in one straight line, although the lone pair helps to determine the geometry. The lone pairs on transition metal atoms are usually stereochemically inactive, meaning that their presence does not change the molecular geometry. For example, the hexaaquo complexes M(H 2 O) 6 are all octahedral for M = V 3+ , Mn 3+ , Co 3+ , Ni 2+ and Zn 2+ , despite the fact that the electronic configurations of the central metal ion are d 2 , d 4 , d 6 , d 8 and d 10 respectively. [ 13 ] : 542 The Kepert model ignores all lone pairs on transition metal atoms, so that the geometry around all such atoms corresponds to the VSEPR geometry for AX n with 0 lone pairs E. [ 15 ] [ 13 ] : 542 This is often written ML n , where M = metal and L = ligand. The Kepert model predicts the following geometries for coordination numbers of 2 through 9: The methane molecule (CH 4 ) is tetrahedral because there are four pairs of electrons. The four hydrogen atoms are positioned at the vertices of a tetrahedron , and the bond angle is cos⁻¹(−1⁄3) ≈ 109°28′. [ 16 ] [ 17 ] This is referred to as an AX 4 type of molecule. As mentioned above, A represents the central atom and X represents an outer atom. [ 1 ] : 410–417 The ammonia molecule (NH 3 ) has three pairs of electrons involved in bonding, but there is a lone pair of electrons on the nitrogen atom. [ 1 ] : 392–393 It is not bonded with another atom; however, it influences the overall shape through repulsions. As in methane above, there are four regions of electron density.
Therefore, the overall orientation of the regions of electron density is tetrahedral. On the other hand, there are only three outer atoms. This is referred to as an AX 3 E type molecule because the lone pair is represented by an E. [ 1 ] : 410–417 By definition, the molecular shape or geometry describes the geometric arrangement of the atomic nuclei only, which is trigonal-pyramidal for NH 3 . [ 1 ] : 410–417 Steric numbers of 7 or greater are possible, but are less common. The steric number of 7 occurs in iodine heptafluoride (IF 7 ); the base geometry for a steric number of 7 is pentagonal bipyramidal. [ 10 ] The most common geometry for a steric number of 8 is a square antiprismatic geometry. [ 18 ] : 1165 Examples of this include the octacyanomolybdate ( Mo(CN) 4− 8 ) and octafluorozirconate ( ZrF 4− 8 ) anions. [ 18 ] : 1165 The nonahydridorhenate ion ( ReH 2− 9 ) in potassium nonahydridorhenate is a rare example of a compound with a steric number of 9, which has a tricapped trigonal prismatic geometry. [ 13 ] : 254 [ 18 ] Steric numbers beyond 9 are very rare, and it is not clear what geometry is generally favoured. [ 19 ] Possible geometries for steric numbers of 10, 11, 12, or 14 are bicapped square antiprismatic (or bicapped dodecadeltahedral ), octadecahedral , icosahedral , and bicapped hexagonal antiprismatic , respectively. No compounds with steric numbers this high involving monodentate ligands exist, and those involving multidentate ligands can often be analysed more simply as complexes with lower steric numbers when some multidentate ligands are treated as a unit. [ 18 ] : 1165, 1721 There are groups of compounds where VSEPR fails to predict the correct geometry. The shapes of heavier Group 14 element alkyne analogues (RM≡MR, where M = Si, Ge, Sn or Pb) have been computed to be bent. [ 20 ] [ 21 ] [ 22 ] One example of the AX 2 E 2 geometry is molecular lithium oxide , Li 2 O, a linear rather than bent structure, which is ascribed to its bonds being essentially ionic and the strong lithium-lithium repulsion that results. [ 23 ] Another example is O(SiH 3 ) 2 with an Si–O–Si angle of 144.1°, which compares to the angles in Cl 2 O (110.9°), (CH 3 ) 2 O (111.7°), and N(CH 3 ) 3 (110.9°). [ 24 ] Gillespie and Robinson rationalize the Si–O–Si bond angle based on the observed ability of a ligand's lone pair to most greatly repel other electron pairs when the ligand electronegativity is greater than or equal to that of the central atom. [ 24 ] In O(SiH 3 ) 2 , the central atom is more electronegative, and the lone pairs are less localized and more weakly repulsive. The larger Si–O–Si bond angle results from this and strong ligand-ligand repulsion by the relatively large -SiH 3 ligand. [ 24 ] Burford et al. showed through X-ray diffraction studies that Cl 3 Al–O–PCl 3 has a linear Al–O–P bond angle and is therefore a non-VSEPR molecule. [ 25 ] Some AX 6 E 1 molecules, e.g. xenon hexafluoride (XeF 6 ) and the Te(IV) and Bi(III) anions, TeCl 2− 6 , TeBr 2− 6 , BiCl 3− 6 , BiBr 3− 6 and BiI 3− 6 , are octahedral, rather than pentagonal pyramids, and the lone pair does not affect the geometry to the degree predicted by VSEPR. [ 26 ] Similarly, the octafluoroxenate ion ( XeF 2− 8 ) in nitrosonium octafluoroxenate(VI) [ 13 ] : 498 [ 27 ] [ 28 ] is a square antiprism with minimal distortion, despite having a lone pair. One rationalization is that steric crowding of the ligands allows little or no room for the non-bonding lone pair; [ 24 ] another rationalization is the inert-pair effect . 
[ 13 ] : 214 The Kepert model predicts that ML 4 transition metal molecules are tetrahedral in shape, and it cannot explain the formation of square planar complexes. [ 13 ] : 542 The majority of such complexes exhibit a d 8 configuration, as is the case for the tetrachloroplatinate ( PtCl 2− 4 ) ion. The explanation of the shape of square planar complexes involves electronic effects and requires the use of crystal field theory . [ 13 ] : 562–4 Some transition metal complexes with low d electron count have unusual geometries, which can be ascribed to d subshell bonding interaction. [ 29 ] Gillespie found that this interaction produces bonding pairs that also occupy the respective antipodal points (ligand opposed) of the sphere. [ 30 ] [ 4 ] This phenomenon is an electronic effect resulting from the bilobed shape of the underlying sd x hybrid orbitals . [ 31 ] [ 32 ] The repulsion of these bonding pairs leads to a different set of shapes. The gas phase structures of the triatomic halides of the heavier members of group 2 (i.e., calcium, strontium and barium halides, MX 2 ) are not linear as predicted but are bent (approximate X–M–X angles: CaF 2 , 145°; SrF 2 , 120°; BaF 2 , 108°; SrCl 2 , 130°; BaCl 2 , 115°; BaBr 2 , 115°; BaI 2 , 105°). [ 36 ] It has been proposed by Gillespie that this is also caused by bonding interaction of the ligands with the d subshell of the metal atom, thus influencing the molecular geometry. [ 24 ] [ 37 ] Relativistic effects on the electron orbitals of superheavy elements are predicted to influence the molecular geometry of some compounds. For instance, the 6d 5/2 electrons in nihonium play an unexpectedly strong role in bonding, so NhF 3 should assume a T-shaped geometry, instead of a trigonal planar geometry like its lighter congener BF 3 . [ 38 ] In contrast, the extra stability of the 7p 1/2 electrons in tennessine is predicted to make TsF 3 trigonal planar, unlike the T-shaped geometry observed for IF 3 and predicted for AtF 3 ; [ 39 ] similarly, OgF 4 should have a tetrahedral geometry, while XeF 4 has a square planar geometry and RnF 4 is predicted to have the same. [ 40 ] The VSEPR theory can be extended to molecules with an odd number of electrons by treating the unpaired electron as a "half electron pair"—for example, Gillespie and Nyholm [ 8 ] : 364–365 suggested that the decrease in the bond angle in the series NO + 2 (180°), NO 2 (134°), NO − 2 (115°) indicates that a given set of bonding electron pairs exert a weaker repulsion on a single non-bonding electron than on a pair of non-bonding electrons. In effect, they considered nitrogen dioxide to be an AX 2 E 0.5 molecule, with a geometry intermediate between NO + 2 and NO − 2 . Similarly, chlorine dioxide (ClO 2 ) is an AX 2 E 1.5 molecule, with a geometry intermediate between ClO + 2 and ClO − 2 . [ citation needed ] Finally, the methyl radical (CH 3 ) is predicted to be trigonal pyramidal like the methyl anion ( CH − 3 ), but with a larger bond angle (as in the trigonal planar methyl cation ( CH + 3 )). However, in this case, the VSEPR prediction is not quite true, as CH 3 is actually planar, although its distortion to a pyramidal geometry requires very little energy. [ 41 ]
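As a numerical check on the idealized angles quoted in this article, the exact tetrahedral angle follows from cos θ = −1/3; below is a short, self-contained Python sketch (illustrative only, not from any chemistry library) that computes it both from the formula and from explicit coordinates.

```python
import math

# Ideal tetrahedral bond angle: the vectors from the center of a regular
# tetrahedron to any two vertices satisfy cos(theta) = -1/3.
theta = math.degrees(math.acos(-1.0 / 3.0))
print(f"arccos(-1/3) = {theta:.4f} degrees")  # 109.4712 degrees, i.e. ~109 deg 28'

# Cross-check from geometry: four alternating vertices of a cube form a
# regular tetrahedron centered on the origin, like the H atoms of CH4.
h = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

print(f"H-C-H from coordinates: {angle_deg(h[0], h[1]):.4f} degrees")  # same value
```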
https://en.wikipedia.org/wiki/Steric_number
In chemistry , a sterically induced reduction occurs when an oxidized metal behaves as, and exhibits reducing properties similar to, the more reduced form of the metal. The effect is caused mainly by the surrounding ligands complexed to the metal: because the ligands are significantly distanced from the metal and thereby electronically destabilized, it is the ligands, rather than the metal, that take part in the reduction chemistry. Sterically induced reductions commonly involve metals of the lanthanoid and actinoid series. Divalent lanthanides are extremely reducing compounds (they can reduce alkali metal cations). Of these divalent lanthanides, samarium(II) iodide , SmI 2 , is a common reducing agent used in a variety of synthetic applications, mainly because all other divalent lanthanides are unstable. Complexes of Sm(II) have also been investigated and used in similar applications. However, even though Sm(II) complexes and compounds have had tremendous success with a variety of substrates, there have been instances where the chemistry of certain materials cannot be performed because of unclean reactions, in which products are not easily isolated from the reaction mixture when Sm(II) compounds are used to perform the desired reduction. In these cases, adjusting the size of the metal (which is commonly and easily done for trivalent lanthanide compounds) may fine-tune the nature of a specific reaction and produce the desired products cleanly. One drawback to this notion is that Sm(II) is uniquely stable compared to other divalent lanthanides; the other metals in the series tend to exist freely only in the trivalent state. The discovery and application of sterically induced reduction allows this unique reducing chemistry to be extended to all of the lanthanide metals while they remain in their more stable trivalent state. When Sm(III) is complexed with C 5 Me 5 ( pentamethylcyclopentadienyl ) to give the compound (C 5 Me 5 ) 3 Sm , this trivalent species has been shown to have the same reducing reactivity as the Sm(II) derivative. [ 1 ] Comparing the analogous reactions of the two compounds, the oxidation state of the metal does not change in the reaction of the Sm(III) derivative, while it does change in the reaction of the Sm(II) derivative. If the metal were involved in the reduction, its oxidation state should have changed (+3 to +4). For the trivalent compound this is not the case, so the ligands themselves must be involved in the reduction process via a ligand-based redox reaction, in which the pentamethylcyclopentadienyl anion is oxidized: 2 (C 5 Me 5 ) − → (C 5 Me 5 ) 2 + 2 e − . But ligand-based reductions are not new and are known to occur with a variety of lanthanide complexes. However, steric factors must also be considered in the reactivity of the Sm(III) complex, as less crowded structures do not show any reductive activity. For years, it was thought that (C 5 Me 5 ) 3 Sm could not exist because of the huge strain of cone angles greater than 120 degrees. However, this compound is formed from the Sm(II) complex, and X-ray structures of the Sm(II) complex have shown that there is enough room for a third ligand. Also, X-ray structures of (C 5 Me 5 ) 3 Sm show that the C 5 Me 5 ligands are 0.1 angstroms farther from the metal than normally predicted and expected. This increased distance, forced by sterics, leaves the ligands electronically less stabilized and may be the reason the observed redox reaction involves the ligands instead of the metal.
[ 1 ] Sm is a representative and well-studied metal because of its unusual stability in both the divalent and trivalent states. With the discovery of sterically induced reduction, other lanthanide metals can now be studied in their more stable trivalent state, which allows more control over reduction reactions by tuning the reaction to the metal's size and electronics.
https://en.wikipedia.org/wiki/Sterically_induced_reduction
Stericycle, Inc. is a compliance company that specializes in collecting and disposing of regulated waste, such as medical waste and sharps, pharmaceuticals, and hazardous waste, and in providing services for recalled and expired goods. It also provides related education and training services, and patient communication services. The company was founded in 1989 and is headquartered in Bannockburn, Illinois , with many more bases of operation around the world, including medical waste incinerators in Utah and North Carolina. Stericycle, Inc., together with its subsidiaries, offers regulated waste management services, sharps disposal containers to reduce the risk of needlestick injuries, healthcare compliance services, and drug disposal services. In addition, with the acquisition of Shred-it in 2015, Stericycle also offers secure information destruction services, including document shredding and hard drive destruction. The company serves healthcare facilities such as hospitals , blood banks , and pharmaceutical manufacturers . Stericycle also serves myriad small businesses, which include outpatient clinics, medical and dental offices, abortion clinics, veterinary and animal hospitals, funeral homes , home healthcare agencies, body art studios, and long-term and sub-acute care facilities. Medical device manufacturers, consumer goods manufacturers, and retailers are also key customers. Stericycle has been harshly criticized by residents living near its incinerators and by environmentalists across the globe. [ 4 ] In 2018, Stericycle was investigated by the state of Utah for burning hazardous, radioactive [ 5 ] waste above legal levels at its North Salt Lake location. The investigations were also a response to Stericycle's alleged falsification of records to hide the allegedly illegal quantities burned near Foxboro Elementary in North Salt Lake. [ 6 ] [ 7 ] [ 8 ] Stericycle has a presence in 10 countries. Approximately 10% of the company's revenue comes from its international operations. Full services are offered in the U.S., Canada , Ireland , and Spain . Stericycle offers all services, except for hazardous waste management, in the United Kingdom and Portugal . Only secure information destruction services are provided in Austria , Belgium , France , Germany , the Netherlands , and Luxembourg . Stericycle no longer operates in Argentina , Brazil , Chile , Japan , Mexico , Australia , South Korea , Romania , the United Arab Emirates , and Singapore . Cindy Miller joined Stericycle in October 2018 as President and Chief Operating Officer, and became Chief Executive Officer in May 2019. She was preceded in her role by Charlie Alutto and, prior to that, by Mark Miller, who took over from founder Dr. James Sharp. Stericycle has been publicly traded on the NASDAQ since 1996 and has 10 people on its board of directors. [ 9 ] Stericycle was founded in 1989 by Dr. James Sharp, based on his business plan to address the Syringe Tide , in which hypodermic needles and other medical waste washed up on the shores of New York and New Jersey . The Syringe Tide led to the Medical Waste Tracking Act, signed in 1988, establishing regulated medical waste management as an industry. [ 10 ] In 1992, Mark Miller stepped in as President and CEO, and as a result of Miller's leadership Stericycle grew rapidly, going public in 1996 on the NASDAQ (ticker SRCL). Stericycle began to expand internationally in 1998, starting with Mexico and Canada.
In 1999, Stericycle acquired 200,000 customers from Allied Waste Industries after Allied acquired Browning Ferris Industries . [ 11 ] [ 12 ] The company's international business began in 1997 with a joint venture in Mexico. Since then, Stericycle has created services, tools and resources for healthcare professionals not only in the United States and Mexico , but also in Argentina , Brazil , Canada , Chile , Ireland , Japan , Portugal , Puerto Rico, Romania , Spain, and the United Kingdom . In 1999, Stericycle began offering safety and medical compliance training services with the launch of its Stericycle Steri-Safe OSHA Compliance program. In the 2000s, Stericycle achieved growth by launching and/or acquiring complementary business lines, as well as through continued international expansion. In 2003, Stericycle entered sharps waste management, acquiring Scherer Healthcare's existing practice and occasionally referring to parts of the service as "Bio Systems" in markets like Ireland . [ 13 ] [ 14 ] In 2004, Stericycle began providing medical waste disposal solutions [ clarification needed ] in the United Kingdom, with more international growth following. [ 15 ] In 2008, Stericycle acquired its first hazardous waste removal company, and in 2010 it started its Communications Solutions business line. The acquisition of PSC Environmental Solutions in 2014 led to the formal establishment of Environmental Solutions, focused on hazardous waste. [ 16 ] Finally, Stericycle's largest acquisition to date, Shred-it , occurred in 2015, for US$2.3 billion. [ 17 ] In 2010, Stericycle began to include patient notification services with the acquisition of NotifyMD. Several other acquisitions followed, giving Stericycle an interest in telephone support services for physician offices. In 2014, it acquired PSC Environmental Services LLC in a deal worth $275 million to form Stericycle Environmental Solutions. [ 18 ] This enabled expansion in hazardous waste management. In 2015, it acquired Shred-it International in a deal worth $2.3 billion. [ 19 ] The company lost a contract to provide clinical waste services to GPs and pharmacies in Cumbria and north-east England in April 2017, when its competitor, Healthcare Environment Services (HES), put in a substantially cheaper offer of £310,000 against Stericycle's £479,999. Stericycle then initiated a legal challenge against NHS England 's decision, which was dismissed by the High Court of Justice in July 2018, with the company's behaviour severely criticised. Its commercial director Lindsay Dransfield was described as "a broadly unsatisfactory witness". The company said it intended to appeal. [ 20 ] Stericycle's legal position was that HES could not sustainably perform the contract at that low price. HES was later declared bankrupt, causing an NHS medical waste scandal, as it was unable to afford to maintain incinerators at the level needed to process the volume of waste collected. Beyond services related to healthcare wastes, in some markets the company has expanded its offerings to include management of certain hazardous wastes as well as patient transport and medical courier services. In June 2024, Stericycle accepted an offer from WM (formerly Waste Management Inc) to acquire the business for $7.2 billion. [ 21 ] On November 4, it was announced that the acquisition had been completed. [ 22 ] Stericycle offers a range of specialized types of waste management. Stericycle offers secure information destruction, for both paper and hard drives, through Shred-it .
The company also offers compliance training, primarily through online courses focused on applying industry regulations related to information security, human resources, medical billing, and patient communications. [ 24 ] It has also developed training software related to compliance. Additionally, it has a communications team that coordinates call centers in emergencies and assists with waste management messaging. [ 25 ] The company has a contract for collection and disposal services to around 700 GP practices across Hampshire and the Isle of Wight, Buckinghamshire, Surrey, Sussex, Oxfordshire and Berkshire, and to acute NHS trusts in England. In 2020 it suffered from capacity problems and failed to collect clinical waste routinely from 139 practices during September and October; 245 collections were missed. The company said that the NHS was producing significantly higher volumes of clinical waste than expected because of the amount of personal protective equipment being used. [ 26 ] In 2018, Stericycle joined the National Safety Council as the medicine disposal partner for a nationwide campaign. Stericycle served as a leading voice on safe disposal practices, giving away thousands of Seal&Send Mail Back Envelopes that consumers could drop in any mailbox. [ 27 ] The Stop Everyday Killers campaign began with the unveiling of Prescribed to Death: A Memorial to the Victims of the Opioid Crisis in Chicago . The exhibit includes a memorial wall made of pills carved with faces, representing the 22,000 people lost in the previous year to prescription opioid overdose. [ 28 ] In 2019, Stericycle partnered with the National Safety Council to launch the Opioids at Work Employer Toolkit. [ 29 ] Stericycle operates a fund that allows employees to support other employees in times of hardship. Stericycle employees have helped over 250 Stericycle families with over $515,000 in grants since 2016. During the fund's biggest year, 2017, employees raised $160,000 in emergency relief alone following Hurricane Harvey , Hurricane Irma , and Hurricane Maria . The company currently operates the fund in the US and Canada, and plans to expand it to Latin America in 2019. [ 30 ] [ 15 ] After Hurricane Harvey hit Houston in 2017, Stericycle team members amassed three truckloads of donations that were distributed to families across five Stericycle sites in Houston . [ 31 ] [ 32 ] After Hurricane Maria , Stericycle facilities in Puerto Rico became gathering zones offering hot meals, water, laundry service, showers, and shelter to team members who had lost their homes. [ 33 ] Since 2011, Stericycle has supported Feed My Starving Children , an organization benefiting malnourished children around the world. In 2018, Stericycle team members packed over 291 cases totaling nearly 63,000 meals, enough to feed over 170 children. [ 34 ] Stericycle also partnered with the American Diabetes Association (ADA) in 2019. Stericycle's partnership with the ADA includes providing consumer-based education, raising awareness, and sponsoring key events such as the Tour de Cure. [ 35 ] In 2011, the Texas Commission on Environmental Quality alleged that Stericycle "failed to dispose of pathological waste according to approved methods of treatment and disposition" in violation of 30 Tex. Admin. Code § 330.1219(b)(3). Stericycle denied the charges but agreed to a settlement that included a fine of $34,000. [ 36 ] Stericycle's medical waste incinerator located in North Salt Lake, Utah has been a topic of hot debate in the community.
In September 2013, Erin Brockovich joined Utah residents in their call for Stericycle to discontinue its business in the area. [ 37 ] [ 38 ] Brockovich's visit was spurred by a violation notice from the Utah Division of Air Quality to Stericycle for emissions above legal limits and for manipulating its reporting to show lower amounts of mercury, dioxins, and other potentially harmful chemicals emitted through burning medical waste. [ 39 ] [ 40 ] The violations in 2013 were followed by criminal investigations ordered by Utah Governor Gary Herbert . [ 6 ] [ 7 ] Investigations by California's Soil Water Air Protection Enterprise (SWAPE), in connection with Brockovich, discovered dioxins in homes near the incinerator at levels 16 times higher than what is considered safe. [ 41 ] [ 42 ] As of December 1, 2014, Stericycle and the Utah Division of Air Quality reached an agreement acknowledging no wrongdoing, though the settlement does require Stericycle to relocate approximately 40 miles to the west of the incinerator's location in North Salt Lake. The settlement also calls for Stericycle to pay a $2.3 million fine, half of which is forgivable if the move happens within 3 years. [ 43 ] [ 44 ] As of October 2017, a $295 million settlement was reached on behalf of a nationwide class of Stericycle customers, following a class-action lawsuit accusing the company of engaging in a price-increasing scheme that automatically inflated customers' bills by up to 18 percent biannually, according to a news release from Hagens Berman, the Chicago-based law firm that represented the class.
https://en.wikipedia.org/wiki/Stericycle
The sterile fungi , or mycelia sterilia , are a group of fungi that do not produce any known spores , either sexual or asexual. This is considered a form group, not a taxonomic division , and is used as a matter of convenience only, as the various isolates within such morphotypes could include distantly related taxa or different morphotypes of the same species, [ 1 ] leading to incorrect identifications. Because these fungi do not produce spores, it is impossible to use traditional methods of morphological comparison to classify them. [ 2 ] However, molecular techniques can be applied to determine their evolutionary history , with ITS testing being the preferred method. [ 1 ] According to one study, sterile mycelia were observed in approximately 42% of fluids collected from bronchoalveolar lavage . [ 3 ]
https://en.wikipedia.org/wiki/Sterile_fungi
Sterility is the physiological inability to effect sexual reproduction in a living thing whose kind is otherwise produced sexually. Sterility has a wide range of causes. It may be an inherited trait , as in the mule, or it may be acquired from the environment, for example through physical injury or disease , or by exposure to radiation . Sterility is the inability to produce a biological child, while infertility is the inability to conceive after a certain period. [ 1 ] Sterility is rarely discussed in clinical literature and is often used synonymously with infertility. Infertility affects about 12–15% of couples globally, [ 2 ] but the prevalence of sterility remains unknown. Sterility can be divided into three subtypes: natural, clinical, and hardship. [ 1 ] Natural sterility is the couple's physiological inability to conceive a child. Clinical sterility is natural sterility for which treatment of the patient will not result in conception. Hardship sterility is the inability to take advantage of available treatments due to extraneous factors, such as economic, psychological, or physical ones. Clinical sterility is a subtype of natural sterility, and hardship sterility is a subtype of clinical sterility. Hybrid sterility can result when closely related species breed and produce offspring. These animals are usually sterile because of the different numbers of chromosomes inherited from the two parents; the imbalance results in offspring that are viable but not fertile , as is the case with the mule . Sterility can also be caused by selective breeding, where a selected trait is closely linked to genes involved in sex determination or fertility. For example, goats bred to be polled (hornless) produce a high number of intersex individuals among their offspring, which are typically sterile. [ 3 ] Sterility can also be caused by chromosomal differences within an individual; such individuals tend to be known as genetic mosaics . Loss of part of a chromosome, through nondisjunction , can also cause sterility. XX male syndrome is another cause of sterility, wherein the sex-determining region of the Y chromosome ( SRY ) is transferred to the X chromosome by unequal crossing over. This gene triggers the development of testes, making the individual phenotypically male but genotypically female. Sterility also has a number of economic uses.
https://en.wikipedia.org/wiki/Sterility_(physiology)
In microbiology , sterility assurance level ( SAL ) is the probability that a single unit that has been subjected to sterilization nevertheless remains nonsterile. It is never possible to prove that all organisms have been destroyed, as the likelihood of survival of an individual microorganism is never zero, so SAL is used to express the probability of survival. For example, medical device manufacturers design their sterilization processes for an extremely low SAL, such as 10 −6 , which is a 1 in 1,000,000 chance of a non-sterile unit. SAL also describes the killing efficacy of a sterilization process: a very effective sterilization process has a very low SAL. Mathematically, SALs are probabilities , often very small but (by definition) always lying between zero and one, so when they are expressed in scientific notation their exponents are negative, as in "The SAL of this process is 10 −6 ". But the term SAL is sometimes also used to refer to a sterilization's efficacy. This usage (technically the multiplicative inverse ) results in positive exponents, as in "The SAL of this process is 10 6 ". To avoid ambiguity from these inverse usages, some authors use the term log reduction (e.g., "This process gives a six-log reduction"). SALs can also be used to describe the microbial population that was destroyed by the sterilization process, though this is not the same as the probabilistic definition. What is often called a "log reduction" (technically a reduction by one order of magnitude ) represents a 90% reduction in microbial population. Thus a process that achieves a "6-log reduction" (10 −6 ) will theoretically reduce an initial population of one million organisms to an expected one surviving organism. The difference in meaning between this and the probabilistic sense can be seen from an example: if careful assays before and after indicate that a procedure has inactivated 90% of the biological agents in some unit, then the procedure can be correctly reported to have achieved a 1-log reduction, even though the probability that the unit is sterile is not 90% but essentially zero. Because of all these ambiguities, contexts in which it is critical to prevent any confusion—such as in the setting of standards—require that SAL terminology be defined carefully and explicitly. [ 1 ] [ 2 ] SALs describing the "Probability of a Non-Sterile Unit" are expressed more specifically in some literature. [ 3 ]
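The distinction drawn above between the probabilistic and log-reduction senses of SAL is easy to illustrate numerically. The sketch below is a minimal Python illustration (the function name is invented for this example), treating each log reduction as a factor-of-ten drop in the expected surviving population:

```python
def expected_survivors(initial_population: float, log_reductions: float) -> float:
    """Expected surviving population after a given number of log reductions
    (each log reduction is a 90% kill, i.e. a factor-of-ten drop)."""
    return initial_population * 10.0 ** (-log_reductions)

# A 6-log process applied to one million organisms leaves an expected
# population of about one organism, not zero...
print(expected_survivors(1e6, 6))  # 1.0

# ...whereas applied to a bioburden of one organism per unit it leaves
# 10^-6 expected survivors per unit, matching the probabilistic SAL of
# 10^-6 (a 1 in 1,000,000 chance of a non-sterile unit).
print(expected_survivors(1, 6))    # 1e-06
```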
https://en.wikipedia.org/wiki/Sterility_assurance_level
Sterilization ( British English : sterilisation ) refers to any process that removes, kills, or deactivates all forms of life (particularly microorganisms such as fungi , bacteria , spores , and unicellular eukaryotic organisms) and other biological agents (such as prions or viruses ) present in a fluid or on a specific surface or object. [ 1 ] Sterilization can be achieved through various means, including heat , chemicals , irradiation , high pressure , and filtration . Sterilization is distinct from disinfection , sanitization, and pasteurization , in that those methods reduce rather than eliminate all forms of life and biological agents present. After sterilization, a fluid or object is referred to as being sterile or aseptic . One of the first steps toward modernized sterilization was made by Nicolas Appert , who discovered that application of heat over a suitable period of time slowed the decay of foods and various liquids, preserving them for safe consumption for a longer time than was typical. Canning of foods is an extension of the same principle and has helped to reduce foodborne illness ("food poisoning"). Other methods of sterilizing foods include ultra-high temperature processing (which uses a shorter duration of heating), food irradiation , [ 2 ] [ 3 ] and high pressure ( pascalization ). [ 4 ] In the context of food, sterility typically refers to commercial sterility , defined as "the absence of microorganisms capable of growing in the food at normal non-refrigerated conditions at which the food is likely to be held during distribution and storage" according to the Codex Alimentarius . [ 5 ] In general, surgical instruments and medications that enter an already aseptic part of the body (such as the bloodstream, or penetrating the skin) must be sterile. Examples of such instruments include scalpels , hypodermic needles , and artificial pacemakers . This is also essential in the manufacture of parenteral pharmaceuticals. [ 6 ] Preparation of injectable medications and intravenous solutions for fluid replacement therapy requires not only sterility but also well-designed containers to prevent entry of adventitious agents after initial product sterilization. [ 6 ] Most medical and surgical devices used in healthcare facilities are made of materials that are able to undergo steam sterilization. [ 7 ] However, since 1950, there has been an increase in medical devices and instruments made of materials (e.g., plastics) that require low-temperature sterilization. Ethylene oxide gas has been used since the 1950s for heat- and moisture-sensitive medical devices. Within the past 15 years, a number of new, low-temperature sterilization systems (e.g., vaporized hydrogen peroxide , peracetic acid immersion, ozone ) have been developed and are being used to sterilize medical devices. [ 8 ] There are strict international rules to protect Solar System bodies from contamination by biological material from Earth. Standards vary depending on both the type of mission and its destination; the more likely a planet is considered to be habitable , the stricter the requirements are. [ 9 ] Many components of instruments used on spacecraft cannot withstand very high temperatures, so techniques not requiring excessive temperatures are used as tolerated, including heating to at least 120 °C (248 °F), chemical sterilization, oxidization, ultraviolet, and irradiation. [ 10 ] The aim of sterilization is the reduction of initially present microorganisms or other potential pathogens.
The degree of sterilization is commonly expressed by multiples of the decimal reduction time, or D-value , denoting the time needed to reduce the initial number of microorganisms N 0 to one tenth (10 −1 ) of its original value. [ 11 ] The number of microorganisms N surviving after sterilization time t is then given by N = N 0 × 10 −t/D . The D-value is a function of the sterilization conditions and varies with the type of microorganism, temperature , water activity , pH , etc. For steam sterilization (see below), the temperature, in degrees Celsius , is typically given as an index. [ citation needed ] Theoretically, the likelihood of the survival of an individual microorganism is never zero. To compensate for this, the overkill method is often used. Using the overkill method, sterilization is performed by sterilizing for longer than is required to kill the bioburden present on or in the item being sterilized. This provides a sterility assurance level (SAL) equal to the probability of a non-sterile unit. [ citation needed ] For high-risk applications, such as medical devices and injections, a sterility assurance level of at least 10 −6 is required by the United States Food and Drug Administration (FDA). [ 12 ]
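A minimal Python sketch of the D-value arithmetic just described (the numbers are hypothetical, chosen only to illustrate the relation N = N 0 × 10 −t/D ; this is not a validated cycle calculation):

```python
import math

def surviving_population(n0: float, t: float, d_value: float) -> float:
    """N(t) = N0 * 10^(-t/D), with t and the D-value in the same time units."""
    return n0 * 10.0 ** (-t / d_value)

def exposure_time(n0: float, target_sal: float, d_value: float) -> float:
    """Exposure time needed to drive n0 organisms down to the target SAL."""
    return d_value * math.log10(n0 / target_sal)

# Hypothetical overkill example: a bioburden of 10^3 spores, an assumed
# D-value of 1.5 minutes, and the 10^-6 SAL quoted above.
t = exposure_time(1e3, 1e-6, 1.5)
print(f"required exposure: {t:.1f} min")   # 13.5 min
print(surviving_population(1e3, t, 1.5))   # ~1e-06
```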
Steam sterilization, also known as moist heat sterilization, uses heated saturated steam under pressure to inactivate or kill microorganisms via denaturation of macromolecules, primarily proteins. [ 13 ] This method is a faster process than dry heat sterilization. Steam sterilization is performed using an autoclave , sometimes called a converter or steam sterilizer. The object or liquid is placed in the autoclave chamber, which is then sealed and heated using pressurized steam to a temperature set point for a defined period of time. Steam sterilization cycles can be categorized as either pre-vacuum or gravity displacement. Gravity displacement cycles rely on the lower density of the injected steam to force cooler, denser air out of the chamber drain. In comparison, pre-vacuum cycles create a vacuum in the chamber to remove cool dry air prior to injecting saturated steam, resulting in faster heating and shorter cycle times. Typical steam sterilization cycles run between 3 and 30 minutes at 121–134 °C (250–273 °F) at 100 kPa (15 psi), but adjustments may be made depending on the bioburden of the article being sterilized, its resistance ( D-value ) to steam sterilization, the article's heat tolerance, and the required sterility assurance level. Following the completion of a cycle, liquids in a pressurized autoclave must be cooled slowly to avoid boiling over when the pressure is released. This may be achieved by gradually depressurizing the sterilization chamber and allowing liquids to evaporate under a negative pressure, while cooling the contents. [ citation needed ] Proper autoclave treatment will inactivate all resistant bacterial spores in addition to fungi , bacteria, and viruses, but is not expected to eliminate all prions , which vary in their heat resistance. For prion elimination, various recommendations state 121–132 °C (250–270 °F) for 60 minutes or 134 °C (273 °F) for at least 18 minutes. [ 14 ] The 263K scrapie prion is inactivated relatively quickly by such sterilization procedures; however, other strains of scrapie and strains of Creutzfeldt-Jakob disease (CJD) and bovine spongiform encephalopathy (BSE) are more resistant. Using mice as test animals, one experiment showed that heating BSE-positive brain tissue at 134–138 °C (273–280 °F) for 18 minutes resulted in only a 2.5 log decrease in prion infectivity. [ 15 ] Most autoclaves have meters and charts that record or display information, particularly temperature and pressure as a function of time. The information is checked to ensure that the conditions required for sterilization have been met. Indicator tape is often placed on the packages of products prior to autoclaving, and some packaging incorporates indicators. The indicator changes color when exposed to steam, providing a visual confirmation. [ 16 ] Biological indicators can also be used to independently confirm autoclave performance. Simple biological indicator devices are commercially available, based on microbial spores. Most contain spores of the heat-resistant microbe Geobacillus stearothermophilus (formerly Bacillus stearothermophilus ), which is extremely resistant to steam sterilization. Biological indicators may take the form of glass vials of spores and liquid media, or of spores on strips of paper inside glassine envelopes. These indicators are placed in locations where it is difficult for steam to reach, to verify that steam is penetrating there. For autoclaving, cleaning is critical. Extraneous biological matter or grime may shield organisms from steam penetration. Proper cleaning can be achieved through physical scrubbing, sonication , ultrasound , or pulsed air. [ 17 ] Pressure cooking and canning are analogous to autoclaving, and when performed correctly render food sterile. [ 18 ] [ failed verification ] To sterilize waste materials that are chiefly composed of liquid, a purpose-built effluent decontamination system can be utilized. These devices can function using a variety of sterilants, although using heat via steam is most common. [ citation needed ] Dry heat was the first method of sterilization and is a longer process than moist heat sterilization. The destruction of microorganisms through the use of dry heat is a gradual phenomenon. With longer exposure to lethal temperatures, the number of killed microorganisms increases. Forced ventilation of hot air can be used to increase the rate at which heat is transferred to an organism and reduce the temperature and amount of time needed to achieve sterility. At higher temperatures, shorter exposure times are required to kill organisms. This can reduce heat-induced damage to food products. [ 19 ] The standard setting for a hot air oven is at least two hours at 160 °C (320 °F). A rapid method heats air to 463.15 K (190.00 °C; 374.00 °F) for 6 minutes for unwrapped objects and 12 minutes for wrapped objects. [ 20 ] [ 21 ] Dry heat has the advantage that it can be used on powders and other heat-stable items that are adversely affected by steam (e.g., it does not cause rusting of steel objects). Flaming is done to inoculation loops and straight wires in microbiology labs for streaking . Leaving the loop in the flame of a Bunsen burner or alcohol burner until it glows red ensures that any infectious agent is inactivated or killed. This is commonly used for small metal or glass objects, but not for large objects (see Incineration below).
However, during the initial heating, infectious material may be sprayed from the wire surface before it is killed, contaminating nearby surfaces and objects. Therefore, special heaters have been developed that surround the inoculating loop with a heated cage, ensuring that such sprayed material does not further contaminate the area. Another problem is that gas flames may leave carbon or other residues on the object if the object is not heated enough. A variation on flaming is to dip the object in a solution of 70% or more ethanol , then briefly hold the object in the flame of a Bunsen burner . The ethanol ignites and burns off rapidly, leaving less residue than a gas flame. [ citation needed ] Incineration is a waste treatment process that involves the combustion of organic substances contained in waste materials. This method also burns any organism to ash. It is used to sterilize medical and other biohazardous waste before it is discarded with non-hazardous waste. Bacteria incinerators are mini furnaces that incinerate and kill off any microorganisms that may be on an inoculating loop or wire. [ 22 ] Named after John Tyndall , tyndallization [ 23 ] is an obsolete and lengthy process designed to reduce the activity of sporulating microbes that survive a simple boiling-water treatment. The process involves boiling for a period of time (typically 20 minutes) at atmospheric pressure, cooling, incubating for a day, and then repeating the process a total of three to four times. The incubation allows heat-resistant spores that survived the previous boiling period to germinate and form the heat-sensitive vegetative (growing) stage, which can be killed by the next boiling step. This is effective because many spores are stimulated to grow by the heat shock. The procedure only works for media that can support bacterial growth, and will not sterilize non-nutritive substrates like water. Tyndallization is also ineffective against prions. Glass bead sterilizers work by heating glass beads to 250 °C (482 °F). Instruments are then quickly doused in these glass beads, which heat the object while physically scraping contaminants off its surface. Glass bead sterilizers were once a common sterilization method employed in dental offices as well as biological laboratories, [ 24 ] but have not been approved by the U.S. Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC) for use as sterilizers since 1997. [ 25 ] They are still popular in European and Israeli dental practices, although there are no current evidence-based guidelines for using this sterilizer. [ 24 ] Chemicals are also used for sterilization. Heating provides a reliable way to rid objects of all transmissible agents, but it is not always appropriate if it will damage heat-sensitive materials such as biological materials, fiber optics , electronics, and many plastics . In these situations, chemicals, either in a gaseous or liquid form, can be used as sterilants. While the use of gas and liquid chemical sterilants avoids the problem of heat damage, users must ensure that the article to be sterilized is chemically compatible with the sterilant being used and that the sterilant is able to reach all surfaces that must be sterilized (typically it cannot penetrate packaging). In addition, the use of chemical sterilants poses new challenges for workplace safety , as the properties that make chemicals effective sterilants usually make them harmful to humans.
The procedure for removing sterilant residue from the sterilized materials varies depending on the chemical and process that is used. [ citation needed ] Ethylene oxide (EO, EtO) gas treatment is one of the common methods used to sterilize, pasteurize, or disinfect items because of its wide range of material compatibility. It is also used to process items that are sensitive to processing with other methods, such as radiation (gamma, electron beam, X-ray), heat (moist or dry), or other chemicals. Ethylene oxide treatment is the most common chemical sterilization method, used for approximately 70% of total sterilizations, and for over 50% of all disposable medical devices. [ 26 ] [ 27 ] Ethylene oxide treatment is generally carried out between 30 and 60 °C (86 and 140 °F) with relative humidity above 30% and a gas concentration between 200 and 800 mg/L. [ 28 ] Typically, the process lasts for several hours. Ethylene oxide is highly effective, as it penetrates all porous materials , and it can penetrate through some plastic materials and films. Ethylene oxide kills all known microorganisms, such as bacteria (including spores), viruses, and fungi (including yeasts and moulds), and is compatible with almost all materials even when used repeatedly. It is flammable, toxic, and carcinogenic ; however, adverse health effects have been reported chiefly when it is not used in compliance with published requirements. Ethylene oxide sterilizers and processes require biological validation after sterilizer installation, significant repairs, or process changes. The traditional process consists of a preconditioning phase (in a separate room or cell), a processing phase (more commonly in a vacuum vessel and sometimes in a pressure-rated vessel), and an aeration phase (in a separate room or cell) to remove EO residues and by-products such as ethylene chlorohydrin (EC or ECH) and, of lesser importance, ethylene glycol (EG). An alternative process, known as all-in-one processing, also exists for some products, whereby all three phases are performed in the vacuum or pressure-rated vessel. This latter option can facilitate faster overall processing time and residue dissipation. The most common EO processing method is the gas chamber. To benefit from economies of scale , EO has traditionally been delivered by filling a large chamber with a combination of gaseous EO, either as pure EO or mixed with other gases used as diluents; diluents include chlorofluorocarbons ( CFCs ), hydrochlorofluorocarbons (HCFCs), and carbon dioxide . [ 29 ] Ethylene oxide is still widely used by medical device manufacturers. [ 30 ] Since EO is explosive at concentrations above 3%, [ 31 ] EO was traditionally supplied with an inert carrier gas, such as a CFC or HCFC. The use of CFCs or HCFCs as the carrier gas was banned because of concerns about ozone depletion . [ 32 ] These halogenated hydrocarbons are being replaced by systems using 100% EO, because of regulations and the high cost of the blends. In hospitals, most EO sterilizers use single-use cartridges because of the convenience and ease of use compared to the former plumbed gas cylinders of EO blends. It is important to adhere to government-specified limits for EO residues in and on processed products, for patient and healthcare personnel exposure after processing, for worker exposure during storage and handling of EO gas cylinders, and for environmental emissions produced when using EO. The U.S.
Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) at 1 ppm – calculated as an 8-hour time-weighted average (TWA) – and 5 ppm as a 15-minute excursion limit (EL). The National Institute for Occupational Safety and Health 's (NIOSH) immediately dangerous to life and health limit (IDLH) for EO is 800 ppm. [ 33 ] The odor threshold is around 500 ppm, [ 34 ] so EO is imperceptible until concentrations are well above the OSHA PEL. Therefore, OSHA recommends that continuous gas monitoring systems be used to protect workers using EO for processing. [ 35 ] Nitrogen dioxide (NO 2 ) gas is a rapid and effective sterilant for use against a wide range of microorganisms, including common bacteria, viruses, and spores. The unique physical properties of NO 2 gas allow for sterilant dispersion in an enclosed environment at room temperature and atmospheric pressure. The mechanism for lethality is the degradation of DNA in the spore's core through nitration of the phosphate backbone, which kills the exposed organism as it absorbs NO 2 . This degradation occurs even at very low concentrations of the gas. [ 36 ] NO 2 has a boiling point of 21 °C (70 °F) at sea level, which results in a relatively high saturated vapour pressure at ambient temperature. Because of this, liquid NO 2 may be used as a convenient source for the sterilant gas. Liquid NO 2 is often referred to by the name of its dimer , dinitrogen tetroxide (N 2 O 4 ). Additionally, the low concentrations required, coupled with the high vapour pressure, ensure that no condensation occurs on the devices being sterilized. This means that no aeration of the devices is required immediately following the sterilization cycle. [ 37 ] NO 2 is also less corrosive than other sterilant gases, and is compatible with most medical materials and adhesives. [ 37 ] The most-resistant organism (MRO) to sterilization with NO 2 gas is the spore of Geobacillus stearothermophilus , which is the same MRO for both steam and hydrogen peroxide sterilization processes. The spore form of G. stearothermophilus has been well characterized over the years as a biological indicator in sterilization applications. Microbial inactivation of G. stearothermophilus with NO 2 gas proceeds rapidly in a log-linear fashion, as is typical of other sterilization processes. Noxilizer, Inc. has commercialized this technology to offer contract sterilization services for medical devices at its Baltimore, Maryland (USA) facility. [ 38 ] This rapid inactivation has been demonstrated in Noxilizer's lab in multiple studies and is supported by published reports from other labs. These same properties also allow for quicker removal of the sterilant and residual gases through aeration of the enclosed environment. The combination of rapid lethality and easy removal of the gas allows for shorter overall cycle times during the sterilization (or decontamination) process and a lower level of sterilant residuals than are found with other sterilization methods. [ 37 ] Eniware, LLC has developed a portable, power-free sterilizer that uses no electricity, heat, or water. [ 39 ] The 25-liter unit makes sterilization of surgical instruments possible for austere forward surgical teams, for health centers throughout the world with intermittent or no electricity, and in disaster relief and humanitarian crisis situations. The 4-hour cycle uses a single-use gas generation ampoule and a disposable scrubber to remove NO 2 gas. [ 40 ]
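The 8-hour time-weighted average used in the OSHA limits above is a simple weighted mean of concentration over the work shift. A small illustrative Python sketch (the exposure numbers are hypothetical, not guidance):

```python
def twa_8h(intervals):
    """8-hour TWA from (concentration_ppm, duration_h) pairs; any time not
    listed is treated as zero exposure within the 8-hour shift."""
    return sum(c * t for c, t in intervals) / 8.0

# Hypothetical shift: 2 h at 3 ppm near a sterilizer, then 6 h at 0.2 ppm.
exposure = [(3.0, 2.0), (0.2, 6.0)]
print(f"TWA = {twa_8h(exposure):.2f} ppm")  # 0.90 ppm, below the 1 ppm PEL
```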
Ozone is used in industrial settings to sterilize water and air, as well as to disinfect surfaces. It has the benefit of being able to oxidize most organic matter. On the other hand, it is a toxic and unstable gas that must be produced on-site, so it is not practical to use in many settings. [ 41 ] Ozone offers many advantages as a sterilant gas; ozone is a very efficient sterilant because of its strong oxidizing properties ( E ° = 2.076 V vs SHE [ 42 ] ), capable of destroying a wide range of pathogens, including prions, without the need for handling hazardous chemicals, since the ozone is generated within the sterilizer from medical-grade oxygen . The high reactivity of ozone means that waste ozone can be destroyed by passing it over a simple catalyst that reverts it to oxygen, and ensures that the cycle time is relatively short. The disadvantage of using ozone is that the gas is very reactive and very hazardous. The NIOSH IDLH for ozone is 5 ppm, 160 times smaller than the 800 ppm IDLH for ethylene oxide. NIOSH [ 43 ] and OSHA have set the PEL for ozone at 0.1 ppm , calculated as an 8-hour time-weighted average. The sterilant gas manufacturers include many safety features in their products, but prudent practice is to provide continuous monitoring of exposure to ozone in order to provide a rapid warning in the event of a leak. Monitors for determining workplace exposure to ozone are commercially available. Glutaraldehyde and formaldehyde solutions (also used as fixatives ) are accepted liquid sterilizing agents, provided that the immersion time is sufficiently long. To kill all spores in a clear liquid can take up to 22 hours with glutaraldehyde and even longer with formaldehyde. The presence of solid particles may lengthen the required period or render the treatment ineffective. Sterilization of blocks of tissue can take much longer, due to the time required for the fixative to penetrate. Glutaraldehyde and formaldehyde are volatile , and toxic by both skin contact and inhalation. Glutaraldehyde has a short shelf life (<2 weeks), and is expensive. Formaldehyde is less expensive and has a much longer shelf life if some methanol is added to inhibit polymerization to paraformaldehyde , but is much more volatile. Formaldehyde is also used as a gaseous sterilizing agent; in this case, it is prepared on-site by depolymerization of solid paraformaldehyde. Many vaccines, such as the original Salk polio vaccine , are sterilized with formaldehyde. Hydrogen peroxide , both as a liquid and as vaporized hydrogen peroxide (VHP), is another chemical sterilizing agent. Hydrogen peroxide is a strong oxidant , which allows it to destroy a wide range of pathogens. Hydrogen peroxide is used to sterilize heat- or temperature-sensitive articles, such as rigid endoscopes . In medical sterilization, hydrogen peroxide is used at higher concentrations, ranging from around 35% up to 90%. The biggest advantage of hydrogen peroxide as a sterilant is the short cycle time: whereas the cycle time for ethylene oxide may be 10 to 15 hours, some modern hydrogen peroxide sterilizers have a cycle time as short as 28 minutes. [ 44 ] Drawbacks of hydrogen peroxide include material compatibility, a lower capability for penetration, and operator health risks. Products containing cellulose, such as paper, cannot be sterilized using VHP, and products containing nylon may become brittle.
[ 45 ] The penetrating ability of hydrogen peroxide is not as good as that of ethylene oxide, [ citation needed ] so there are limitations on the length and diameter of the lumen of objects that can be effectively sterilized. Hydrogen peroxide is a primary irritant, and contact of the liquid solution with skin will cause bleaching or ulceration depending on the concentration and contact time. It is relatively non-toxic when diluted to low concentrations, but is a dangerous oxidizer at high concentrations (> 10% w/w). The vapour is also hazardous, primarily affecting the eyes and respiratory system. Even short-term exposures can be hazardous, and NIOSH has set the IDLH at 75 ppm, [ 33 ] less than one tenth of the IDLH for ethylene oxide (800 ppm). Prolonged exposure to lower concentrations can cause permanent lung damage, and consequently OSHA has set the permissible exposure limit to 1.0 ppm, calculated as an 8-hour time-weighted average. [ 46 ] Sterilizer manufacturers go to great lengths to make their products safe through careful design and the incorporation of many safety features, though there are still workplace exposures to hydrogen peroxide from gas sterilizers documented in the FDA Manufacturer and User Facility Device Experience (MAUDE) database. [ 47 ] When using any type of gas sterilizer, prudent work practices include good ventilation, a continuous gas monitor for hydrogen peroxide, and good training. [ 48 ] [ 49 ] Vaporized hydrogen peroxide (VHP) is used to sterilize large enclosed and sealed areas, such as entire rooms and aircraft interiors. Although toxic, VHP breaks down in a short time to water and oxygen. Peracetic acid (0.2%) is a sterilant recognized by the FDA [ 50 ] for use in sterilizing medical devices such as endoscopes . Peracetic acid, also known as peroxyacetic acid, is a chemical compound often used in disinfectants such as sanitizers. It is most commonly produced by the reaction of acetic acid with hydrogen peroxide using an acid catalyst. Peracetic acid is never sold in unstabilized solutions, which is why it is considered to be environmentally friendly. [ 51 ] Peracetic acid is a colorless liquid with the molecular formula C 2 H 4 O 3 , or CH 3 COOOH. [ 52 ] More recently, peracetic acid has been used throughout the world as fumigation is increasingly employed to decontaminate surfaces and reduce the risk of COVID-19 and other diseases. [ 53 ] Prions are highly resistant to chemical sterilization. [ 54 ] Treatment with aldehydes , such as formaldehyde, has actually been shown to increase prion resistance. Hydrogen peroxide (3%) applied for 1 hour was shown to be ineffective, providing less than a 3-log (10 −3 ) reduction in contamination. Iodine , formaldehyde, glutaraldehyde, and peracetic acid also fail this test (1-hour treatment). [ 55 ] Only chlorine , phenolic compounds , guanidinium thiocyanate , and sodium hydroxide reduce prion levels by more than 4 logs; chlorine (too corrosive to use on certain objects) and sodium hydroxide are the most consistent. Many studies have shown the effectiveness of sodium hydroxide. [ 56 ] Sterilization can be achieved using electromagnetic radiation , such as ultraviolet light (UV), X-rays , and gamma rays , or irradiation by subatomic particles such as electron beams . [ 57 ] Electromagnetic or particulate radiation can be energetic enough to ionize atoms or molecules ( ionizing radiation ), or not energetic enough to do so ( non-ionizing radiation ).
[ citation needed ] UV irradiation (from a germicidal lamp ) is useful for sterilization of surfaces and some transparent objects. Many objects that are transparent to visible light absorb UV. UV irradiation is routinely used to sterilize the interiors of biological safety cabinets between uses, but is ineffective in shaded areas, including areas under dirt (which may become polymerized after prolonged irradiation, so that it is very difficult to remove). It also damages some plastics, such as polystyrene foam, if they are exposed for prolonged periods of time. [ 58 ] The safety of irradiation facilities is regulated by the International Atomic Energy Agency of the United Nations and monitored by the different national nuclear regulatory commissions. The radiation exposure accidents that have occurred in the past are documented by the agency and thoroughly analyzed to determine the cause and the potential for improvement. Such improvements are then mandated for retrofitting existing facilities and for future designs. Gamma radiation is very penetrating and is commonly used for sterilization of disposable medical equipment, such as syringes, needles, cannulas and IV sets, and food. It is emitted by a radioisotope , usually cobalt-60 ( 60 Co) or caesium-137 ( 137 Cs), which have photon energies of up to 1.3 and 0.66 MeV , respectively. Use of a radioisotope requires shielding for the safety of the operators while in use and in storage. With most designs, the radioisotope is lowered into a water-filled source storage pool, which absorbs radiation and allows maintenance personnel to enter the radiation shield. One variant keeps the radioisotope under water at all times and lowers the product to be irradiated into the water in hermetically sealed bells; no further shielding is required for such designs. Other, less common designs use dry storage, with movable shields that reduce radiation levels in areas of the irradiation chamber. An incident in Decatur, Georgia , USA, where water-soluble caesium-137 leaked into the source storage pool, required Nuclear Regulatory Commission (NRC) intervention [ 59 ] and led to the use of this radioisotope being almost entirely discontinued in favor of the more costly, non-water-soluble cobalt-60. Cobalt-60 gamma photons have about twice the energy, and hence greater penetrating range, of caesium-137-produced radiation. Electron beam processing is also commonly used for sterilization. Electron beams use an on-off technology and provide a much higher dosing rate than gamma or X-rays. Due to the higher dose rate, less exposure time is needed, thereby reducing any potential degradation of polymers. Because electrons carry a charge, electron beams are less penetrating than both gamma rays and X-rays. Facilities rely on substantial concrete shields to protect workers and the environment from radiation exposure. [ 60 ] High-energy X-rays (produced by bremsstrahlung ) allow irradiation of large packages and pallet loads of medical devices. They are sufficiently penetrating to treat multiple pallet loads of low-density packages with very good dose uniformity ratios. X-ray sterilization does not require chemical or radioactive material: high-energy X-rays are generated at high intensity by an X-ray generator that does not require shielding when not in use. X-rays are generated by bombarding a dense material (target) such as tantalum or tungsten with high-energy electrons, in a process known as bremsstrahlung conversion.
These systems are energy-inefficient, requiring much more electrical energy than other systems for the same result. Irradiation with X-rays, gamma rays, or electrons does not make materials radioactive , because the energy used is too low. Generally an energy of at least 10 MeV is needed to induce radioactivity in a material. [ 61 ] Neutrons and very high-energy particles can make materials radioactive but have good penetration, whereas lower-energy particles (other than neutrons) cannot make materials radioactive but have poorer penetration. Sterilization by irradiation with gamma rays may, however, affect material properties. [ 62 ] [ 63 ] Irradiation is used by the United States Postal Service to sterilize mail in the Washington, D.C. area. Some foods (e.g., spices and ground meats) are sterilized by irradiation . [ 64 ] Subatomic particles may be more or less penetrating and may be generated by a radioisotope or a device, depending upon the type of particle. Fluids that would be damaged by heat, irradiation, or chemical sterilization, such as drug solutions, can be sterilized by microfiltration using membrane filters . This method is commonly used for heat-labile pharmaceuticals and protein solutions in medicinal drug processing. A microfilter with a pore size of typically 0.22 μm will effectively remove microorganisms . [ 65 ] Some staphylococcal species have, however, been shown to be flexible enough to pass through 0.22 μm filters. [ 66 ] In the processing of biologics , viruses must be removed or inactivated, requiring the use of nanofilters with a smaller pore size (20–50 nm ). Smaller pore sizes lower the flow rate, so in order to achieve higher total throughput or to avoid premature blockage, pre-filters might be used to protect small-pore membrane filters. Tangential flow filtration (TFF) and alternating tangential flow (ATF) systems also reduce particulate accumulation and blockage. Membrane filters used in production processes are commonly made from materials such as mixed cellulose ester or polyethersulfone (PES). The filtration equipment and the filters themselves may be purchased as pre-sterilized disposable units in sealed packaging, or must be sterilized by the user, generally by autoclaving at a temperature that does not damage the fragile filter membranes. To ensure proper functioning of the filter, the membrane filters are integrity tested post-use and sometimes before use. The nondestructive integrity test assures that the filter is undamaged and is a regulatory requirement. [ 67 ] Typically, terminal pharmaceutical sterile filtration is performed inside a cleanroom to prevent contamination. Instruments that have undergone sterilization can be maintained in such condition by containment in sealed packaging until use. Aseptic technique is the act of maintaining sterility during procedures.
https://en.wikipedia.org/wiki/Sterilization_(microbiology)
Sterimol parameters are a set of vectors that describe the steric occupancy of a molecule; they were first developed by Verloop in the 1970s. [ 1 ] Sterimol parameters have found extensive application in quantitative structure-activity relationship (QSAR) studies for drug discovery . [ 2 ] [ 3 ] The introduction of Sterimol parameters into organic synthesis was pioneered by the Sigman research group in the 2010s. [ 4 ] Benefiting from the multi-dimensional values that they carry, Sterimol parameters give more accurate predictions for the enantioselectivity of asymmetric catalytic reactions than their counterparts, especially in cases where structurally complicated ligands are used. Sterimol parameters are built upon the Corey-Pauling-Koltun atomic models , which take into consideration the van der Waals radii of each atom in the molecule. Unlike most other steric parameters such as the A-value , Taft parameters and the Tolman cone angle , which group all the spatial information into a single cumulative value, Sterimol parameters consist of three sub-parameters: one length parameter (L) and two width parameters (B1, B5). The three parameters together profile the three-dimensional spatial information of a molecule. In order to define the Sterimol parameters of a molecule, an axis needs to be defined first. Since Sterimol parameters are usually applied to describe the bulkiness of a certain substituent attached to a substrate, the default choice of axis is the one that passes through the atoms which link the substrate and substituent together. This axis is defined as the X-axis. Once the X-axis has been defined, the Sterimol parameters can be assigned. Take the 1,2-dimethylpropyl group as an example (Figure 1). The length parameter (L) refers to the farthest extension of the substituent in the direction parallel to the X-axis (shown in Figure 1, left). The width parameters are assigned in the plane perpendicular to the X-axis. The width parameter B1 refers to the minimal profile width of the substituent from the X-axis at the linking atom, while parameter B5 refers to the maximal width from the same axis (shown in Figure 1, right). Sterimol B2–B4 parameters were initially used for obtaining the maximal width. However, in his second-generation Sterimol approach, [ 5 ] Verloop pointed out that, due to their directional dependence on Sterimol B1, discrepancies arose when computing those three parameters in cases where B1 could point in multiple directions. Since Sterimol B2 and B3 rarely contributed significantly to any regression functions obtained, and Sterimol B4 was practically equal to B5, the parameters B2–B4 were omitted. The Sterimol B1 parameter reflects the steric effects imposed by branching at the linking atom of a substituent: the more branches the linking atom bears, the larger the substituent's B1 value. On the other hand, the Sterimol B5 parameter is more susceptible to the steric effects of the substituent's terminus. In general, Sterimol B1 represents vicinal steric effects of the substituent, while Sterimol B5 represents remote steric effects. Several open-source programs already include the ability to calculate Sterimol parameters, such as Morfeus, [ 6 ] Kallisto, [ 7 ] and dbstep; [ 8 ] a minimal usage sketch with one of these packages is shown below.
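As an illustration, the following is a minimal sketch of such a calculation with the open-source Morfeus package (API as given in its documentation); the input file name is a hypothetical placeholder, and in practice the geometry would come from an optimized 3D structure.

```python
# Sketch: computing Sterimol L, B1 and B5 with Morfeus.
from morfeus import Sterimol, read_xyz

# Hypothetical file: an optimized substituent geometry, with an atom on the
# substrate side (e.g. a dummy H) defining the measurement axis.
elements, coordinates = read_xyz("substituent.xyz")

# Atom indices are 1-based: first the substrate/dummy atom, then the linking atom.
sterimol = Sterimol(elements, coordinates, 1, 2)

print(f"L  = {sterimol.L_value:.2f} Å")    # length along the X-axis
print(f"B1 = {sterimol.B_1_value:.2f} Å")  # minimal width at the linking atom
print(f"B5 = {sterimol.B_5_value:.2f} Å")  # maximal width from the axis
```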
In the 2010s, machine learning emerged as a powerful tool for guiding catalyst discovery. More specifically, machine learning models such as multivariate linear regression have been applied to study the linear free energy relationships (LFERs) in catalytic asymmetric organic reactions . These relationships describe the effects that ligand substituents have on reaction outcomes, namely enantioselectivity, and can be extrapolated to predict the performance of ligands outside the known dataset . However, machine learning approaches require well-defined molecular descriptors for the steric and electronic properties of ligands in order to make accurate predictions. Sterimol parameters emerged as good candidates for quantifying the steric environment induced by ligands. In Matthew Sigman's seminal work published in 2012, [ 4 ] Sterimol parameters were implemented in asymmetric catalysis for the first time, in the analysis of an asymmetric Nozaki-Hiyama-Kishi reaction (Figure 2). In initial ligand screening, the team found that the steric hindrance of the ester substituent on the oxazoline - proline -based ligand scaffold was pertinent to the overall enantioselectivity of the reaction. When attempting to use the Charton modification [ 9 ] [ 10 ] of Taft's parameters to probe the LFERs, they observed breaks in linearity with respect to several "isopropyl-like" substituents with large Charton values (Figure 3, left). However, this break did not exist when the Sterimol B1 parameter was used instead: all of the substituents studied demonstrated a good linear correlation between their Sterimol B1 value and the reaction enantioselectivity (Figure 3, right; a schematic fit of this kind is sketched below). Sigman attributed the superiority of Sterimol B1 over Charton values in this prediction to the inherent limitations of the experimentally based Charton values. He noted that the Charton model assumes that the substituent can rotate rapidly around the X-axis. However, in the context of asymmetric catalysis, only one conformation of the substituent provides the transition state with the lowest energy, which leads to the formation of the major enantiomer . Therefore, Charton values tend to overestimate the steric effects of substituents that are non-symmetrical around the X-axis, because they can only describe the averaged conformational behavior of a given substituent. Sterimol parameters, in contrast, are not derived from experimental results, which are sometimes idiosyncratic as a result of distinct mechanisms. By virtue of their origin, namely quantum chemical calculations , Sterimol parameters can more accurately describe the steric effects of a substituent in its static form. Sterimol B1, in particular, can approximate the steric repulsive effect of the exact conformer with the lowest energy. Table 1 demonstrates the differences between the two parameters. For example, while they have the same Sterimol B1 values, the Charton value of the isopropyl-like CHPr2 substituent is significantly larger than that of i -Pr due to overestimation. This explains why a better correlation was obtained with Sterimol B1. To date, the Sigman lab has applied Sterimol parameters in the analysis of several catalytic asymmetric reactions. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] Sterimol parameters are also utilized by chemists worldwide to improve the enantioselectivity of various catalytic reactions, such as conjugate addition , [ 16 ] the Tsuji-Trost reaction , [ 17 ] C–H activation , [ 18 ] cyclopropanation , [ 19 ] etc. [ 20 ] [ 21 ] [ 22 ]
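The schematic fit mentioned above can be sketched as a one-descriptor linear regression of enantioselectivity (expressed as ΔΔG‡) against Sterimol B1; all numbers below are invented placeholders, not data from the Sigman study.

```python
# Sketch: one-descriptor LFER fit of enantioselectivity vs. Sterimol B1.
import numpy as np

b1  = np.array([1.52, 1.90, 2.04, 2.60])  # hypothetical Sterimol B1 values (Å)
ddg = np.array([0.55, 0.98, 1.15, 1.74])  # hypothetical ΔΔG‡ values (kcal/mol)

slope, intercept = np.polyfit(b1, ddg, 1)  # least-squares linear fit
pred = slope * b1 + intercept
r2 = 1 - np.sum((ddg - pred) ** 2) / np.sum((ddg - ddg.mean()) ** 2)

print(f"ΔΔG‡ ≈ {slope:.2f}·B1 + {intercept:.2f}  (R² = {r2:.3f})")
```

A fit of this form, once validated, can be used to predict the performance of ligands outside the training set, which is the extrapolation step described above.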
[ 23 ] Termed "weighted Sterimol" (wSterimol), this new depiction of Sterimol parameters considers the influence of conformational effects . Paton stated that enantioselectivity is a macroscopic observable, and multiple conformations should not be overlooked when generating descriptor values, especially for substituents with greater conformational flexibility. With this in mind, Paton designed the python-based program "wSterimol", which combines conformation search with Sterimol parameter calculation. In a fully automated fashion, the program performs conformer generation, geometry optimization , filtering and Sterimol computation. Finally, the program outputs weighted Sterimol values wB 1 , wB 5 and wL , which are generalized based on Boltzmann distribution . This user-friendly program has been applied in the studies of several asymmetric catalytic systems [ 24 ] [ 25 ]
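The Boltzmann weighting underlying wSterimol can be written in a few lines; the conformer energies and per-conformer B1 values below are hypothetical placeholders, not output from the wSterimol program itself.

```python
# Sketch: Boltzmann-weighted Sterimol value over a conformer ensemble.
import numpy as np

R = 1.987e-3                          # gas constant, kcal/(mol·K)
T = 298.15                            # temperature, K

rel_e = np.array([0.0, 0.4, 1.1])     # conformer energies rel. to minimum (kcal/mol)
b1    = np.array([1.90, 1.75, 2.10])  # per-conformer Sterimol B1 values (Å)

weights = np.exp(-rel_e / (R * T))
weights /= weights.sum()              # Boltzmann populations, summing to 1

w_b1 = float(weights @ b1)            # the weighted value wB1
print(f"wB1 = {w_b1:.2f} Å (populations: {np.round(weights, 3)})")
```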
https://en.wikipedia.org/wiki/Sterimol_parameter
The Sterling 10.5 axle is an automotive axle manufactured by Ford Motor Company at the Sterling Axle Plant in Sterling Heights, MI . It was first used in model year 1985 Ford trucks. The axle was developed to replace the Dana 60 and Dana 70 . The Sterling 10.5 axle is currently only made as a full-floating axle. Originally this axle was made as the Sterling 10.25, with a ring gear that measured 10.25 inches (260 mm), until it was upgraded in 1999 to the Sterling 10.50 for the Ford Super Duty trucks. The 10.25 axle came with drum brakes. There were two versions of the Sterling 10.25. The first version was produced from 1985 to 1992. The second version, produced from 1993 to 1997, featured a stronger pinion/yoke. This corrected a known, but rare, issue of the pinion yoke nut working loose. The first and second versions of the 10.25 axle are colloquially known as "short pinion-yoke" and "long pinion-yoke", respectively. The 10.25 is still made. It appears the same as the 10.5 externally, except that it uses a one-piece differential case with two spider gears. The semi-floating variant is less common and was used in F-250 trucks with a lower GVWR (7,200 lb) all the way through 1996. The 10th generation Ford F-150 offered this axle in the light-duty F-250 trucks from 1997 to 1999. For model year 2000–2004 trucks, the F-150 7700 offered this axle and the light-duty F-250 was discontinued. At least until 2011, the 12-bolt semi-floating version was still found in the heavy-duty F-150. Gear ratios were 3.73 in 4x4 models and either 3.73 or 4.10 in the two-wheel-drive models. The axle was available in both limited-slip and standard variations. All 4x4 models came with limited slip ("L" on the tag); this can also be determined from the axle code on the door-jamb VIN label. In 1999 Ford revamped their heavy trucks, including their own Sterling axle. An upgraded three-spider-gear differential carrier was added. The differentials are interchangeable between the two variations. The new axle also came with disc brakes and a unique 8x170 mm (6.7 in) metric lug pattern. Options and features remained unchanged until model year 2011 trucks. 2011 models with this axle have the option of being equipped with an electronic locking differential. [ 1 ]
https://en.wikipedia.org/wiki/Sterling_10.5_axle
Sterling silver is an alloy composed by weight of 92.5% silver and 7.5% other metals , usually copper . The sterling silver standard has a minimum millesimal fineness of 925. Fine silver , which is 99.9% pure silver, is relatively soft, so silver is usually alloyed with copper to increase its hardness and strength. Sterling silver is prone to tarnishing , [ 1 ] and elements other than copper can be used in alloys to reduce tarnishing, as well as casting porosity and firescale . Such elements include germanium , zinc , platinum , silicon , and boron . Recent examples of these alloys include argentium , sterlium and silvadium . [ 2 ] The term sterling silver originally meant "silver fit to be used in the making of sterlings", sterling being another name for the English silver penny . The etymology of sterling itself is unclear and disputed . [ 3 ] A piece of sterling silver dating from Henry II 's reign was used as a standard in the Trial of the Pyx until it was deposited at the Royal Mint in 1843. It bears the royal stamp ENRI. REX ("King Henry") but this was added later, in the reign of Henry III . The first legal definition of sterling silver appeared in 1275, when a statute of Edward I specified that 12 troy ounces of silver for coinage should contain 11 ounces 2 + 1 ⁄ 4 pennyweights of silver and 17 + 3 ⁄ 4 pennyweights of alloy, with 20 pennyweights to the troy ounce. [ 4 ] This works out to 222 1⁄4 pennyweights of silver out of 240 pennyweights in total, which is (not precisely) equivalent to a millesimal fineness of 926. In Colonial America , sterling silver was used for currency and general goods as well. Between 1634 and 1776, some 500 silversmiths created items in the "New World" ranging from simple buckles to ornate Rococo coffee pots. Although silversmiths of this era were typically familiar with all precious metals, they primarily worked in sterling silver. The colonies lacked an assay office during this time (the first would be established in 1814), so American silversmiths adhered to the standard set by the London Goldsmiths Company : sterling silver consisted of 91.5–92.5 wt% silver and 8.5–7.5 wt% copper. [ 5 ] Stamping each of their pieces with their personal maker's mark , colonial silversmiths relied upon their own status to guarantee the quality and composition of their products. [ 5 ] Colonial silversmiths used many of the techniques developed by those in Europe. Casting was frequently the first step in manufacturing silver pieces, as silver workers would melt down sterling silver into easily manageable ingots . Occasionally, they would create small components (e.g. teapot legs) by casting silver into iron or graphite molds, but it was rare for an entire piece to be fabricated via casting. [ 6 ] Silversmiths would forge an ingot into the desired shape by hammering at room temperature; this cold-forming process caused work hardening of the silver, which became increasingly brittle and difficult to shape. [ 6 ] To restore the workability, the silversmith would anneal the piece—that is, heat it to a dull red and then quench it in water—to relieve the stresses in the material and return it to a more ductile state. [ 7 ] Hammering required more time than all other silver manufacturing processes, and therefore accounted for the majority of labor costs. [ 6 ] Silversmiths would then seam parts together to create complex and artistic items, sealing the gaps with a solder of 80 wt% silver and 20 wt% bronze. Finally, they would file and polish their work to remove all seams, finishing off with engraving and stamping the smith's mark.
[ 8 ] The American revolutionary Paul Revere was regarded as one of the best silversmiths from this "Golden Age of American Silver". Following the Revolutionary War , Revere acquired and made use of a silver rolling mill from England. [ 9 ] Not only did the rolling mill increase his rate of production [ 10 ] —hammering and flattening silver took most of a silversmith's time—but he was also able to roll and sell silver of appropriate, uniform thickness to other silversmiths. [ 11 ] He retired a wealthy artisan, his success partly due to this strategic investment. Although he is celebrated for his beautiful hollowware , Revere made his fortune primarily on low-end goods produced by the mill, such as flatware. [ 12 ] With the onset of the first Industrial Revolution , silversmithing declined as an artistic occupation. From about 1840 to 1940 in the United States and Europe, sterling silver cutlery (US: 'flatware') became de rigueur when setting a proper table . There was a marked increase in the number of silver companies that emerged during that period. The height of the silver craze was during the 50-year period from 1870 to 1920. Flatware lines during this period sometimes included up to 100 different types of pieces. Some countries developed systems of hallmarking silver. Individual eating implements were often numerous and highly specialized. [ citation needed ] This was especially true during the Victorian period, when etiquette dictated that no food should be touched with one's fingers. Serving pieces were often elaborately decorated, pierced and embellished with ivory , and could include any or all of the following: [ citation needed ] carving knife and fork, salad knife and fork, cold meat fork, punch ladle , soup ladle, gravy ladle, casserole -serving spoon, berry spoon, lasagna server, macaroni server, asparagus server, cucumber server, tomato server, olive spoon, cheese scoop, fish knife and fork , pastry server, petit four server, cake knife, bon bon spoon, salt spoon , sugar sifter or caster and crumb remover with brush. Cutlery sets were often accompanied by tea sets , hot water pots, chocolate pots, trays and salvers , goblets, demitasse cups and saucers, liqueur cups, bouillon cups, egg cups, plates, napkin rings, water and wine pitchers and coasters, candelabra and even elaborate centerpieces. The interest in sterling silver extended to business ( paper clips , mechanical pencils , letter openers, calling card boxes, cigarette cases ), to the boudoir (dresser trays, mirrors, hair and suit brushes, pill bottles, manicure sets, shoehorns , perfume bottles, powder bottles, hair clips ) and even to children (cups, cutlery , rattles ). Sterling silver has had many other uses as well. Silver is not a very reactive metal and does not react with oxygen or water at ordinary temperatures, so it does not easily oxidize. However, it is attacked by common components of atmospheric pollution . Silver sulfide slowly appears as a black tarnish during exposure to airborne compounds of sulfur (byproducts of the burning of fossil fuels and some industrial processes), and low-level ozone reacts to form silver oxide. [ 14 ] As the purity of the silver decreases, the problem of corrosion or tarnishing increases, because other metals in the alloy, usually copper, may react with oxygen in the air. The black silver sulfide (Ag2S) is among the most insoluble salts in aqueous solution , a property that is exploited for separating silver ions from other positive ions .
Sodium chloride (NaCl), or common table salt, is known to corrode silver-copper alloy; this is typically seen in silver salt shakers, where corrosion appears around the holes in the top. Several products have been developed to polish silver by removing sulfur from the metal without damaging or warping it. Because harsh polishing and buffing can permanently damage and devalue a piece of antique silver, valuable items are typically hand-polished to preserve the unique patinas of older pieces. Techniques such as wheel polishing , which are typically performed by professional jewelers or silver repair companies, are reserved for extreme tarnish or corrosion.
https://en.wikipedia.org/wiki/Sterling_silver
The dorsal plane (also known as the coronal plane or frontal plane , especially in human anatomy) is an anatomical plane that divides the body into dorsal and ventral sections. [ 1 ] It is perpendicular to the sagittal and transverse planes. The coronal plane is an example of a longitudinal plane . For a human, the mid-coronal plane would transect a standing body into two halves (front and back, or anterior and posterior) along an imaginary line that cuts through both shoulders. The sternal plane ( planum sternale ) is a coronal plane which transects the front of the sternum . [ 2 ] The term is derived from Latin corona ('garland, crown'), from Ancient Greek κορώνη ( korōnē , 'garland, wreath'). The coronal plane is so called because it lies in the same direction as the coronal suture . [ citation needed ]
https://en.wikipedia.org/wiki/Sternal_plane
The Stern–Volmer relationship , named after Otto Stern and Max Volmer , [ 1 ] allows the kinetics of a photophysical intermolecular deactivation process to be explored. Processes such as fluorescence and phosphorescence are examples of intramolecular deactivation ( quenching ) processes. An intermolecular deactivation is one in which the presence of another chemical species can accelerate the decay rate of a chemical in its excited state. In general, this process can be represented by a simple equation: $\mathrm{A}^* + \mathrm{Q} \rightarrow \mathrm{A} + \mathrm{Q}$ or $\mathrm{A}^* + \mathrm{Q} \rightarrow \mathrm{A} + \mathrm{Q}^*$, where A is one chemical species, Q is another (known as a quencher) and * designates an excited state. The kinetics of this process follows the Stern–Volmer relationship: $\frac{I_f^0}{I_f} = 1 + k_q \tau_0 [\mathrm{Q}]$, where $I_f^0$ is the intensity, or rate, of fluorescence without a quencher, $I_f$ is the intensity, or rate, of fluorescence with a quencher, $k_q$ is the quencher rate coefficient, $\tau_0$ is the lifetime of the emissive excited state of A without a quencher present, and $[\mathrm{Q}]$ is the concentration of the quencher. [ 2 ] For diffusion-limited quenching ( i.e. , quenching in which the time for quencher particles to diffuse toward and collide with excited particles is the limiting factor, and almost all such collisions are effective), the quenching rate coefficient is given by $k_q = 8RT/3\eta$, where $R$ is the ideal gas constant, $T$ is the temperature in kelvins and $\eta$ is the viscosity of the solution. This formula is derived from the Stokes–Einstein relation and is only useful in this form in the case of two spherical particles of identical radius that react every time they approach a distance R, which is equal to the sum of their two radii. The more general expression for the diffusion-limited rate constant is $k_q = \frac{2RT}{3\eta}\left[\frac{r_a + r_b}{r_a r_b}\right] d_{cc}$, where $r_a$ and $r_b$ are the radii of the two molecules and $d_{cc}$ is an approach distance at which unity reaction efficiency is expected (this is an approximation). In reality, only a fraction of the collisions with the quencher are effective at quenching, so the true quenching rate coefficient must be determined experimentally, typically from the slope of a Stern–Volmer plot (a worked sketch follows below). [ 3 ] See also: Optode , a chemical sensor that makes use of this relationship.
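As a worked illustration of applying this relationship (synthetic numbers, not data from any particular study), the sketch below extracts the Stern–Volmer constant $K_{SV} = k_q\tau_0$ from a linear fit of $I_f^0/I_f$ versus $[\mathrm{Q}]$, and also evaluates the diffusion-limited estimate $k_q = 8RT/3\eta$ for water at room temperature; the lifetime $\tau_0$ is an assumed value.

```python
# Sketch: Stern–Volmer analysis on synthetic quenching data.
import numpy as np

conc  = np.array([0.000, 0.002, 0.004, 0.006, 0.008])  # [Q] in mol/L (hypothetical)
ratio = np.array([1.00, 1.21, 1.43, 1.62, 1.85])       # I0/I   (hypothetical)

ksv, intercept = np.polyfit(conc, ratio, 1)  # slope = K_SV; intercept should be ~1
tau0 = 2.0e-8                                # assumed unquenched lifetime (s)
kq = ksv / tau0                              # quencher rate coefficient, L/(mol·s)
print(f"K_SV = {ksv:.1f} L/mol, k_q = {kq:.2e} L/(mol·s)")

# Diffusion-limited estimate k_q = 8RT/(3*eta) for water at 298 K:
R, T, eta = 8.314, 298.0, 8.9e-4             # J/(mol·K), K, Pa·s
kq_diff = 8 * R * T / (3 * eta) * 1e3        # m^3/(mol·s) converted to L/(mol·s)
print(f"diffusion-limited k_q ≈ {kq_diff:.1e} L/(mol·s)")
```

A fitted $k_q$ approaching the diffusion-limited estimate (here roughly 7×10⁹ L/(mol·s)) indicates that nearly every encounter with the quencher is effective.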
https://en.wikipedia.org/wiki/Stern–Volmer_relationship
Steroid-induced skin atrophy is thinning of the skin at the level of the epidermis as a result of prolonged exposure to topical steroids . This is the most common side effect of overuse or misuse of topical steroids. [ 6 ] Topical steroids are typically prescribed for psoriasis, atopic dermatitis (eczema), and other itchy rashes. [ 7 ] Among people using topical steroids for psoriasis, skin atrophy occurs in up to 5% after a year of use. [ 5 ] Intermittent use of topical steroids for atopic dermatitis is safe and does not cause skin thinning. [ 8 ] [ 9 ] [ 10 ] Skin atrophy can occur with both prescription and over-the-counter steroids. [ 11 ] The potency of the topical steroid influences its propensity to cause skin atrophy. [ 12 ] Oral prednisone and intralesional steroids may also result in atrophied skin. [ 6 ] [ 13 ] Alternatives to topical steroids are available, depending on the skin condition, with a reduced and different side effect profile. [ 14 ] [ 15 ] [ 16 ] Skin atrophy typically presents as thin, shiny skin. Wrinkling of the skin and erythema may also be observed. [ 17 ] In lighter skin tones, erythema presents as bright red, while in darker skin tones it appears more dark brown. [ 18 ] Once atrophy develops, further and deeper topical steroid side effects may occur, such as telangiectasia , easy bruising, purpura , and striae . [ 6 ] [ 7 ] [ 12 ] Intralesional steroids may result in an indentation at the site of injection. [ 6 ] [ 13 ] Steroid-induced skin atrophy is a clinical diagnosis that is aided by patient history. [ 7 ] A correlation between the start of steroid application and the presentation of side effects may be deduced. Key patient history features include the amount of steroid applied, the frequency and length of application, and the location. [ 6 ] An important distinction between a systemic and an external cause of skin conditions is symmetry and definition. Clues for an external cause include well-defined borders in the area of steroid application, as well as asymmetry. [ 6 ] [ 11 ] [ 12 ] [ 13 ] These physical findings support a topical steroid etiology. The mainstay of treatment for steroid-induced skin atrophy is immediate discontinuation of any further topical corticosteroid use. [ 7 ] Protection and support of the impaired skin barrier is another priority. This can be achieved by using gentle lotions, creams, and/or occlusives to restore the skin barrier. [ 7 ] [ 11 ] Eliminating harsh skin regimens or products is necessary to minimize the potential for further purpura or trauma, skin sensitivity, and infection. Alternative treatments with fewer side effects have been developed. Such secondary treatment may also be considered if treatment of the skin condition is refractory to topical steroids. Other treatment choices will depend on the skin condition being treated. Topical steroids are the primary treatment of choice for atopic dermatitis; however, topical immunomodulators (tacrolimus, pimecrolimus) and biologics are available options. [ 14 ] [ 15 ] These alternatives target a different pathway than topical steroids, which helps reduce side effects. [ 14 ] [ 15 ] Topical treatment with steroids is effective in most cases of mild psoriasis. In cases refractory to topical steroids or in the presence of steroid side effects, topical vitamin D analogues (calcipotriol, calcipotriene) have been shown to be effective. [ 16 ] Off-label use of topical calcineurin inhibitors (tacrolimus, pimecrolimus) is also an option, though with less well established efficacy.
[ 16 ] Steroid-induced skin atrophy [ 19 ] [ 20 ] is often permanent, though if it is caught soon enough and the topical corticosteroid discontinued in time, the degree of damage may be arrested or slightly improve. If cessation of the topical steroid occurs while side effects are only at the level of the epidermis, these effects can be reversible. Deeper dermal damage is often irreversible. [ 6 ] [ 12 ] Dermal damage is typically marked by telangiectasias and stretch marks. [ 21 ] However, while the accompanying telangiectasias may improve marginally, the stretch marks are permanent and irreversible. [ 21 ] Several common factors increase the risk of skin thinning with steroid use. Education on proper application and on the signs of steroid side effects is important in preventing permanent side effects. The fingertip unit (FTU) is a concept used to standardize the amount of steroid applied per area of affected skin. [ 22 ] One FTU is approximately 0.5 grams of medication and covers 2% of body surface area. [ 12 ] It is defined as the amount of medication dispensed from the tip of the finger to the distal interphalangeal joint. [ 12 ] The recommended number of FTUs in a single application is determined by the location of the affected skin. The face and neck require 2.5 fingertip units, the front trunk 7 fingertip units, the back trunk 7 fingertip units, one arm 3 fingertip units, one hand (front and back) 1 fingertip unit, one leg 6 fingertip units, and one foot 2 fingertip units (these values lend themselves to a simple dose calculation; see the sketch at the end of this section). [ 12 ] Strong steroids should be avoided on sensitive sites such as the face, inframammary folds, groin, and armpits. [ 7 ] [ 12 ] Application of weak steroids to these areas should be limited to less than two weeks of continuous use. [ 23 ] In general, a potent preparation is used short term and a weaker preparation for maintenance between flare-ups. While there is no proven best benefit-to-risk ratio, [ 24 ] if prolonged use of a topical steroid is required, pulse therapy can be used as a mechanism to prevent side effects. Pulse therapy refers to the application of a corticosteroid for 2 or 3 consecutive days each week or two. This is useful for maintaining control of chronic diseases. Generally a milder topical steroid or non-steroid treatment is used on the in-between days. [ 23 ] For treating atopic dermatitis , newer (second-generation) corticosteroids, such as fluticasone propionate and mometasone furoate , are more effective and safer than older steroid generations. They are also generally safe and do not cause skin thinning when used intermittently to treat atopic dermatitis flare-ups. They are also safe when used twice a week for preventing flares (also known as weekend treatment). [ 8 ] [ 9 ] [ 10 ] Once-daily application is sufficient, as it is as effective as applying twice or more daily. [ 25 ] Steroids have anti-inflammatory, anti-proliferative, and immunosuppressive effects that make them the mainstay of treatment for autoimmune and inflammatory conditions such as psoriasis and atopic dermatitis. [ 5 ] [ 8 ] [ 26 ] Steroids bind intracellularly to their receptors, which are found in the epidermis and dermis. [ 7 ] [ 26 ] [ 17 ] Recent research has identified transactivation of these receptors as a mechanism of steroid-induced atrophy. [ 26 ] [ 17 ] Further research has shown that the pathogenesis of skin atrophy is due to inhibitory effects on key skin components at different levels of the skin: keratinocyte proliferation is inhibited in the epidermis, while collagen 1 and 3 synthesis is halted in the dermis.
[ 17 ] Dermal atrophy intensifies as fibroblast activity and hyaluronan synthase 3 expression cease, leading to decreased hyaluronic acid. [ 17 ] Atrophic effects begin within 3–14 days of consistent topical steroid use and are first evident in the epidermal layer. [ 26 ] Symptoms occurring at the level of the epidermis are reversible. Prolonged use compromises the skin barrier integrity, which can cause dermal (more permanent) effects. [ 26 ] The strength of topical steroids is an important consideration when choosing a steroid to prescribe. Topical steroids are divided into classes based on potency. Low-potency steroids (hydrocortisone, triamcinolone acetonide) may be used on any surface of the body and for a longer period (typically 1–2 weeks). [ 12 ] Higher-potency steroids (clobetasol, betamethasone) are reserved for severe pruritus that has been refractory to lower-potency topical steroids. [ 7 ] [ 12 ] Higher-potency steroids are used for shorter periods to control exacerbations. [ 7 ] [ 12 ]
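The fingertip-unit guidance above can be turned into a simple dose estimate; the helper below is a hypothetical illustration using the per-region values from the text (1 FTU ≈ 0.5 g), not a clinical tool.

```python
# Sketch: grams of topical steroid per application, from fingertip units (FTU).
FTU_PER_REGION = {
    "face and neck": 2.5,
    "front trunk": 7,
    "back trunk": 7,
    "one arm": 3,
    "one hand (front and back)": 1,
    "one leg": 6,
    "one foot": 2,
}
GRAMS_PER_FTU = 0.5  # one FTU is approximately 0.5 g of medication

def grams_per_application(region: str) -> float:
    """Approximate grams of medication for a single application to a region."""
    return FTU_PER_REGION[region] * GRAMS_PER_FTU

print(grams_per_application("one leg"))  # 3.0 g
```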
https://en.wikipedia.org/wiki/Steroid-induced_skin_atrophy
Steroid 11β-hydroxylase , also known as steroid 11β-monooxygenase , is a steroid hydroxylase found in the zona glomerulosa and zona fasciculata of the adrenal cortex . Officially named cytochrome P450 11B1, mitochondrial , it is a protein that in humans is encoded by the CYP11B1 gene . [ 5 ] [ 6 ] The enzyme is involved in the biosynthesis of adrenal corticosteroids [ 7 ] by catalyzing the addition of hydroxyl groups during oxidation reactions. The CYP11B1 gene encodes 11β-hydroxylase, a member of the cytochrome P450 superfamily of enzymes . The cytochrome P450 proteins are monooxygenases that catalyze many reactions involved in drug metabolism and the synthesis of cholesterol , steroids and other lipids . The product of the CYP11B1 gene is the 11β-hydroxylase protein. This protein localizes to the mitochondrial inner membrane and is involved in the conversion of various steroids in the adrenal cortex. Transcript variants encoding different isoforms have been noted for this gene . [ 6 ] The enzyme is reversibly inhibited by etomidate [ 8 ] [ 9 ] and metyrapone . 11β-hydroxylase is a steroidogenic enzyme , i.e. an enzyme involved in the metabolism of steroids . The enzyme is primarily localized in the zona glomerulosa and zona fasciculata of the adrenal cortex. It functions by introducing a hydroxyl group at carbon position 11β of the steroid nucleus, thereby facilitating the conversion of certain steroids. Humans have two isozymes with 11β-hydroxylase activity: CYP11B1 and CYP11B2. CYP11B1 (11β-hydroxylase) is expressed at high levels and is regulated by ACTH , while CYP11B2 ( aldosterone synthase ) is usually expressed at low levels and is regulated by angiotensin II . In addition to the 11β-hydroxylase activity, both isozymes have 18-hydroxylase activity. [ 10 ] The CYP11B1 isozyme has strong 11β-hydroxylase activity, but its 18-hydroxylase activity is only one-tenth that of CYP11B2. [ 11 ] The weak 18-hydroxylase activity of CYP11B1 explains why an adrenal gland with suppressed CYP11B2 expression continues to synthesize 18-hydroxycorticosterone . [ 12 ] The CYP11B1 isozyme acts on several steroids: 11β-hydroxylase has strong [ 13 ] catalytic activity in the conversion of 11-deoxycortisol to cortisol and of 11-deoxycorticosterone to corticosterone , catalyzing the hydroxylation of the carbon–hydrogen bond at the 11β position (an extra –OH is added at the 11 position, near the center of the steroid nucleus, on ring C). As a mitochondrial P450 system, P450c11 is dependent on two electron transfer proteins, adrenodoxin reductase and adrenodoxin , which transfer two electrons from NADPH to the P450 for each monooxygenase reaction catalyzed by the enzyme. In most respects this process of electron transfer appears similar to that of the P450scc system that catalyzes cholesterol side-chain cleavage. [ 25 ] As with the P450scc system, the electron transfer process is leaky, leading to superoxide production. The rate of electron leakage during metabolism depends on the functional groups of the steroid substrate. [ 26 ] The expression of the enzyme in adrenocortical cells is regulated by the trophic hormone corticotropin ( ACTH ). [ 27 ] A mutation in the genes encoding 11β-hydroxylase is associated with congenital adrenal hyperplasia due to 11β-hydroxylase deficiency .
11β-hydroxylase is involved in the metabolism of 17α-hydroxyprogesterone to 21-deoxycortisol , [ 17 ] in cases of congenital adrenal hyperplasia due to 21-hydroxylase deficiency . [ 28 ] [ 29 ]
https://en.wikipedia.org/wiki/Steroid_11β-hydroxylase
The steroidogenic acute regulatory protein , commonly referred to as StAR ( STARD1 ), is a transport protein that regulates cholesterol transfer within the mitochondria , which is the rate-limiting step in the production of steroid hormones. It is primarily present in steroid-producing cells, including theca cells and luteal cells in the ovary , Leydig cells in the testis and cell types in the adrenal cortex . Cholesterol needs to be transferred from the outer mitochondrial membrane to the inner membrane, where the cytochrome P450scc enzyme (CYP11A1) cleaves the cholesterol side chain, the first enzymatic step in all steroid synthesis. The aqueous phase between these two membranes cannot be crossed by the lipophilic cholesterol unless certain proteins assist in this process. A number of proteins have historically been proposed to facilitate this transfer, including sterol carrier protein 2 (SCP2), steroidogenic activator polypeptide (SAP), the peripheral benzodiazepine receptor (PBR or translocator protein, TSPO), and StAR. It is now clear that this process is primarily mediated by the action of StAR. The mechanism by which StAR causes cholesterol movement remains unclear, as it appears to act from the outside of the mitochondria and its entry into the mitochondria ends its function. Various hypotheses have been advanced. Some involve StAR transferring cholesterol itself like a shuttle. [ 1 ] [ 2 ] While StAR may bind cholesterol itself, [ 3 ] the sheer number of cholesterol molecules that the protein transfers suggests that it would have to act as a cholesterol channel rather than a shuttle. Another notion is that it causes cholesterol to be moved from the outer membrane to the inner one (cholesterol desorption). [ 4 ] StAR may also promote the formation of contact sites between the outer and inner mitochondrial membranes to allow cholesterol influx. Another suggestion is that StAR acts in conjunction with PBR, causing the movement of Cl − out of the mitochondria to facilitate contact site formation. However, evidence for an interaction between StAR and PBR remains elusive. In humans, the gene for StAR is located on chromosome 8p11.23 [ 5 ] and the protein has 285 amino acids. The signal sequence of StAR that targets it to the mitochondria is clipped off in two steps during import into the mitochondria. Phosphorylation of the serine at position 195 increases its activity. [ 6 ] The domain of StAR important for promoting cholesterol transfer is the StAR-related transfer domain (START domain). StAR is the prototypic member of the START domain family of proteins and is thus also known as STARD1, for "START domain-containing protein 1". [ 7 ] It is hypothesized that the START domain forms a pocket in StAR that binds single cholesterol molecules for delivery to P450scc . The closest homolog to StAR is MLN64 (STARD3). [ 8 ] Together they comprise the StarD1/D3 subfamily of START domain-containing proteins. StAR is a mitochondrial protein that is rapidly synthesized in response to stimulation of the cell to produce steroid. Hormones that stimulate its production depend on the cell type and include luteinizing hormone (LH), ACTH and angiotensin II . At the cellular level, StAR is typically synthesized in response to activation of the cAMP second messenger system , although other systems can be involved, even independently of cAMP . [ 9 ] StAR has thus far been found in all tissues that can produce steroids, including the adrenal cortex, the gonads , the brain and the nonhuman placenta .
[ 10 ] One known exception is the human placenta. Substances that suppress StAR activity can cause endocrine-disrupting effects, including altered steroid hormone levels and impaired fertility. Mutations in the gene for StAR cause lipoid congenital adrenal hyperplasia (lipoid CAH), in which patients produce little steroid and can die shortly after birth. [ 10 ] Mutations that less severely affect the function of StAR result in nonclassic lipoid CAH or familial glucocorticoid deficiency type 3. [ 16 ] [ 17 ] All known mutations disrupt StAR function by altering its START domain. In the case of StAR mutation, the phenotype does not present until birth, since human placental steroidogenesis is independent of StAR. At the cellular level, the lack of StAR results in a pathologic accumulation of lipid within cells, especially noticeable in the adrenal cortex as seen in the mouse model. The testes are undescended and the resident steroidogenic Leydig cells are modestly affected. Early in life, the ovary is spared, as it does not express StAR until puberty. After puberty, lipid accumulations and hallmarks of ovarian failure are noted. [ citation needed ] While loss of functional StAR in the human and the mouse catastrophically reduces steroid production, it does not eliminate all of it, indicating the existence of StAR-independent pathways for steroid generation. Aside from the human placenta , these pathways are considered minor for endocrine production. It is unclear what factors catalyze StAR-independent steroidogenesis. Candidates include oxysterols , which can be freely converted to steroid, [ 18 ] and the ubiquitous MLN64 . Recent findings suggest that StAR may also traffic cholesterol to a second mitochondrial enzyme, sterol 27-hydroxylase . This enzyme converts cholesterol to 27-hydroxycholesterol. In this way it may be important for the first step in one of the two pathways for the production of bile acids by the liver (the alternative pathway). [ 19 ] Evidence also shows the presence of StAR in a type of immune cell , the macrophage , where it can stimulate the production of 27-hydroxycholesterol. [ 20 ] [ 21 ] In this case, 27-hydroxycholesterol may by itself be helpful against the production of inflammatory factors associated with cardiovascular disease . It is important to note that no study has yet found a link between the loss of StAR and problems in bile acid production or increased risk for cardiovascular disease. Recently, StAR was found to be expressed in cardiac fibroblasts in response to ischemic injury due to myocardial infarction. In these cells it has no apparent de novo steroidogenic activity, as evidenced by the lack of the key steroidogenic enzymes cytochrome P450 side chain cleavage (CYP11A1) and 3 beta hydroxysteroid dehydrogenase (3βHSD). StAR was found to have an anti-apoptotic effect on the fibroblasts, which may allow them to survive the initial stress of the infarct, differentiate and function in tissue repair at the infarction site. [ 22 ] The StAR protein was first identified, characterized and named by Douglas Stocco at Texas Tech University Health Sciences Center in 1994. [ 23 ] The role of this protein in lipoid CAH was confirmed the following year in collaboration with Walter Miller at the University of California, San Francisco .
[ 24 ] All of this work follows the initial observations of the appearance of this protein and its phosphorylated form coincident with factors that caused steroid production by Nanette Orme-Johnson while at Tufts University . [ 25 ]
https://en.wikipedia.org/wiki/Steroidogenic_acute_regulatory_protein
The Stetter reaction is a reaction used in organic chemistry to form carbon-carbon bonds through a 1,4-addition reaction utilizing a nucleophilic catalyst . [ 1 ] While the related 1,2-addition reaction, the benzoin condensation , had been known since the 1830s, the Stetter reaction was not reported until 1973, by Hermann Stetter. [ 2 ] The reaction provides synthetically useful 1,4-dicarbonyl compounds and related derivatives from aldehydes and Michael acceptors . Unlike 1,3-dicarbonyls, which are easily accessed through the Claisen condensation , or 1,5-dicarbonyls, which are commonly made using a Michael reaction , 1,4-dicarbonyls are challenging substrates to synthesize, yet are valuable starting materials for several organic transformations, including the Paal–Knorr synthesis of furans and pyrroles. Traditionally utilized catalysts for the Stetter reaction are thiazolium salts and the cyanide anion, but more recent work toward the asymmetric Stetter reaction has found triazolium salts to be effective. The Stetter reaction is an example of umpolung chemistry, as the inherent polarity of the aldehyde is reversed by the addition of the catalyst to the aldehyde, rendering the carbon center nucleophilic rather than electrophilic. Because the Stetter reaction is an example of umpolung chemistry, the aldehyde is converted from an electrophile to a nucleophile under the reaction conditions. [ 3 ] This is accomplished by activation by a catalyst, either cyanide (CN − ) or a thiazolium salt. [ 1 ] With either catalyst the mechanism is very similar; the only difference is that with thiazolium salts, the catalyst must first be deprotonated to form the active catalytic species. The active catalyst can be described as the combination of two contributing resonance forms, an ylide and a carbene , both of which portray the nucleophilic character at carbon. The thiazolium ylide or CN − can then add into the aldehyde substrate, forming a cyanohydrin in the case of CN − or the Breslow intermediate in the case of the thiazolium salt. The Breslow intermediate was proposed by Ronald Breslow in 1958 and is a common intermediate for all thiamine -catalyzed reactions, whether in vitro or in vivo . [ 4 ] Once the "nucleophilic aldehyde" synthon is formed, whether as a cyanohydrin or stabilized by a thiazolium ylide, the reaction can proceed down two pathways. The faster pathway is self-condensation with another molecule of aldehyde to give benzoin products. However, the benzoin condensation is completely reversible and therefore does not interfere with product formation in the Stetter reaction. In fact, benzoins can be used instead of aldehydes as substrates to achieve the same overall Stetter transformation, because benzoins can be restored to their aldehyde precursors under the reaction conditions. [ 1 ] The desired pathway toward the Stetter product is the 1,4-addition of the nucleophilic aldehyde to a Michael-type acceptor. After the 1,4-addition, the reaction is irreversible, and ultimately the 1,4-dicarbonyl is formed as the catalyst is expelled to regenerate CN − or the thiazolium ylide. The Stetter reaction produces classically difficult-to-access 1,4-dicarbonyl compounds and related derivatives. The traditional Stetter reaction is quite versatile, working on a wide variety of substrates. [ 1 ] Aromatic aldehydes, heteroaromatic aldehydes, and benzoins can all be used as acyl anion precursors with thiazolium salt and cyanide catalysts.
However, aliphatic aldehydes can only be utilized if a thiazolium salt is used as the catalyst, as they undergo an aldol condensation side reaction when a cyanide catalyst is used. In addition, α,β-unsaturated esters, ketones, nitriles, nitro compounds, and aldehydes are all appropriate Michael acceptors with either catalyst. However, the general scope of asymmetric Stetter reactions is more limited. Intramolecular asymmetric Stetter reactions enjoy a range of acceptable Michael acceptors and acyl anion precursors in essentially any combination. [ 5 ] Intramolecular asymmetric Stetter reactions can utilize aromatic, heteroaromatic and aliphatic aldehydes with a tethered α,β-unsaturated ester, ketone, thioester, malonate, nitrile or Weinreb amide. It has been shown that α,β-unsaturated nitro compounds and aldehydes are not suitable Michael acceptors and give markedly decreased enantiomeric excess in such reactions. [ 5 ] Another limitation encountered with intramolecular asymmetric Stetter reactions is that only substrates that result in the formation of a six-membered ring show synthetically useful enantiomeric excess; substrates which form five- and seven-membered rings either do not react or show low stereoinduction. [ 5 ] On the other hand, intermolecular asymmetric reactions are quite confined to specifically matched combinations of acyl anion precursor and Michael acceptor, such as an aliphatic aldehyde with a nitroalkene. [ 6 ] In addition, these substrates tend to be rather activated, as the intermolecular asymmetric Stetter reaction is still in the early stages of development. Several variations of the Stetter reaction have been developed since its discovery in 1973. In 2001, Murry et al. reported a Stetter reaction of aromatic aldehydes onto acylimine derivatives to give α-amido ketone products. [ 7 ] The acylimine acceptors were generated in situ from α-tosylamide substrates, which underwent elimination in the presence of base. Good to excellent yields (75–90%) were observed. Mechanistic investigations showed that the corresponding benzoins were not adequate substrates, contrary to traditional Stetter reactions. [ 1 ] From this, the authors concluded that the Stetter reaction of acylimines is under kinetic control rather than thermodynamic control. Another variation of the Stetter reaction involves the use of 1,2-dicarbonyls as precursors to the acyl anion intermediate. In 2005, Scheidt and coworkers reported the use of sodium pyruvate, which loses CO 2 to form the Breslow intermediate. [ 8 ] Similarly, in 2011 Bortolini and coworkers demonstrated the use of α-diketones to generate an acyl anion. [ 9 ] Under the conditions they developed, 2,3-butanedione is cleaved after addition to the thiazolium catalyst to release ethyl acetate and generate the Breslow intermediate necessary for the Stetter reaction to proceed. In addition, they showed the atom economy and utility of using a cyclic α-diketone to generate the Stetter product with a tethered ethyl ester. The reaction proceeds through the same mechanism as the acyclic version, but the ester generated by attack of ethanol remains tethered to the product. However, the conditions only allow for the generation of ethyl esters, due to the necessity of ethanol as the solvent. Substitution of ethanol with tert -butanol resulted in no product. The authors speculate this is due to the difference in acidity between the two alcoholic solvents.
In 2004, Scheidt and coworkers introduced acyl silanes as competent substrates in the Stetter reaction, a variation they termed the "sila-Stetter reaction". [ 10 ] Under their reaction conditions, the thiazolium catalyst induces a [1,2] Brook rearrangement , which is followed by desilylation by an isopropanol additive to give the common Breslow intermediate of the traditional Stetter reaction. The desilylation step was found to be necessary, and the reaction does not proceed without an alcoholic additive. Acyl silanes are less electrophilic than the corresponding aldehydes, preventing the typical benzoin-type byproducts often observed in the Stetter reaction. [ 11 ] The first asymmetric variant of the Stetter reaction was reported in 1996 by Enders et al. , employing a chiral triazolium catalyst 1 . [ 12 ] Subsequently, several other catalysts were reported for asymmetric Stetter reactions, including 2 , [ 13 ] 3 , [ 14 ] and 4 . [ 15 ] The success of the Rovis group's catalyst 2 led them to further explore this family of catalysts and expand their use for asymmetric Stetter reactions. In 2004, they reported the enantioselective formation of quaternary centers from aromatic aldehydes in an intramolecular Stetter reaction with a slightly modified catalyst. [ 16 ] Further work extended the scope of this reaction to include aliphatic aldehydes as well. [ 17 ] Subsequently, it was shown that the olefin geometry of the Michael acceptor dictates diastereoselectivity in these reactions, whereby the catalyst controls the enantioselectivity of the initial carbon–carbon bond formation and allylic strain minimization governs the diastereoselective intramolecular protonation. [ 18 ] The inherent difficulties of controlling enantioselectivity in intermolecular reactions made the development of an intermolecular asymmetric Stetter reaction a challenge. While limited enantiomeric excess had been reported by Enders in the early 1990s for the reaction of n -butanal with chalcone, [ 19 ] conditions for a synthetically useful asymmetric intermolecular Stetter reaction were not reported until 2008, when the groups of both Enders and Rovis published such reactions. The Enders group utilized a triazolium-based catalyst to effect the coupling of aromatic aldehydes with chalcone derivatives in moderate yields. [ 20 ] The concurrent publication from the Rovis group also employed a triazolium-based catalyst and reported the Stetter reaction between glyoxamides and alkylidenemalonates in good to excellent yields. [ 21 ] Rovis and coworkers subsequently went on to explore the asymmetric intermolecular Stetter reaction of heterocyclic aldehydes and nitroalkenes . [ 22 ] During optimization of this reaction, it was found that a catalyst with a fluorinated backbone greatly enhanced enantioselectivity in the reaction. It was proposed that the fluorinated backbone helps to lock the conformation of the catalyst in a way that increases enantioselectivity. Further computational studies on this system verified that the stereoelectronic attraction between the developing partial negative charge on the nitroalkene in the transition state and the partial positive charge of the C–F dipole is responsible for the increase in enantiomeric excess observed with the catalyst with backbone fluorination. [ 23 ] While this is a marked advance in the area of intermolecular asymmetric Stetter reactions, the substrate scope is limited and the catalyst is optimized for the specific substrates being utilized.
Another contribution to the development of asymmetric intermolecular Stetter reactions came from Glorius and coworkers in 2011. [ 6 ] They demonstrated the enantioselective synthesis of α-amino acids by utilizing an N -acylamido acrylate as the conjugate acceptor. Significantly, the reaction can be run on a 5 mmol scale without loss of yield or enantioselectivity. The Stetter reaction is an effective tool in organic synthesis . The products of the Stetter reaction, 1,4-dicarbonyls, are valuable moieties for the synthesis of complex molecules. For example, Trost and coworkers employed a Stetter reaction as one step in their synthesis of rac -hirsutic acid C. [ 24 ] The intramolecular coupling of an aliphatic aldehyde with a tethered α,β-unsaturated ester led to the desired tricyclic 1,4-dicarbonyl in 67% yield. This intermediate was converted into rac -hirsutic acid C in seven more steps. The Stetter reaction is commonly used in sequence with the Paal-Knorr synthesis of furans and pyrroles, in which a 1,4-dicarbonyl undergoes condensation with itself or with an amine under high-temperature, acidic conditions. In 2001, Tius and coworkers reported the asymmetric total synthesis of roseophilin , utilizing an intermolecular Stetter reaction to couple an aliphatic aldehyde with a cyclic enone. [ 25 ] After ring-closing metathesis and alkene reduction, the 1,4-dicarbonyl product was converted to a pyrrole via the Paal-Knorr synthesis and further elaborated to the natural product. In 2004, a one-pot coupling-isomerization-Stetter-Paal-Knorr sequence was reported. [ 26 ] This procedure first utilizes palladium cross-coupling chemistry to couple aryl halides with propargylic alcohols to give α,β-unsaturated ketones, which can then undergo a Stetter reaction with an aldehyde. Once the 1,4-dicarbonyl compound is formed, heating in the presence of acid will give the furan, while heating in the presence of ammonium chloride and acid will give the pyrrole. The entire sequence is performed in one pot with no work-up or purification between steps. Ma and coworkers developed an alternative method for accessing furans utilizing the Stetter reaction. [ 27 ] In their report, 3-aminofurans are synthesized under Stetter conditions by coupling aromatic aldehydes with dimethyl acetylenedicarboxylate (DMAD), whereby the thiazolium ylide is hydrolyzed upon aromatization of the furan product. As the thiazolium is destroyed under these conditions, it is not catalytic and must be used in stoichiometric quantities. They further elaborated on this work by developing a method in which 2-aminofurans are synthesized by cyclization onto a nitrile. [ 28 ] In this method, the thiazolium ylide is employed catalytically and the free amine product is generated.
https://en.wikipedia.org/wiki/Stetter_reaction
Stephen Alan Owens (born August 19, 1955) is an American attorney and politician. Originally from Memphis, Tennessee , he served as chief counsel and state director for U.S. Senator Al Gore before moving to the Phoenix, Arizona area during Gore's unsuccessful presidential run in 1988. He was a fundraiser for the Clinton-Gore campaign in 1992 and, from 1993 to 1995, was chair of the Arizona Democratic Party . He was the Democratic nominee for Arizona's 6th congressional district in 1996 and 1998 , losing both times to incumbent J. D. Hayworth . Owens served as director of the Arizona Department of Environmental Quality from 2003 to 2009 under Governor Janet Napolitano , after which he was appointed by President Barack Obama to be Assistant Administrator of the U.S. Environmental Protection Agency for the Office of Prevention, Pesticides and Toxic Substances. After two years in Washington, he joined Squire Sanders (now Squire Patton Boggs ) as a partner in their Phoenix office. Since February 2022, he has served as a member of the U.S. Chemical Safety and Hazard Investigation Board by appointment of President Joe Biden . Owens was born on August 19, 1955, in Memphis, Tennessee, to Milburne (1924–1995), a truck driver, and Maxine Neal Owens (1932–2019), who worked at Sears . [ 1 ] [ 2 ] He attended Messick High School , where he was elected by his peers as president of the class of 1973. [ 3 ] Later, he was accepted into Brown University on an academic scholarship. [ 4 ] While there, he was an active member of the Undergraduate Council of Students, the school's student government . He won election as vice president in 1976 and as president the following year. [ 5 ] [ 6 ] After five years at Brown, Owens graduated with honors with a degree in public policy in 1978. He then attended Vanderbilt University Law School , where he was editor-in-chief of the school's law review , graduating in 1981. [ 7 ] [ 8 ] He was admitted to the Tennessee bar later that year and spent a year as a law clerk to Judge Thomas A. Wiseman Jr. of the U.S. District Court for the Middle District of Tennessee . [ 9 ] Owens married Karen Lynn Carter on November 12, 1988, at the Customs House in Nashville . [ 10 ] The two knew each other at Vanderbilt Law and reconnected when Owens moved to Phoenix, Arizona , where Carter was practicing law with Janet Napolitano at Lewis & Roca . They went on to have two sons. [ 8 ] Owens first met then-U.S. Representative Al Gore as a law student. [ 11 ] [ 12 ] In 1982, he moved to Washington, D.C., after Gore named him counsel to the House Science and Technology Committee 's Subcommittee on Oversight and Investigations, which Gore chaired. [ 13 ] During the 1984 U.S. Senate election , in which Gore handily defeated Republican state senator Victor Ashe , Owens served as his Shelby County campaign manager. [ 14 ] In the Senate, he was Gore's chief counsel and later his state director. [ 1 ] In 1987, Gore kicked off his campaign for the following year's Democratic presidential nomination . Despite a relatively successful Super Tuesday , by April 1988 he was trailing far behind Michael Dukakis and Jesse Jackson . [ 15 ] Owens, the campaign's Southern director, was dispatched to Phoenix to round up delegates ahead of the April 16 Arizona caucus and ended up staying in the state.
[ 8 ] [ 16 ] He took an active role in state politics, working in 1992 as a fundraiser for the Clinton-Gore campaign , and, on January 16, 1993, he was elected chair of the Arizona Democratic Party , after incumbent Bill Minette declined to run for a second term. [ 12 ] He won reelection in early 1995 but resigned in July of that year, in part to focus on a 1996 congressional run. [ 17 ] He was succeeded by former congressman Sam Coppersmith . [ 18 ] [ 19 ] [ 20 ] [ 21 ] After moving to Phoenix, Owens entered private practice, joining the law firm Brown & Bain as a regulatory attorney and registered lobbyist. [ 11 ] [ 22 ] [ 23 ] Later, he joined Beshears Muchmore Wallwork. [ 24 ] In 2003, when friend Janet Napolitano was sworn in as Governor of Arizona, she appointed Owens to serve as director of the state Department of Environmental Quality. [ 8 ] Six years later, Napolitano and Owens were both tapped for jobs in the Obama administration: Napolitano as Secretary of Homeland Security and Owens as Assistant Administrator of the Environmental Protection Agency for the Office of Prevention, Pesticides and Toxic Substances. [ 25 ] Owens left in 2011 to return to Arizona and become a partner with Squire Sanders (now Squire Patton Boggs ). [ 26 ] In 2021, President Joe Biden nominated Owens to serve on the U.S. Chemical Safety and Hazard Investigation Board . Owens' nomination was confirmed by the Senate in December 2021, [ 27 ] and he began service on February 2, 2022. [ 28 ] Following the resignation of Katherine Lemos in July 2022, President Biden appointed Owens as interim executive authority, and nominated him as chair of the board. On November 17, 2022, the United States Senate Committee on Environment and Public Works held hearings on his nomination. On December 13, 2022, the United States Senate discharged the committee from further consideration of the nomination by unanimous consent agreement, and confirmed the nomination by voice vote. [ 29 ]
https://en.wikipedia.org/wiki/Steve_Owens_(Arizona_politician)
Steve Shnider ( Hebrew : סטיב שניידר ) is a retired professor of mathematics at Bar Ilan University . [ 1 ] He received a PhD in Mathematics from Harvard University in 1972, under Shlomo Sternberg . [ 2 ] His main interests are in the differential geometry of fiber bundles ; algebraic methods in the theory of deformation of geometric structures; symplectic geometry ; supersymmetry ; operads ; and Hopf algebras . He retired in 2014. [ 3 ] A 2002 book by Markl, Shnider and Stasheff, Operads in Algebra, Topology and Physics , was the first book to provide a systematic treatment of operad theory , an area of mathematics that came to prominence in the 1990s and found many applications in algebraic topology , category theory , graph cohomology, representation theory , algebraic geometry , combinatorics , knot theory , moduli spaces , and other areas. The book was the subject of a Featured Review in Mathematical Reviews by Alexander A. Voronov which stated, in particular: "The first book whose main goal is the theory of operads per se ... a book such as this one has been long awaited by a wide scientific readership, including mathematicians and theoretical physicists ... a great piece of mathematical literature and will be helpful to anyone who needs to use operads, from graduate students to mature mathematicians and physicists." [ 4 ]
https://en.wikipedia.org/wiki/Steve_Shnider
Steven M. Bachrach is an organic chemist who took up the position of Dean of Science at Monmouth University in 2016. Bachrach had previously been the Dr. D. R. Semmes Distinguished Professor of Chemistry at Trinity University in San Antonio, Texas. [ 1 ] Bachrach is the author of the textbook Computational Organic Chemistry . [ 2 ] Bachrach earned a Bachelor of Science degree from the University of Illinois Urbana-Champaign and a PhD from the University of California, Berkeley . [ 1 ] Bachrach is an organic chemist specializing in computational organic chemistry and began his career at Northern Illinois University , where he earned a professorship. He spent 17 years at Trinity University , holding positions including the Dr. D. R. Semmes Distinguished Professor of Chemistry, Chair of the Department of Chemistry, and Assistant Vice-President for Special Projects. He then took up the position of Dean of Science in 2016 at Monmouth University . [ 1 ] Bachrach departed from Monmouth University in 2022 and currently holds the position of Dean of the Artis College of Science and Technology at Radford University . [ 3 ] Bachrach has written a textbook about computational organic chemistry, the second edition of which was published by John Wiley and Sons in 2014. [ 2 ] Bachrach maintains a blog to provide supplementary materials for the textbook. [ 5 ] For example, following the publication of the structure of the dication of hexamethylbenzene, [C6(CH3)6]2+, in Angewandte Chemie International Edition , [ 6 ] Bachrach discussed its pyramidal geometry and six-coordinate carbon moiety in a blog post, demonstrating that it is not hypervalent and explaining its three-dimensional aromaticity . [ 4 ]
https://en.wikipedia.org/wiki/Steven_Bachrach
Steven G Arless is a Canadian entrepreneur in the biomedical technology industry. [ 1 ] He directed and developed several medical device companies treating cardiovascular disease from inception through major financing and public stock offering, advancing new medical devices from R&D to commercialization and global sales, including CryoCath Technologies, CardioInsight and Resonant Medical. [ 2 ] [ 3 ] He has served as CEO for a number of companies, most notably CryoCath Technologies (sold to Medtronic Inc. for $400 million CAD), Resonant Medical (sold to Elekta AB for $30 million), and CardioInsight (also sold to Medtronic, for $100 million USD). Arless was born in Montreal, Quebec on July 23, 1949. His father, an accountant, and his mother were immigrants to Canada from Lebanon . [ 4 ] Arless studied Chemistry and Engineering at McGill University , graduating with a BSc in 1971. [ 1 ] Arless also completed his MBA at the John Molson School of Business at Concordia University in Montréal in 2008. [ 1 ] Arless is an angel investor in medical technology and is involved in mentorship and teaching of graduate students at Concordia University and McGill University, where he is a Professor of Practice. [ 1 ] Beginning in 1971, Arless worked at Smith & Nephew , a British medical equipment company, for 17 years, serving as president for five of them. [ 4 ] He was CEO of the North American operations from 1986 to 1990. [ 4 ] In 1996, Arless became CEO of CryoSurge, a company founded in 1995 specializing in a cryoablation treatment for heart arrhythmia, and changed its name to CryoCath Technologies Inc. in 1997. [ 5 ] Arless served as president and CEO of CryoCath from its inception until 2006, when he resigned. [ 6 ] Arless relocated to Cleveland, Ohio in 2009 to become CEO of CardioInsight, a company marketing and advancing research in cardiovascular technology. [ 7 ] [ 6 ] Arless stepped down from his position at CardioInsight in 2012. [ 8 ] Arless co-founded the medical device company Soundbite Medical Solutions, which fabricated wire devices to treat occlusions caused by coronary and peripheral artery disease with shockwaves . [ 9 ] Arless stepped down as CEO in 2017. [ 10 ] As of early 2018, Arless has been serving as MedTech Entrepreneur-in-Residence at Centech, a Montreal-based technology accelerator that strives to create new sustainable high-tech business startups. [ 11 ] In late 2019, Arless co-founded and became CEO of ViTAA Medical Solutions, a healthcare AI company specializing in 3D mapping of aortic aneurysms that provides surgeons with clinical intelligence to guide therapy. [ 12 ] [ 13 ] Arless was awarded the Ernst & Young Entrepreneur of the Year award in 2005 for Health Sciences. [ 14 ]
https://en.wikipedia.org/wiki/Steven_G_Arless
Steven T. Bramwell (born 7 June 1961) is a British physicist and chemist who works at the London Centre for Nanotechnology and the Department of Physics and Astronomy, University College London . He is known for his experimental discovery of spin ice with M. J. Harris and his calculation of a critical exponent observed in two-dimensional magnets with P. C. W. Holdsworth. A probability distribution for global quantities in complex systems, the "Bramwell–Holdsworth–Pinton (BHP) distribution" (to be implemented in Mathematica [ 3 ] ), is named after him. [ 4 ] In 2009 Bramwell's group was one of several to report experimental evidence of magnetic monopole excitations in spin ice. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] He coined the term "magnetricity" to describe currents of these effective magnetic "monopoles" in condensed-matter systems. [ 10 ] Bramwell studied chemistry at Oxford University , obtaining his PhD in 1989. He was a professor of physical chemistry at University College London from 2000 to 2009, before becoming a Professor in the Department of Physics and Astronomy. Bramwell was awarded the 2010 Holweck Prize of the British Institute of Physics and the Société Française de Physique (SFP) for "pioneering new concepts in the experimental and theoretical study of spin systems". [ 1 ] He shared the 2012 Europhysics Prize of the European Physical Society Condensed Matter Division "for the prediction and experimental observation of magnetic monopoles in spin ice". [ 2 ] He is a Fellow of the Institute of Physics . In 2010 he won the Times Higher Education research project of the year for "magnetricity", [ 11 ] and was named by The Times on their list of the 100 top UK scientists. [ 12 ] In 2019, Bramwell was awarded an honorary doctorate (FDhc) at Uppsala University in recognition of his work on spin ice, two-dimensional magnetism and his longstanding collaboration with researchers at the Ångström Laboratory . [ 13 ]
https://en.wikipedia.org/wiki/Steven_T._Bramwell
The Stevens Award is a software engineering lecture award given by the Reengineering Forum, an industry association. The international Stevens Award was created to recognize outstanding contributions to the literature or practice of methods for software and systems development . The first award was given in 1995. The presentations focus on the current state of software methods and their direction for the future. [ 1 ] This award lecture is named in memory of Wayne Stevens (1944–1993), a consultant, author, pioneer, and advocate of the practical application of software methods and tools. The Stevens Award and lecture are managed by the Reengineering Forum. The award was founded by the International Workshop on Computer Aided Software Engineering (IWCASE), an international workshop association of users and developers of computer-aided software engineering (CASE) technology, which merged into the Reengineering Forum. Wayne Stevens was a charter member of the IWCASE executive board. [ 1 ]
https://en.wikipedia.org/wiki/Stevens_Award
The Stevens rearrangement in organic chemistry is an organic reaction converting quaternary ammonium salts and sulfonium salts to the corresponding amines or sulfides in the presence of a strong base, via a 1,2-rearrangement . [ 1 ] The reactants can be obtained by alkylation of the corresponding amines and sulfides. The substituent R next to the amine methylene bridge is an electron-withdrawing group . The original 1928 publication by Thomas S. Stevens [ 2 ] concerned the reaction of 1-phenyl-2-(N,N-dimethylamino)ethanone with benzyl bromide to give the ammonium salt, followed by the rearrangement reaction with sodium hydroxide in water to give the rearranged amine. A 1932 publication [ 3 ] described the corresponding sulfur reaction. The reaction mechanism of the Stevens rearrangement is one of the most controversial reaction mechanisms in organic chemistry. [ 4 ] Key in the reaction mechanism [ 5 ] [ 6 ] for the Stevens rearrangement (explained for the nitrogen reaction) is the formation of an ylide after deprotonation of the ammonium salt by a strong base. Deprotonation is aided by the electron-withdrawing properties of substituent R. Several reaction modes exist for the actual rearrangement reaction. A concerted reaction requires an antarafacial reaction mode, but since the migrating group displays retention of configuration this mechanism is unlikely. In an alternative reaction mechanism, the N–C bond of the leaving group is homolytically cleaved to form a di-radical pair ( 3a ). In order to explain the observed retention of configuration, the presence of a solvent cage is invoked. Another possibility is the formation of a cation-anion pair ( 3b ), also in a solvent cage. Competing reactions are the Sommelet-Hauser rearrangement and the Hofmann elimination . In one application, a double Stevens rearrangement expands a cyclophane ring. [ 7 ] The ylide is prepared in situ by reaction of the diazo compound ethyl diazomalonate with a sulfide, catalyzed by dirhodium tetraacetate in refluxing xylene . Recently, γ-butyrobetaine hydroxylase , [ 8 ] [ 9 ] an enzyme that is involved in the human carnitine biosynthesis pathway, was found to catalyze a C-C bond formation reaction in a fashion analogous to a Stevens-type rearrangement. [ 8 ] [ 10 ] The substrate for the reaction is meldonium . [ 11 ]
https://en.wikipedia.org/wiki/Stevens_rearrangement
A Stevenson screen or instrument shelter is a shelter or an enclosure used to protect meteorological instruments against precipitation and direct heat radiation from outside sources, while still allowing air to circulate freely around them. [ 1 ] It forms part of a standard weather station and holds instruments that may include thermometers (ordinary, maximum/minimum ), a hygrometer , a psychrometer , a dewcell , a barometer , and a thermograph . Stevenson screens may also be known as a cotton region shelter, an instrument shelter, a thermometer shelter, a thermoscreen, or a thermometer screen. Its purpose is to provide a standardised environment in which to measure temperature, humidity, dewpoint, and atmospheric pressure. It is white in color to reflect direct solar radiation. It was designed by Thomas Stevenson (1818–1887), a Scottish civil engineer who designed many lighthouses , and was the father of author Robert Louis Stevenson . The development of his small thermometer screen with double-louvered walls on all sides and no floor was reported in 1864. [ 2 ] After comparisons with other screens in the United Kingdom , Stevenson's original design was modified. [ 3 ] The modifications by Edward Mawley of the Royal Meteorological Society in 1884 included a double roof, a floor with slanted boards, and a modification of the double louvers. [ 4 ] This design was adopted by the British Meteorological Office and eventually other national services, such as Canada's . The national services developed their own variations, such as the single-louvered Cotton Region design in the United States . [ 5 ] The traditional Stevenson screen is a box shape, constructed of wood, in a double-louvered design. [ 6 ] However, it is possible to construct a screen using other materials and shapes, such as a pyramid. The World Meteorological Organization (WMO) agreed standard for the height of the thermometers is between 1.25 and 2 m (4 ft 1 in and 6 ft 7 in) above the ground. The interior size of the screen will depend on the number of instruments that are to be used. A single screen may measure 76.5 by 61 by 59.3 cm (30.1 by 24.0 by 23.3 in) and a double screen 76.5 by 105 by 59.3 cm (30.1 by 41.3 by 23.3 in). The unit is supported either by four metal or wooden legs or by a wooden post. The top of the screen was originally composed of two asbestos boards with an air space between them. These asbestos boards have generally been replaced by a laminate for health and safety reasons. The whole screen is painted with several coats of white to reflect solar radiation , and usually requires repainting every two years. The siting of the screen is very important, to avoid data degradation by the effects of ground cover, buildings and trees: the WMO's 2010 recommendations, though incomplete, are a sound basis. [ 7 ] In addition, Environment Canada , for example, recommends that the screen be placed at a distance of at least twice the height of any nearby object, e.g. , 20 m (66 ft) from any tree that is 10 m (33 ft) high. In the northern hemisphere, the door of the screen should always face north so as to prevent direct sunlight on the thermometers. In polar regions with twenty-four-hour sunlight, the observer must take care to shield the thermometers from the sun while avoiding a rise in temperature caused by the observer's own body heat. A special type of Stevenson screen with an eye bolt on the roof is used on a ship. The unit is hung from above and remains vertical despite the movement of the vessel. 
In some areas the use of single-unit automatic weather stations is supplanting the Stevenson screen and other standalone meteorological equipment. [ citation needed ]
https://en.wikipedia.org/wiki/Stevenson_screen
A stew pond or stewpond or stew is a fish pond used to store live fish ready for eating. [ 1 ] In medieval Europe, monasteries often maintained attached stews to supply fish over the winter.
https://en.wikipedia.org/wiki/Stew_pond
Stewardship cessation [ 1 ] is a concept useful in system engineering . Certain systems remain hazardous for a considerable period after their useful life, and will usually be managed to ensure that the public and the environment are not exposed to the hazard. It is incumbent on the systems designer to consider the outcome should this stewardship be discontinued for any reason, and to design a system which is as robust as possible in the event of stewardship cessation. The most obvious example is the nuclear industry, and the radioactive waste it generates, which will be a hazard for many centuries. In the present era, most high-level waste is still in actively managed facilities, but various methods are being considered for disposal. Most of the proposed disposal methods are designed to put the waste in a place so isolated from the environment that (it is hoped) immediate stewardship cessation would be safe and appropriate. However, many people are more comfortable with systems where the waste is still accessible, so that if there is an unforeseen problem with the disposal method, the waste can still be accessed to rectify the problem. These systems will still require some level of stewardship, but the system designer must consider that this may not be available for the hundreds of years required. Another example is when geostationary communications satellites reach the end of their useful lives. Ideally, stewardship cessation will occur in a planned manner, where the operator moves the satellite to a somewhat higher orbit to minimise the risk that the satellite will be a collision hazard to other satellites in the geostationary arc (a graveyard burn). Unplanned stewardship cessation will occur if telecommand access to the satellite's systems is cut off due to a failure, for instance in the telecommand receivers. Reasons for stewardship cessation include:
https://en.wikipedia.org/wiki/Stewardship_cessation
In geometry , Stewart's theorem yields a relation between the lengths of the sides and the length of a cevian in a triangle. Its name is in honour of the Scottish mathematician Matthew Stewart , who published the theorem in 1746. [ 1 ] Let a , b , c be the lengths of the sides of a triangle. Let d be the length of a cevian to the side of length a . If the cevian divides the side of length a into two segments of length m and n , with m adjacent to c and n adjacent to b , then Stewart's theorem states that $b^2 m + c^2 n = a(d^2 + mn)$. A common mnemonic used by students to memorize this equation (after rearranging the terms) is $man + dad = bmb + cnc$: "A man and his dad put a bomb in the sink." The theorem may be written more symmetrically using signed lengths of segments. That is, take the length AB to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line. In this formulation, the theorem states that if A , B , C are collinear points and P is any point, then $\overline{PA}^2 \cdot \overline{BC} + \overline{PB}^2 \cdot \overline{CA} + \overline{PC}^2 \cdot \overline{AB} + \overline{BC} \cdot \overline{CA} \cdot \overline{AB} = 0$. In the special case where the cevian is a median (meaning it divides the opposite side into two segments of equal length), the result is known as Apollonius' theorem . The theorem can be proved as an application of the law of cosines . [ 3 ] Let θ be the angle between m and d , and θ′ the angle between n and d . Then θ′ is the supplement of θ , and so cos θ′ = −cos θ . Applying the law of cosines in the two small triangles using angles θ and θ′ produces $c^2 = m^2 + d^2 - 2dm\cos\theta$ and $b^2 = n^2 + d^2 - 2dn\cos\theta' = n^2 + d^2 + 2dn\cos\theta$. Multiplying the first equation by n , multiplying the second by m , and adding them eliminates cos θ . One obtains $b^2 m + c^2 n = nm^2 + n^2 m + (m+n)d^2 = (m+n)(mn + d^2) = a(mn + d^2)$, which is the required equation. Alternatively, the theorem can be proved by drawing a perpendicular from the vertex of the triangle to the base and using the Pythagorean theorem to write the distances b , c , d in terms of the altitude. The left and right hand sides of the equation then reduce algebraically to the same expression. [ 2 ] According to Hutton & Gregory (1843 , p. 220), Stewart published the result in 1746 when he was a candidate to replace Colin Maclaurin as Professor of Mathematics at the University of Edinburgh. Coxeter & Greitzer (1967 , p. 6) state that the result was probably known to Archimedes around 300 B.C.E. They go on to say (mistakenly) that the first known proof was provided by R. Simson in 1751. Hutton & Gregory (1843) state that the result was used by Simson in 1748 and by Simpson in 1752, and that its first appearance in Europe [ clarification needed ] was given by Lazare Carnot in 1803.
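As a quick numerical check of the formula (an illustrative example, not part of the original article), consider the triangle with sides 13, 14, 15 and the cevian chosen as the median to the side of length a = 14, so that b = 15, c = 13 and m = n = 7:

$b^2 m + c^2 n = 15^2 \cdot 7 + 13^2 \cdot 7 = 7(225 + 169) = 2758$

$a(d^2 + mn) = 14(d^2 + 49) = 2758 \quad\Rightarrow\quad d^2 = 197 - 49 = 148, \qquad d = \sqrt{148} = 2\sqrt{37} \approx 12.17$

This agrees with the standard median-length formula $m_a = \tfrac{1}{2}\sqrt{2b^2 + 2c^2 - a^2} = \tfrac{1}{2}\sqrt{592} = \sqrt{148}$.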
https://en.wikipedia.org/wiki/Stewart's_theorem
In fluid dynamics , a Stewartson layer is a thin cylindrical shear layer that connects two differentially rotating regions in the radial direction, namely the inside and outside of the cylinder. The Stewartson layer, typically, also connects different Ekman boundary layers in the axial direction. The layer was first identified by Ian Proudman [ 1 ] and was first described by Keith Stewartson . [ 2 ] [ 3 ] This layer should be compared with the Ekman layer which occurs near solid boundaries. [ 4 ] The Stewartson layer is not elementary but possesses a complex structure and emerges when the relevant Ekman number satisfies $\mathrm{Ek} = \nu / \Omega L^2 \ll 1$; here $\nu$ is the kinematic viscosity , and $\Omega$ and $L$ are the characteristic scales for the angular speed and length. The fundamental balance that occurs in the Stewartson shear layer is between Coriolis forces and viscous forces. For simplicity, consider the example of two concentric spheres that rotate about a common axis with slightly different angular velocities. The fluid domain corresponds to the annular region. In this problem, the Stewartson layer emerges as a cylinder $\mathcal{D}$ circumscribing the inner sphere with its generators lying parallel to the rotation axis. Outside $\mathcal{D}$, the fluid rotates as a solid body at the speed of the outer sphere. Inside $\mathcal{D}$ (in the annular region), the fluid again rotates as a solid body, except near the inner and outer sphere walls, where Ekman boundary layers of thickness $\mathrm{Ek}^{1/2}$ are set up that help the flow adjust from uniform rotation to the respective rotation rates at the solid walls. Across $\mathcal{D}$, there is a jump in the azimuthal velocity, and on $\mathcal{D}$, there is an axial flow connecting the two Ekman layers. The structure of $\mathcal{D}$ is the Stewartson layer. The Stewartson layer consists of two outer layers, one on the inner side of $\mathcal{D}$ with thickness $\mathrm{Ek}^{2/7}$ and one on the outer side of $\mathcal{D}$ with thickness $\mathrm{Ek}^{1/4}$; these outer layers flank a thin inner layer of thickness $\mathrm{Ek}^{1/3}$. The differential rotation between the inside and outside of $\mathcal{D}$ is smoothed out in the outer layers (primarily in the outer layer lying on the outer side of $\mathcal{D}$). The adjustment of azimuthal motion in the outer layers induces a secondary axial flow. The inner layer becomes necessary partly to accommodate this induced axial motion and partly to accommodate the transport of flow from one Ekman boundary layer to the other (from the Ekman layer on the faster-rotating sphere to the one on the slower sphere). Note that the thickness of the Ekman layer is $\mathrm{Ek}^{1/2}$, which is much smaller than the inner Stewartson layer. In the inner layer, the change in azimuthal velocity is very small, because the outer layers have already smoothed out the jump in azimuthal velocity. In addition, the outer layers (again primarily the outer layer lying on the outer side of the cylinder) also transport flow axially from the faster-rotating sphere to the slower one. 
In cylindrical geometries, the thickness of both outer layers is $\mathrm{Ek}^{1/4}$ and the thickness of the inner layer is $\mathrm{Ek}^{1/3}$. [ 5 ]
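For a rough sense of the scalings (an illustrative calculation with assumed values, not taken from the original article): for water, with $\nu \approx 10^{-6}\,\mathrm{m^2/s}$, $\Omega = 1\,\mathrm{rad/s}$ and $L = 0.1\,\mathrm{m}$,

$\mathrm{Ek} = \frac{\nu}{\Omega L^2} = \frac{10^{-6}}{1 \times 10^{-2}} = 10^{-4}, \qquad \mathrm{Ek}^{1/4} \approx 0.1, \qquad \mathrm{Ek}^{1/3} \approx 0.046, \qquad \mathrm{Ek}^{1/2} = 0.01$

so the outer Stewartson layers span roughly 10% of $L$, the inner layer about 5%, and the Ekman layers about 1%, consistent with the ordering $\mathrm{Ek}^{1/2} \ll \mathrm{Ek}^{1/3} \ll \mathrm{Ek}^{1/4}$.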
https://en.wikipedia.org/wiki/Stewartson_layer
The Stewart–Walker lemma provides necessary and sufficient conditions for the linear perturbation of a tensor field to be gauge -invariant. $\Delta \delta T = 0$ if and only if one of the following holds: 1. $T_0 = 0$; 2. $T_0$ is a constant scalar field; 3. $T_0$ is a linear combination of products of delta functions $\delta_a^b$. A 1-parameter family of manifolds denoted by $\mathcal{M}_\epsilon$ with $\mathcal{M}_0 = \mathcal{M}^4$ has metric $g_{ik} = \eta_{ik} + \epsilon h_{ik}$. These manifolds can be put together to form a 5-manifold $\mathcal{N}$. A smooth curve $\gamma$ can be constructed through $\mathcal{N}$ with tangent 5-vector $X$, transverse to $\mathcal{M}_\epsilon$. $X$ is defined so that if $h_t$ is the family of 1-parameter maps which map $\mathcal{N} \to \mathcal{N}$ and $p_0 \in \mathcal{M}_0$, then a point $p_\epsilon \in \mathcal{M}_\epsilon$ can be written as $h_\epsilon(p_0)$. This also defines a pull back $h_\epsilon^*$ that maps a tensor field $T_\epsilon \in \mathcal{M}_\epsilon$ back onto $\mathcal{M}_0$. Given sufficient smoothness, a Taylor expansion can be defined, and $\delta T = \epsilon h_\epsilon^* (\mathcal{L}_X T_\epsilon) \equiv \epsilon (\mathcal{L}_X T_\epsilon)_0$ is the linear perturbation of $T$. However, since the choice of $X$ is dependent on the choice of gauge , another gauge can be taken. Therefore the difference between gauges becomes $\Delta \delta T = \epsilon (\mathcal{L}_X T_\epsilon)_0 - \epsilon (\mathcal{L}_Y T_\epsilon)_0 = \epsilon (\mathcal{L}_{X-Y} T_\epsilon)_0$. Picking a chart where $X^a = (\xi^\mu, 1)$ and $Y^a = (0, 1)$, then $X^a - Y^a = (\xi^\mu, 0)$, which is a well defined vector in any $\mathcal{M}_\epsilon$ and gives the result $\Delta \delta T = \epsilon \, (\mathcal{L}_\xi T)_0$. The only three possible ways this can vanish for arbitrary $\xi$ are those of the lemma.
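For instance (a one-line check, not in the original text), condition 2 follows immediately: if $T_0$ is a constant scalar field, then for every $\xi$

$\Delta \delta T = \epsilon \, \mathcal{L}_\xi T_0 = \epsilon \, \xi^a \nabla_a T_0 = 0.$

By contrast, the background metric is not gauge-invariant in general, since $\mathcal{L}_\xi g_{ab} = \nabla_a \xi_b + \nabla_b \xi_a$ need not vanish.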
https://en.wikipedia.org/wiki/Stewart–Walker_lemma
Excess-3 , 3-excess [ 1 ] [ 2 ] [ 3 ] or 10-excess-3 binary code (often abbreviated as XS-3 , [ 4 ] 3XS [ 1 ] or X3 [ 5 ] [ 6 ] ), shifted binary [ 7 ] or Stibitz code [ 1 ] [ 2 ] [ 8 ] [ 9 ] (after George Stibitz , [ 10 ] who built a relay-based adding machine in 1937 [ 11 ] [ 12 ] ) is a self-complementary binary-coded decimal (BCD) code and numeral system . It is a biased representation . Excess-3 code was used on some older computers as well as in cash registers and hand-held portable electronic calculators of the 1970s, among other uses. Biased codes are a way to represent values with a balanced number of positive and negative numbers using a pre-specified number N as a biasing value. Biased codes (and Gray codes ) are non-weighted codes. In excess-3 code, numbers are represented as decimal digits, and each digit is represented by four bits as the digit value plus 3 (the "excess" amount): 0 is encoded as 0011, 1 as 0100, and so on, up to 9, which is encoded as 1100. To encode a number such as 127, one simply encodes each of the decimal digits as above, giving (0100, 0101, 1010). Excess-3 arithmetic uses different algorithms from normal non-biased BCD or binary positional system numbers. After adding two excess-3 digits, the raw sum is excess-6. For instance, after adding 1 (0100 in excess-3) and 2 (0101 in excess-3), the sum looks like 6 (1001 in excess-3) instead of 3 (0110 in excess-3). To correct this problem, after adding two digits, it is necessary to remove the extra bias by subtracting binary 0011 (decimal 3 in unbiased binary) if the resulting digit is less than decimal 10, or subtracting binary 1101 (decimal 13 in unbiased binary) if an overflow (carry) has occurred. (In 4-bit binary, subtracting binary 1101 is equivalent to adding 0011 and vice versa.) [ 14 ] The primary advantage of excess-3 coding over non-biased coding is that a decimal number can be nines' complemented [ 1 ] (for subtraction) as easily as a binary number can be ones' complemented : just by inverting all bits. [ 1 ] Also, when the sum of two excess-3 digits is greater than 9, the carry bit of a 4-bit adder will be set high. This works because, after adding two digits, an "excess" value of 6 results in the sum. Because a 4-bit integer can only hold values 0 to 15, an excess of 6 means that any sum over 9 will overflow (produce a carry-out). Another advantage is that the codes 0000 and 1111 are not used for any digit. A fault in a memory or basic transmission line may result in these codes. It is also more difficult to write the zero pattern to magnetic media. [ 1 ] [ 15 ] [ 11 ] A BCD 8-4-2-1 to excess-3 converter (often written in a hardware description language such as VHDL) simply adds 3 to each BCD digit; a software sketch of the same logic follows.
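A minimal C sketch of excess-3 encoding and digit addition with the correction step described above (function names are illustrative, not from any standard library):

    #include <assert.h>
    #include <stdint.h>

    /* Encode one decimal digit (0-9) in excess-3: the digit value plus 3. */
    static uint8_t xs3_encode(uint8_t digit) { return digit + 3; }

    /* Add two excess-3 digits. The raw sum carries an excess of 6, so it is
       corrected by subtracting 3 when no carry occurred, or adding 3 (the
       same as subtracting 13 modulo 16) when a carry did occur. */
    static uint8_t xs3_add(uint8_t a, uint8_t b, uint8_t *carry) {
        uint8_t raw = a + b;        /* excess-6 result, possibly >= 16 */
        *carry = raw >> 4;          /* carry out of the 4-bit digit */
        raw &= 0x0F;
        return *carry ? raw + 3 : raw - 3;
    }

    int main(void) {
        uint8_t carry;
        /* 1 + 2: 0100 + 0101 = 1001 raw, corrected to 0110 = excess-3 for 3 */
        assert(xs3_add(xs3_encode(1), xs3_encode(2), &carry) == xs3_encode(3));
        assert(carry == 0);
        /* 7 + 5 = 12: carry out, remaining digit 2 */
        assert(xs3_add(xs3_encode(7), xs3_encode(5), &carry) == xs3_encode(2));
        assert(carry == 1);
        return 0;
    }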
https://en.wikipedia.org/wiki/Stibitz_code
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two. Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ] Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: 011 versus 100. The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value. This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes that assign to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes. In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process". In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off ( ... 11001100 ... ); the next digit a pattern of 4 on, 4 off; the i -th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. 
The four-bit version of this is as follows: 0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000. For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ] Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ] The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ] Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the Frenchman Louis Gros in 1872. [ 26 ] [ 13 ] It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the Frenchman Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ] Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ] The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension. When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to a 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ] About the same time, in 1874, the German-Austrian Otto Schäffler [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose. [ 43 ] [ 13 ] Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. 
Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ] Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose. Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others. For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals. Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking. In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position. Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties. Gray codes have also been used, since 1953, in labelling the axes of Karnaugh maps [ 45 ] [ 46 ] [ 47 ] as well as, since 1958, in Händler circle graphs, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization . In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before error correction is applied. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise . 
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered to be operating in different "clock domains". The technique is fundamental to the design of large chips that operate with many different clocking frequencies. If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves. A balanced Gray code can be constructed [ 52 ] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit. George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941. [ 11 ] [ 12 ] [ 13 ] A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used. Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. 
[ 54 ] To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ] As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ] The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list ( 00, 01, 11, 10 ): the reflected list is 10, 11, 01, 00 ; prefixing gives 000, 001, 011, 010 and 110, 111, 101, 100 ; concatenation yields 000, 001, 011, 010, 110, 111, 101, 100 . The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n+1 from G n makes a number of properties of the standard reflecting code clear. These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing $n \oplus \lfloor n/2 \rfloor$. Prepending a 0 bit leaves the order of the code words unchanged; prepending a 1 bit reverses the order of the code words. If the bits at position $i$ of codewords are inverted, the order of neighbouring blocks of $2^i$ codewords is reversed. For example, if bit 0 is inverted in a 3-bit codeword sequence, the order of two neighbouring codewords is reversed; if bit 1 is inverted, blocks of 2 codewords change order; if bit 2 is inverted, blocks of 4 codewords reverse order. Thus, performing an exclusive or on a bit $b_i$ at position $i$ with the bit $b_{i+1}$ at position $i+1$ leaves the order of codewords intact if $b_{i+1} = 0$, and reverses the order of blocks of $2^{i+1}$ codewords if $b_{i+1} = 1$. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code (a sketch of this construction follows below). A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit, so it cannot be performed in parallel. 
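As an illustration of the reflect-and-prefix construction just described (a minimal C sketch, assuming the caller supplies an array of 2^n entries; not a listing from the original article):

    /* Build the n-bit binary-reflected Gray code list recursively:
       take the (n-1)-bit list, append its reflection, and set bit n-1
       in the reflected half (the original half keeps its leading 0). */
    static void brgc(unsigned n, unsigned codes[], unsigned *len) {
        if (n == 0) { codes[0] = 0; *len = 1; return; }
        unsigned half;
        brgc(n - 1, codes, &half);
        for (unsigned i = 0; i < half; i++)
            codes[2 * half - 1 - i] = codes[i] | (1u << (n - 1));
        *len = 2 * half;
    }

For n = 3 this fills the array with 0, 1, 3, 2, 6, 7, 5, 4, i.e. 000, 001, 011, 010, 110, 111, 101, 100, matching the list above; entry i also equals i ^ (i >> 1).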
Assuming $g_i$ is the i -th Gray-coded bit ( $g_0$ being the most significant bit), and $b_i$ is the i -th binary-coded bit ( $b_0$ being the most significant bit), the reverse translation can be given recursively: $b_0 = g_0$, and $b_i = g_i \oplus b_{i-1}$. Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two. To construct the binary-reflected Gray code iteratively, at step 0 start with the code 0 , and at step $i > 0$ find the bit position of the least significant 1 in the binary representation of $i$ and flip the bit at that position in the previous code to get the next code. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values. Functions in C converting between binary numbers and their associated Gray codes are sketched below. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ] On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation. In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word). It is possible to construct binary Gray codes with n bits with a length of less than 2^n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits. There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings. For example, a 3-ary ( ternary ) Gray code would use the values 0, 1, 2. [ 31 ] The reflected ternary Gray code for three digits maps ternary numbers to codewords as follows: 0 → 000, 1 → 001, 2 → 002, 10 → 012, 11 → 011, 12 → 010, 20 → 020, 21 → 021, 22 → 022, 100 → 122, 101 → 121, 102 → 120, 110 → 110, 111 → 111, 112 → 112, 120 → 102, 121 → 101, 122 → 100, 200 → 200, 201 → 201, 202 → 202, 210 → 212, 211 → 211, 212 → 210, 220 → 220, 221 → 221, 222 → 222. The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00, 01, 02, 12, 11, 10, 20, 21, 22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively ; a sketch of an iterative algorithm in C is given below. There are other Gray code algorithms for ( n , k )-Gray codes. 
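A minimal sketch of the standard conversion functions (the original article's C listings are not reproduced above; names are illustrative):

    #include <stdint.h>

    /* Binary to binary-reflected Gray code: each bit is XORed with the
       next higher bit, i.e. the value n ^ floor(n/2). */
    uint32_t binary_to_gray(uint32_t num) {
        return num ^ (num >> 1);
    }

    /* Gray code back to binary: bit i of the result is the XOR of all
       Gray bits at position i and above (a prefix sum modulo 2). */
    uint32_t gray_to_binary(uint32_t num) {
        for (uint32_t mask = num >> 1; mask != 0; mask >>= 1)
            num ^= mask;
        return num;
    }

With these, the Gray-counter increment described earlier is simply binary_to_gray(gray_to_binary(g) + 1). And one possible iterative (n, k)-Gray generator in the spirit described above (a sketch, not necessarily the article's exact listing); stepping value by 1 changes exactly one digit, possibly wrapping from n − 1 to 0, so the resulting code is cyclic:

    /* Write the k-digit base-n Gray code of value into gray[0..k-1],
       least significant digit first (assumes k <= 32). */
    void to_gray_nk(unsigned n, unsigned k, unsigned value, unsigned gray[]) {
        unsigned base_n[32];                 /* ordinary base-n digits of value */
        for (unsigned i = 0; i < k; i++) {
            base_n[i] = value % n;
            value /= n;
        }
        unsigned shift = 0;                  /* running offset from higher digits */
        for (unsigned i = k; i-- > 0; ) {
            gray[i] = (base_n[i] + shift) % n;
            shift += n - gray[i];            /* keep the offset non-negative */
        }
    }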
The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one. Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods. See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation. Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the numbers of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence $(\delta_k)$; the transition counts ( spectrum ) of G are the collection of integers defined by $\lambda_k = |\{ j \in \mathbb{Z}_{R^n} : \delta_j = k \}|$, for $k \in \mathbb{Z}_n$. A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have $\lambda_k = R^n / n$ for all k . Clearly, when $R = 2$, such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either $2\lfloor 2^n / (2n) \rfloor$ or $2\lceil 2^n / (2n) \rceil$. [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ] For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced, [ 52 ] whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions: four positions have six transitions each, and one has eight. [ 52 ] We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code $G'$ given an n -digit Gray code $G$ in such a way that the balanced property is preserved. 
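To make the transition spectrum concrete, a small helper (an illustrative sketch, not from the article) that counts, for each bit position, how often that bit flips around a cyclic binary code:

    /* Count per-position transitions of a cyclic binary code given as a
       list of len codewords of n bits each. For a uniformly balanced
       4-bit Gray code the result is {4, 4, 4, 4}. */
    void transition_spectrum(const unsigned code[], unsigned len,
                             unsigned n, unsigned spectrum[]) {
        for (unsigned k = 0; k < n; k++)
            spectrum[k] = 0;
        for (unsigned j = 0; j < len; j++) {
            /* For a Gray code exactly one bit is set in diff. */
            unsigned diff = code[j] ^ code[(j + 1) % len];
            for (unsigned k = 0; k < n; k++)
                if (diff & (1u << k))
                    spectrum[k]++;
        }
    }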
To do this, we consider partitions of $G = g_0, \ldots, g_{2^n - 1}$ into an even number L of non-empty blocks of the form $\{ g_0 \}, \{ g_1, \ldots, g_{k_2} \}, \{ g_{k_2+1}, \ldots, g_{k_3} \}, \ldots, \{ g_{k_{L-2}+1}, \ldots, g_{-2} \}, \{ g_{-1} \}$, where $k_1 = 0$, $k_{L-1} = -2$, and $k_L \equiv -1 \pmod{2^n}$ (indices are taken modulo $2^n$). This partition induces an $(n+2)$-digit Gray code. If we define the transition multiplicities $m_i = |\{ j : \delta_{k_j} = i,\ 1 \le j \le L \}|$ to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum $\lambda'_i$ is $\lambda'_i = 4\lambda_i - 2m_i$ if $0 \le i < n$, and $\lambda'_i = L$ otherwise. The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit i transition and splitting another block at another digit i transition produces a different Gray code with exactly the same transition spectrum $\lambda'_i$, so one may for example [ 65 ] designate the first $m_i$ transitions at digit i as those that fall between two blocks. Uniform codes can be found when $R \equiv 0 \pmod 4$ and $R^n \equiv 0 \pmod n$, and this construction can be extended to the R -ary case as well. [ 66 ] Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ] Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one. We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube $Q_n = (V_n, E_n)$ into levels of vertices that have equal weight, i.e. $V_n(i) = \{ v \in V_n : v \text{ has weight } i \}$ for $0 \le i \le n$. These levels satisfy $|V_n(i)| = \binom{n}{i}$. Let $Q_n(i)$ be the subgraph of $Q_n$ induced by $V_n(i) \cup V_n(i+1)$, and let $E_n(i)$ be the edges in $Q_n(i)$. 
A monotonic Gray code is then a Hamiltonian path in Q n {\displaystyle Q_{n}} such that whenever δ 1 ∈ E n ( i ) {\displaystyle \delta _{1}\in E_{n}(i)} comes before δ 2 ∈ E n ( j ) {\displaystyle \delta _{2}\in E_{n}(j)} in the path, then i ≤ j {\displaystyle i\leq j} . An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths P n , j {\displaystyle P_{n,j}} of length 2 ( n j ) {\displaystyle 2\textstyle {\binom {n}{j}}} having edges in E n ( j ) {\displaystyle E_{n}(j)} . [ 69 ] We define P 1 , 0 = ( 0 , 1 ) {\displaystyle P_{1,0}=({\mathtt {0}},{\mathtt {1}})} , P n , j = ∅ {\displaystyle P_{n,j}=\emptyset } whenever j < 0 {\displaystyle j<0} or j ≥ n {\displaystyle j\geq n} , and P n + 1 , j = 1 P n , j − 1 π n , 0 P n , j {\displaystyle P_{n+1,j}={\mathtt {1}}P_{n,j-1}^{\pi _{n}},{\mathtt {0}}P_{n,j}} otherwise. Here, π n {\displaystyle \pi _{n}} is a suitably defined permutation and P π {\displaystyle P^{\pi }} refers to the path P with its coordinates permuted by π {\displaystyle \pi } . These paths give rise to two monotonic n -digit Gray codes G n ( 1 ) {\displaystyle G_{n}^{(1)}} and G n ( 2 ) {\displaystyle G_{n}^{(2)}} given by G n ( 1 ) = P n , 0 P n , 1 R P n , 2 P n , 3 R ⋯ and G n ( 2 ) = P n , 0 R P n , 1 P n , 2 R P n , 3 ⋯ {\displaystyle G_{n}^{(1)}=P_{n,0}P_{n,1}^{R}P_{n,2}P_{n,3}^{R}\cdots {\text{ and }}G_{n}^{(2)}=P_{n,0}^{R}P_{n,1}P_{n,2}^{R}P_{n,3}\cdots } The choice of π n {\displaystyle \pi _{n}} which ensures that these codes are indeed Gray codes turns out to be π n = E − 1 ( π n − 1 2 ) {\displaystyle \pi _{n}=E^{-1}\left(\pi _{n-1}^{2}\right)} . The first few values of P n , j {\displaystyle P_{n,j}} are shown in the table below. These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines . Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q 2 n + 1 ( n ) {\displaystyle Q_{2n+1}(n)} is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n ≤ 15 {\displaystyle n\leq 15} , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 ‍ N , where N is the number of vertices in the middle-level subgraph. [ 70 ] Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. 
Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ] Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension. Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ] The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts. If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC. 
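The two defining conditions of an STGC are easy to test mechanically. The following Python sketch (illustrative, not taken from the cited papers) checks both the Gray condition and the single-track condition; the 2-bit reflected Gray cycle passes, and it corresponds exactly to the 2-sensor, 1-track quadrature encoder discussed below:

    def is_single_track_gray(words, n):
        """Check that consecutive words (cyclically) differ in exactly one
        bit, and that every bit track is a cyclic shift of track 0."""
        P = len(words)
        if any(bin(words[i] ^ words[(i + 1) % P]).count("1") != 1
               for i in range(P)):
            return False
        def track(j):
            return [(w >> j) & 1 for w in words]
        t0 = track(0)
        shifts = [t0[s:] + t0[:s] for s in range(P)]
        return all(track(j) in shifts for j in range(1, n))

    # The 2-bit reflected Gray cycle 00, 01, 11, 10 is itself single-track:
    print(is_single_track_gray([0b00, 0b01, 0b11, 0b10], 2))  # True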
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders. Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions. [ 80 ] The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors. In an STGC for P = 30 and n = 5, each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ] Since this 30 degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single track Gray code that gives a 1 degree resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022. An STGC for P = 360 and n = 9 has also been constructed. Two-dimensional Gray codes are used in communication to minimize the number of bit errors between adjacent points in the quadrature amplitude modulation (QAM) constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ] Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric is used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is increased further. The reason is that Gray-encoded values do not exhibit the overflow behaviour, known from classic binary encoding, when increased past the "highest" value. Example: The highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in number 8, encoded in Gray as 1100. The last 3 bits do not overflow and count backwards if the original 4-bit code is increased further. When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in a single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected. The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} with the metric given by the Hamming distance and the metric space over the finite ring Z 4 {\displaystyle \mathbb {Z} _{4}} (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces Z 2 2 m {\displaystyle \mathbb {Z} _{2}^{2m}} and Z 4 m {\displaystyle \mathbb {Z} _{4}^{m}} . Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} of ring-linear codes from Z 4 {\displaystyle \mathbb {Z} _{4}} . [ 87 ] [ 88 ] There are a number of binary codes similar to Gray codes. Several binary-coded decimal (BCD) codes are Gray code variants as well.
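The isometry property of this Gray map is small enough to verify exhaustively; the following Python sketch (a direct check of the statement above, assuming nothing beyond it) confirms that the Lee distance on Z 4 equals the Hamming distance between the mapped bit pairs, and applying the map coordinate-wise extends the check to Z 4 m :

    from itertools import product

    GRAY_MAP = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

    def lee(a, b, q=4):
        """Lee distance on the ring Z_q."""
        d = (a - b) % q
        return min(d, q - d)

    def hamming(u, v):
        return sum(x != y for x, y in zip(u, v))

    assert all(lee(a, b) == hamming(GRAY_MAP[a], GRAY_MAP[b])
               for a, b in product(range(4), repeat=2))
    print("Lee distance on Z4 matches Hamming distance on the images")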
https://en.wikipedia.org/wiki/Stibitz–Gray_code
Stibnite , sometimes called antimonite , is a sulfide mineral , a mineral form of antimony trisulfide ( Sb 2 S 3 ). It is a soft, metallic grey crystalline solid with an orthorhombic space group. [ 6 ] It is the most important source for the metalloid antimony . [ 7 ] The name is derived from the Greek στίβι stibi through the Latin stibium as the former name for the mineral and the element antimony. [ 3 ] [ 4 ] Stibnite has a structure similar to that of arsenic trisulfide, As 2 S 3 . The Sb(III) centers, which are pyramidal and three-coordinate, are linked via bent two-coordinate sulfide ions. However, some studies suggest that the actual coordination polyhedra of antimony are SbS 7 , with (3+4) coordination at the M1 site and (5+2) at the M2 site. Some of the secondary bonds impart cohesion and are connected with packing. [ 8 ] Stibnite is grey when fresh, but can turn superficially black due to oxidation in air. The melting point of Sb 2 S 3 is 823 K (550 °C; 1,022 °F). [ 9 ] The band gap is 1.88 eV at room temperature and it is a photoconductor. [ 10 ] Stibnite is also toxic upon ingestion, with symptoms similar to those of arsenic poisoning . [ 11 ] Pastes of Sb 2 S 3 powder in fat [ 12 ] or in other materials have been used since c. 3000 BC as eye cosmetics in the Mediterranean and farther afield; in this use, Sb 2 S 3 is called kohl . It was used to darken the brows and lashes, or to draw a line around the perimeter of the eye. [ 13 ] Antimony trisulfide finds use in pyrotechnic compositions , namely in the glitter and fountain mixtures. Needle-like crystals, "Chinese needles", are used in glitter compositions and white pyrotechnic stars . The "dark pyro" version is used in flash powders to increase their sensitivity and sharpen their report. It is also a component of modern safety matches . It was formerly used in flash compositions, but its use was abandoned due to toxicity and sensitivity to static electricity . [ 14 ] Stibnite has been used since protodynastic ancient Egypt as a medication and a cosmetic. [ 13 ] The Sunan Abi Dawood reports, “prophet Muhammad said: 'Among the best types of collyrium is antimony ( ithmid ) for it clears the vision and makes the hair sprout. ' " [ 15 ] The 17th-century alchemist Eirenaeus Philalethes , also known as George Starkey, describes stibnite in his alchemical commentary An Exposition upon Sir George Ripley's Epistle . Starkey used stibnite as a precursor to philosophical mercury, which was itself a hypothetical precursor to the philosopher's stone . [ 16 ] Stibnite occurs in hydrothermal deposits and is associated with realgar , orpiment , cinnabar , galena , pyrite , marcasite , arsenopyrite , cervantite , stibiconite , calcite , ankerite , barite and chalcedony . [ 3 ] Small deposits of stibnite are common, but large deposits are rare. The world's largest deposit of antimony, the Xikuangshan mine , yields high-quality crystals in paragenesis with calcite . It occurs in Canada , Mexico , Peru , Japan , Germany , Romania , Italy , France , England , Algeria , and Kalimantan , Borneo . In the United States it is found in Arkansas , Idaho , Nevada , California , and Alaska . Historically, the Romans used stibnite mined in Dacia to make colourless glass, the making of which ended when this province was lost to the Roman Empire. [ 17 ] As of May 2007, the largest specimen on public display (1000 pounds) is at the American Museum of Natural History .
[ 18 ] [ 19 ] The largest documented single crystals of stibnite measured ~60×5×5 cm and originated from different locations including Japan, France and Germany. [ 20 ]
https://en.wikipedia.org/wiki/Stibnite
Stibole is a theoretical heterocyclic organic compound , a five-membered ring with the formula C 4 H 4 Sb H. It is classified as a metallole . It can be viewed as a structural analog of pyrrole , with antimony replacing the nitrogen atom. Stibole itself is very rare, but many substituted derivatives have been synthesized. They are called stiboles . Pentaphenylstibole is prepared from 1,4-dilithio-1,2,3,4-tetraphenylbutadiene and phenylantimony dichloride by a salt metathesis reaction . [ 1 ] 2,5-Dimethyl-1-phenyl-1 H -stibole, for example, can be formed by the reaction of 1,1-dibutyl-2,5-dimethyl stannole and dichlorophenylstibine. [ 2 ] Stiboles can be used to form ferrocene -like sandwich compounds . [ 3 ]
https://en.wikipedia.org/wiki/Stibole
A sticker is a detailed illustration of a character representing an emotion or action; a mix of cartoons and Japanese smiley-like " emojis ", stickers are sent through instant messaging platforms. They have more variety than emoticons and have roots in internet "reaction face" culture due to their ability to portray body language with a facial reaction. Stickers are elaborate, character-driven emoticons and give people a lightweight means to communicate through kooky animations. [ 1 ] Stickers were first popularized by the Korean-developed mobile messaging app Line . Naver developed the app with the Japanese market in mind, as KakaoTalk was already the dominant mobile messaging service in South Korea. The stickers' blend of the ubiquitous emoji system with anime -styled artwork, and their use as a substitute for typing out longer messages in Japanese text, helped the feature appeal to Japanese audiences. As Line's dominance grew, the mascot characters featured within Line's sticker sets also became popular as merchandise. [ 2 ] In 2013, stickers began to expand beyond Asian markets: Path added stickers in March 2013 as part of its new private messaging system, [ 3 ] followed by Facebook 's main and Facebook Messenger mobile apps in April. In July, sticker functionality was extended to Facebook's web interface, [ 4 ] [ 5 ] while Kik Messenger also added stickers. [ 6 ] Startup companies devoted to stickers also emerged, helping produce them on behalf of brands as part of advertising campaigns. [ 7 ] By November 2013, a survey of mobile messaging users found that 40% of those surveyed used stickers on a daily basis, with Indonesians showing the highest amount of daily usage (46%), followed by China (43%), South Korea (38%) and the United States (35%). Out of those who did regularly use stickers, 20% had paid for stickers or emoji in mobile messaging apps at least once. [ 8 ] In 2016, Snapchat acquired Bitstrips for its app Bitmoji, which allows users to create custom stickers featuring a personal avatar . [ 9 ] [ 10 ] [ 11 ] In July 2019, Telegram introduced animated stickers, using a new format .tgs, with support for third-party sticker packs. [ 12 ] In December, [ 13 ] Signal introduced sticker packs in PNG and WebP formats and APNG for animation. [ 14 ] Later, in July 2020, WhatsApp implemented animated stickers with several official packs. [ 15 ] In June 2021, Discord added the capability for premium users to use animated stickers. [ 16 ] In September 2023, Instagram and Facebook Messenger introduced the ability to add AI-generated stickers. [ 17 ] Within a week, there were reports of people issuing inappropriate prompts involving celebrities or copyrighted characters, such as Elmo holding a knife. [ 18 ] With iOS 17, Apple rolled out an image-to-sticker feature, which allows users to long-press an object in an image to cut it from the background and save it as a digital sticker that can be used in messaging or photos. [ 19 ] Stickers are commonly downloadable for free, but some services may offer premium options via microtransactions , often described as stickers as a service . Sets may be devoted to specific themes or characters, as well as popular brands and media franchises such as Hello Kitty , Psy , and the Minions of Despicable Me . [ 1 ] [ 20 ]
https://en.wikipedia.org/wiki/Sticker_(messaging)
Sticking coefficient is the term used in surface physics to describe the ratio of the number of adsorbate atoms (or molecules ) that adsorb , or "stick", to a surface to the total number of atoms that impinge upon that surface during the same period of time . [ 1 ] Sometimes the symbol S c is used to denote this coefficient , and its value is between 1 (all impinging atoms stick) and 0 (no atoms stick). The coefficient is a function of surface temperature , surface coverage (θ) and structural details as well as the kinetic energy of the impinging particles. The original formulation was for molecules adsorbing from the gas phase and the equation was later extended to adsorption from the liquid phase by comparison with molecular dynamics simulations. [ 2 ] For use in adsorption from liquids the equation is expressed based on solute density (molecules per volume) rather than the pressure. When arriving at a site of a surface, an adatom has three options. There is a probability that it will adsorb to the surface ( P a {\displaystyle P_{a}} ), a probability that it will migrate to another site on the surface ( P m {\displaystyle P_{m}} ), and a probability that it will desorb from the surface and return to the bulk gas ( P d {\displaystyle P_{d}} ). For an empty site (θ=0) the sum of these three options is unity: P a + P m + P d = 1 {\displaystyle P_{a}+P_{m}+P_{d}=1} . For a site already occupied by an adatom (θ>0), there is no probability of adsorbing, and so the probabilities sum as P m ′ + P d ′ = 1 {\displaystyle P_{m}'+P_{d}'=1} . For the first site visited, the P of migrating overall is the P of migrating if the site is filled plus the P of migrating if the site is empty, θ P m ′ + ( 1 − θ ) P m {\displaystyle \theta P_{m}'+(1-\theta )P_{m}} . The same is true for the P of desorption. The P of adsorption, however, does not exist for an already filled site. The P of migrating from the second site is the P of migrating from the first site and then migrating from the second site, and so we multiply the two values. Thus the sticking probability ( s c {\displaystyle s_{c}} ) is the P of sticking of the first site, plus the P of migrating from the first site and then sticking to the second site, plus the P of migrating from the second site and then sticking at the third site, etc.: s c = ( 1 − θ ) P a ∑ j = 0 ∞ [ θ P m ′ + ( 1 − θ ) P m ] j {\displaystyle s_{c}=(1-\theta )P_{a}\sum _{j=0}^{\infty }\left[\theta P_{m}'+(1-\theta )P_{m}\right]^{j}} There is an identity we can make use of, the geometric series ∑ j = 0 ∞ x j = 1 / ( 1 − x ) {\displaystyle \textstyle \sum _{j=0}^{\infty }x^{j}=1/(1-x)} for | x | < 1 , which gives s c = ( 1 − θ ) P a 1 − θ P m ′ − ( 1 − θ ) P m {\displaystyle s_{c}={\frac {(1-\theta )P_{a}}{1-\theta P_{m}'-(1-\theta )P_{m}}}} The sticking coefficient when the coverage is zero s 0 {\displaystyle s_{0}} can be obtained by simply setting θ = 0 {\displaystyle \theta =0} : s 0 = P a / ( 1 − P m ) {\displaystyle s_{0}=P_{a}/(1-P_{m})} . We also remember that P a + P m + P d = 1 {\displaystyle P_{a}+P_{m}+P_{d}=1} , so that s 0 = P a / ( P a + P d ) {\displaystyle s_{0}=P_{a}/(P_{a}+P_{d})} . If we just look at the P of migration at the first site, we see that it is certainty minus all other possibilities: θ P m ′ + ( 1 − θ ) P m = 1 − ( 1 − θ ) P a − ( 1 − θ ) P d − θ P d ′ {\displaystyle \theta P_{m}'+(1-\theta )P_{m}=1-(1-\theta )P_{a}-(1-\theta )P_{d}-\theta P_{d}'} . Using this result, and rearranging, we find: s c = s 0 1 − θ 1 − θ ( 1 − P d ′ / ( P a + P d ) ) {\displaystyle s_{c}=s_{0}\,{\frac {1-\theta }{1-\theta \left(1-P_{d}'/(P_{a}+P_{d})\right)}}}
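As a numerical sanity check of the expressions above, the following Python sketch sums the sticking series directly and compares it with the closed form; the probability values are arbitrary illustrations, not measured quantities:

    # Probabilities for an empty site (adsorb, migrate, desorb; sum to 1)
    P_a, P_m, P_d = 0.4, 0.35, 0.25
    # For an occupied site only migration and desorption remain possible
    P_m_occ = 0.7
    P_d_occ = 1.0 - P_m_occ

    def s_c_series(theta, terms=10000):
        """Stick at the current site, or migrate once and try again."""
        migrate = theta * P_m_occ + (1 - theta) * P_m
        stick = (1 - theta) * P_a
        return stick * sum(migrate ** j for j in range(terms))

    def s_c_closed(theta):
        """Closed form obtained by summing the geometric series."""
        s_0 = P_a / (P_a + P_d)
        return s_0 * (1 - theta) / (1 - theta * (1 - P_d_occ / (P_a + P_d)))

    for theta in (0.0, 0.25, 0.5, 0.9):
        print(theta, round(s_c_series(theta), 6), round(s_c_closed(theta), 6))
    # Both columns agree at every coverage value.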
https://en.wikipedia.org/wiki/Sticking_coefficient
The sticking probability is the probability that molecules are trapped on surfaces and adsorb chemically. From Langmuir's adsorption isotherm , molecules cannot adsorb on surfaces when the adsorption sites are already occupied by other molecules, so the sticking probability can be expressed as follows: S = S 0 ( 1 − θ ) {\displaystyle S=S_{0}(1-\theta )} where S 0 {\displaystyle S_{0}} is the initial sticking probability and θ {\displaystyle \theta } is the surface coverage fraction ranging from 0 to 1. Similarly, when molecules adsorb on surfaces dissociatively, the sticking probability is S = S 0 ( 1 − θ ) 2 {\displaystyle S=S_{0}(1-\theta )^{2}} The square is because a dissociation of 1 molecule into 2 parts requires 2 adsorption sites. These equations are simple and can be easily understood but cannot explain experimental results. In 1958, P. Kisliuk [ 1 ] presented an equation for the sticking probability that can explain experimental results. In his theory, molecules are trapped in precursor states of physisorption before chemisorption . Then the molecules meet adsorption sites that molecules can adsorb to chemically, so the molecules behave as follows. If these sites are not occupied, a molecule may chemisorb, migrate to the next site, or desorb; if these sites are occupied, it may only migrate or desorb. Note that an occupied site is defined as one where there is a chemically bonded adsorbate so by definition it would be P a ′ = 0 {\displaystyle P_{a}'=0} . Then the sticking probability is, according to equation (6) of the reference, [ 1 ] S = S 0 ( 1 + θ 1 − θ K ) − 1 = S 0 1 − θ 1 + ( K − 1 ) θ {\displaystyle S=S_{0}\left(1+{\frac {\theta }{1-\theta }}K\right)^{-1}=S_{0}{\frac {1-\theta }{1+(K-1)\theta }}} K = P b ′ P a + P b {\displaystyle K={\frac {P_{b}'}{P_{a}+P_{b}}}} When K = 1 {\displaystyle K=1} , this equation is identical in result to Langmuir's adsorption isotherm .
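A short Python sketch of the Kisliuk expression (the parameter values are arbitrary illustrations) shows how the precursor constant K shapes the coverage dependence, and that K = 1 recovers the Langmuir form:

    def kisliuk(theta, S0=1.0, K=0.5):
        """Kisliuk sticking probability S = S0 (1 - theta) / (1 + (K - 1) theta)."""
        return S0 * (1 - theta) / (1 + (K - 1) * theta)

    for K in (0.2, 1.0, 3.0):
        print(K, [round(kisliuk(t, K=K), 3) for t in (0.0, 0.25, 0.5, 0.75)])
    # K < 1 (an efficient mobile precursor) keeps S high as coverage grows,
    # K = 1 reproduces S0 * (1 - theta), and K > 1 makes S fall off faster.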
https://en.wikipedia.org/wiki/Sticking_probability
Stickland fermentation , or the Stickland reaction , [ 1 ] is the name for a chemical reaction that involves the coupled oxidation and reduction of amino acids to organic acids. The electron donor amino acid is oxidised to a volatile carboxylic acid one carbon atom shorter than the original amino acid. For example, alanine with a three carbon chain is converted to acetate with two carbons. The electron acceptor amino acid is reduced to a volatile carboxylic acid the same length as the original amino acid. For example, glycine with two carbons is converted to acetate. In this way, amino acid fermenting microbes can avoid using hydrogen ions as electron acceptors to produce hydrogen gas. Amino acids can be Stickland acceptors, Stickland donors, or act as both donor and acceptor. Only histidine cannot be fermented by Stickland reactions, and is oxidised. With a typical amino acid mix, there is a 10% shortfall in Stickland acceptors, which results in hydrogen production . Under very low hydrogen partial pressures, increased uncoupled anaerobic oxidation has also been observed. It occurs in proteolytic clostridia such as C. perfringens , Clostridioides difficile , C. sporogenes , and C. botulinum . Additionally, sarcosine and betaine can act as electron acceptors. [ 2 ]
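A classic textbook example of such a donor–acceptor couple, consistent with the carbon counts described above, is alanine oxidised against two glycines; the overall stoichiometry below is the standard balanced form (given here as a sketch for orientation, not as a quotation from reference [ 1 ]):

    \mathrm{CH_3CH(NH_2)COOH} + 2\,\mathrm{CH_2(NH_2)COOH} + 2\,\mathrm{H_2O}
    \longrightarrow 3\,\mathrm{CH_3COOH} + 3\,\mathrm{NH_3} + \mathrm{CO_2}

Alanine (three carbons) is oxidised to acetate plus CO 2 , while each glycine (two carbons) is reduced to acetate of unchanged chain length.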
https://en.wikipedia.org/wiki/Stickland_fermentation
Sticky-shed syndrome is a condition created by the deterioration of the binders in a magnetic tape , which hold the ferric oxide magnetizable coating to its plastic carrier, or which hold the thinner back-coating on the outside of the tape. [ 1 ] This deterioration renders the tape unusable. [ 2 ] Some kinds of binder are known to break down over time, due to the absorption of moisture ( hydrolysis ). [ citation needed ] The symptoms of this breakdown can be immediately obvious even when rewinding the tape: tearing sounds and sluggish behavior. [ 3 ] If a tape with sticky-shed syndrome is played, the reels will make screeching or squeaking sounds, and the tape will leave dusty, rusty particles on the guides and heads. [ 4 ] In some cases, particularly with digital tapes, the symptoms are more subtle, causing intermittent dropouts. [ citation needed ] Some tapes may deteriorate because of a breakdown in the binder (the glue) that holds the oxide particles on the tape, or the back coating on the reverse side, if the tape was from any of the tape manufacturers who had inadvertently used an unstable binder formulation. That binder contained polyurethane that soaks up water (hydrolysis) and causes the urethane to rise to the tape's surface. [ citation needed ] This problem became known as the 'sticky-shed syndrome'. One explanation offered was that short strands of urethane were commonly used in tapes until it was discovered that mid-sized strands are better and were good at absorbing moisture. [ 3 ] Baking the tape at low temperature may temporarily restore the tape by driving the water molecules from the binder so that it can be safely copied to another tape or a different format. After baking, the tape may remain in good condition for approximately a month. If the tape re-deteriorates, it may be possible to bake the tape again. [ citation needed ] Tapes affected by sticky-shed are those that were made by Ampex / Quantegy such as 406/407, 456/457, 499, and consumer/audiophile grade back coated tapes such as Grand Master and 20-20+, [ citation needed ] as well as those made by Scotch/3M including professional tapes such as 206/207, 226/227, 262 (though not all 262 is backcoated and therefore isn't affected), 808, and 986 as well as audiophile tapes such as "Classic" and "Master-XS". [ citation needed ] Though less common, many Sony -branded tapes such as PR-150, SLH, ULH, and FeCr have also been reported to suffer from sticky-shed. [ citation needed ] Blank cassettes from the 70s-90s are unaffected because the hygroscopic binder was not used in cassette formulations. However, some cassette tape formulations do suffer from a similar problem caused by fatty acids working to the surface of the tape that can cause sticking to heads and guides and severe modulation of signals through the playback head until it is cleaned. Ampex-branded U-matic cassette tapes are also now exhibiting sticky-shed problems, similar to their reel tape media. [ citation needed ] As of the 2020s, digitizing companies have documented examples of sticky-shed from Maxell . TDK has been showing signs as of late of shedding its lubricant in the form of a white powder or white/yellowish goo. This has shown up on the TDK SA and some LX and BX tapes. [ citation needed ] There have been a few reports of some tape from the current manufacturers ATR and RMGI exhibiting symptoms of sticky-shed. 
But these may be isolated incidents relating to prototype or single bad batches and are not necessarily indicative of the overall product line integrity. [ citation needed ] BASF tape production did not use the unstable formulation, and their tape production rarely shows this type of coating instability, although BASF LH Super SM cassettes manufactured in the mid-70s are prone to the problem. Certain batches of Chromdioxid Extra II C-90s, produced around 1989-1991 and sold in the UK, shed a white powder that would coat the record/playback head after a few months of use. The slightly higher performance Chromdioxid Super II and Chromdioxid Maxima C-90 cassettes were unaffected. [ citation needed ] As of 2015, some 35 mm magnetic fullcoat tapes produced by Kodak , such as those used for the audio portion of older IMAX films, are also reported to be exhibiting sticky-shed. [ 5 ] As tapes remain in storage for a longer time, it is possible that other binder formulations may develop problems. [ citation needed ] Current solutions to sticky-shed syndrome seek to safely remove the unwanted moisture from the tape binder. Two different strategies are commonly employed: applying heat to the tape (commonly called 'baking'), and changing the environment to lower the humidity . Baking is widely practiced but can damage tapes if not done properly. While modification of humidity by safely controlling the environment may take significantly longer, its major benefit is that it does not irreparably damage the tape. Baking is a common practice for temporarily repairing sticky-shed syndrome. There is no standard equipment or practice for baking, so each engineer is left to create their own methods and materials. Generally, tapes are baked at low temperatures for relatively long periods of time, such as 130 to 140 °F (54 to 60 °C) for 1 to 8 hours. [ 6 ] Tapes wider than 1/4 inch (0.635 cm) may take longer. It is commonly thought that baking a tape will temporarily remove the moisture that has accumulated in the binder. A treated tape will reportedly function like new for a few weeks to a few months before it will reabsorb moisture and be unplayable again. [ 7 ] Baking cannot be used with acetate tapes, nor is it needed. [ 3 ] Tape baking should only be done when necessary, since there is a risk of damaging the tape from the heat. However, there are some important signs that show when a tape needs baking. The signs include flakes and sticky goo on the tape heads or transport machinery. The usual symptom is squealing when the tape passes the playback head or other fixed parts of a tape player. The squealing is audible directly from the tape and usually also transmitted electronically through the output of the tape recorder as a wideband distortion of the playback signal. Continuous use of a shedding tape permanently damages it, as oxide is literally torn off the tape. This flaking residue can be seen and can feel gummy while still on the tape's surface. There is also a risk of damage to the player. Another symptom is the tape sounding dull and distorted. In a video recording, degradation can be represented by audio-visual dropouts. [ 8 ]
https://en.wikipedia.org/wiki/Sticky-shed_syndrome
DNA ends refer to the properties of the ends of linear DNA molecules, which in molecular biology are described as "sticky" or "blunt" based on the shape of the complementary strands at the terminus. In sticky ends , one strand is longer than the other (typically by at least a few nucleotides), such that the longer strand has bases which are left unpaired. In blunt ends , both strands are of equal length – i.e. they end at the same base position, leaving no unpaired bases on either strand. The concept is used in molecular biology , in cloning , or when subcloning insert DNA into vector DNA . Such ends may be generated by restriction enzymes that break the molecule's phosphodiester backbone at specific locations, which themselves belong to a larger class of enzymes called exonucleases and endonucleases . A restriction enzyme that cuts the backbones of both strands at non-adjacent locations leaves a staggered cut, generating two overlapping sticky ends, while an enzyme that makes a straight cut (at locations directly across from each other on both strands) generates two blunt ends. [ 1 ] A single-stranded non-circular DNA molecule has two non-identical ends, the 3' end and the 5' end (usually pronounced "three prime end" and "five prime end"). The numbers refer to the numbering of carbon atoms in the deoxyribose , which is a sugar forming an important part of the backbone of the DNA molecule. In the backbone of DNA the 5' carbon of one deoxyribose is linked to the 3' carbon of another by a phosphodiester bond linkage. [ 2 ] When a molecule of DNA is double stranded, as DNA usually is, the two strands run in opposite directions. Therefore, one end of the molecule will have the 3' end of strand 1 and the 5' end of strand 2, and vice versa in the other end. [ 2 ] However, the fact that the molecule is two stranded allows numerous different variations. The simplest DNA end of a double stranded molecule is called a blunt end . Blunt ends are also known as non-cohesive ends. In a blunt-ended molecule, both strands terminate in a base pair . Blunt ends are not always desired in biotechnology since when using a DNA ligase to join two molecules into one, the yield is significantly lower with blunt ends. [ 3 ] When performing subcloning, it also has the disadvantage of potentially inserting the insert DNA in the opposite orientation desired. On the other hand, blunt ends are always compatible with each other. Here is an example of a small piece of blunt-ended DNA: Non-blunt ends are created by various overhangs . An overhang is a stretch of unpaired nucleotides in the end of a DNA molecule. These unpaired nucleotides can be in either strand, creating either 3' or 5' overhangs. [ 3 ] These overhangs are in most cases palindromic. The simplest case of an overhang is a single nucleotide. This is most often adenine and is created as a 3' overhang by some DNA polymerases . Most commonly this is used in cloning PCR products created by such an enzyme. The product is joined with a linear DNA molecule with a 3' thymine overhang. Since adenine and thymine form a base pair , this facilitates the joining of the two molecules by a ligase, yielding a circular molecule. Here is an example of an A-overhang: Longer overhangs are called cohesive ends or sticky ends . [ 3 ] They are most often created by restriction endonucleases when they cut DNA. Very often they cut the two DNA strands four base pairs from each other, creating a four-base 3' overhang in one molecule and a complementary 3' overhang in the other. 
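As a concrete illustration of such a staggered cut, the following Python sketch models the cutting pattern of EcoRI (G^AATTC, producing four-base 5' overhangs; the enzyme is discussed below), on an input sequence invented for the example:

    SITE = "GAATTC"   # EcoRI recognition site, cut as G^AATTC on each strand

    def revcomp(seq):
        """Bottom strand read 5'->3' (reverse complement of the top strand)."""
        return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def ecoRI_fragments(seq):
        """Cut the top strand after the G and report the two pieces plus
        the four unpaired bases (the 5' overhang) between the staggered cuts."""
        i = seq.find(SITE)
        if i < 0:
            return None
        top_cut, bottom_cut = i + 1, i + 5   # the two cuts are four bases apart
        return seq[:top_cut], seq[top_cut:], seq[top_cut:bottom_cut]

    left, right, overhang = ecoRI_fragments("TTGAATTCAA")
    print(left, right, overhang)          # TTG AATTCAA AATT
    print(overhang == revcomp(overhang))  # True: AATT is self-complementary,
    # which is why two EcoRI-cut ends can anneal and be ligated back together.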
These ends are called cohesive since they are easily joined back together by a ligase. For example, these two "sticky" ends (four-base 5' overhangs) are compatible: Also, since different restriction endonucleases usually create different overhangs, it is possible to create a plasmid by excising a piece of DNA (using a different enzyme for each end) and then joining it to another DNA molecule with ends trimmed by the same enzymes. Since the overhangs have to be complementary in order for the ligase to work, the two molecules can only join in one orientation. This is often highly desirable in molecular biology . Sticky ends can be converted to blunt ends by a process known as blunting, which involves filling in the sticky end with complementary nucleotides. This yields a blunt end, however, sticky ends are often preferable, meaning the main use of this method is to label DNA by using radiolabeled nucleotides to fill the gap. [ 4 ] Blunt ends can also be converted to sticky ends by addition of double-stranded linker sequences containing recognition sequences for restriction endonucleases that create sticky ends and subsequent application of the restriction enzyme or by homopolymer tailing, which refers to extending the molecule's 3' ends with only one nucleotide, allowing for specific pairing with the matching nucleotide (e.g. poly-C with poly-G). [ 3 ] Across from each single strand of DNA, we typically see adenine pair with thymine , and cytosine pair with guanine to form a parallel complementary strand as described below. Two nucleotide sequences which correspond to each other in this manner are referred to as complementary: A frayed end refers to a region of a double stranded (or other multi-stranded) DNA molecule near the end with a significant proportion of non-complementary sequences; that is, a sequence where nucleotides on the adjacent strands do not match up correctly: The term "frayed" is used because the incorrectly matched nucleotides tend to avoid bonding, thus appearing similar to the strands in a fraying piece of rope. Although non-complementary sequences are also possible in the middle of double stranded DNA, mismatched regions away from the ends are not referred to as "frayed". Ronald W. Davis first discovered sticky ends as the product of the action of EcoRI , the restriction endonuclease . [ 5 ] Sticky end links are different in their stability. Free energy of formation can be measured to estimate stability. Free energy approximations can be made for different sequences from data related to oligonucleotide UV thermal denaturation curves. [ 6 ] Also predictions from molecular dynamics simulations show that some sticky end links are much stronger in stretch than the others. [ 7 ]
https://en.wikipedia.org/wiki/Sticky_and_blunt_ends
In the field of management , sticky information is information that is costly to acquire, transfer, and use in a new location. Eric von Hippel coined the term around 1994. Because of the importance of sticky, local information for some kinds of innovation and product customization, von Hippel suggests that in certain circumstances the innovation will be increasingly accomplished by end-users ( user innovation ) rather than an expert provider. Toolkits for user innovation can be used to support end-users in their innovation process. [ 1 ] [ 2 ] In the field of economics , sticky information is related to the concept of sticky prices . That is, members of the economy are making decisions with lagged information.
https://en.wikipedia.org/wiki/Sticky_information
Sticky keys is an accessibility feature of some graphical user interfaces which assists users who have physical disabilities or helps users reduce repetitive strain injury . It serializes keystrokes; instead of being required to press multiple keys at a time, the user can press and release a modifier key , such as ⇧ Shift , Ctrl , Alt , or the Windows key , and have it remain active until any other key is pressed. Sticky keys functionality is available in Microsoft Windows , macOS , ChromeOS and KDE Plasma as Sticky Keys , [ 1 ] [ 2 ] and on Unix / X11 systems as part of the AccessX utility. [ 3 ] [ 4 ] Sticky Keys was first [ when? ] introduced to System 6 as part of the Easy Access extension , which also included mouse keys functionality. [ 5 ] In 1994, Solaris 2.4 shipped with the AccessX utility, which also provided sticky keys and mouse keys functionality. [ 6 ]
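The serializing behaviour can be sketched as a small state machine. The following Python toy (an illustration only, not based on any platform's actual implementation) latches modifier presses until the next non-modifier key arrives:

    MODIFIERS = {"Shift", "Ctrl", "Alt", "Win"}

    class StickyKeys:
        """Latch modifiers and emit a single combined chord later."""
        def __init__(self):
            self.latched = set()

        def press(self, key):
            if key in MODIFIERS:
                self.latched.add(key)   # remains active, nothing emitted yet
                return None
            chord = "+".join(sorted(self.latched) + [key])
            self.latched.clear()        # released once any other key is pressed
            return chord

    sk = StickyKeys()
    outputs = [sk.press(k) for k in ("Ctrl", "Alt", "Delete")]
    print([o for o in outputs if o])    # ['Alt+Ctrl+Delete'] from three presses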
https://en.wikipedia.org/wiki/Sticky_keys
A sticky mat , also called a tacky mat or cleanroom mat , is a mat with an adhesive surface that is placed at the entrances or exits to certain workplaces to remove contaminants from the bottoms of footwear and wheeled carts such as hand trucks . [ 2 ] They are an example of an engineering control within the hierarchy of hazard controls . [ 1 ] Sticky mats are typically used in cleanrooms [ 2 ] and construction sites. [ 3 ] Their purpose is to prevent contaminants from entering the site with personnel, and hazardous materials from exiting. In a cleanroom setting, airborne particles that are not removed by the ventilation system deposit themselves onto a surface, where they can be transported by personnel walking on or past them. [ 2 ] [ 4 ] Sticky mats can be temporary or permanent. Temporary sticky mats are made of a stack of adhesive plastic film layers that are periodically peeled off and discarded. Permanent mats are made of a polymer , usually polyester - or polyvinyl chloride -based, that binds particles through electrostatic forces. The peeling process for temporary mats may dislodge particles from the mat, causing inhalation risk. [ 2 ] [ 5 ] However, permanent mats must be washed with a mop and detergent , which is more time-consuming and may be done less often. [ 5 ] A 2012 study found that temporary adhesive mats reduced the particle level on shoes and overshoes by 20–50% while permanent polymeric flooring reduced it by approximately 80%, and that adhesive mats released more particles when they were dirtier and when they were peeled quickly. [ 2 ] However, sticky mats placed outside the entrance to an operating room or suite have not been shown to reduce the number of organisms on shoes or stretcher wheels, nor do they reduce the risk of surgical site infections . [ 6 ]
https://en.wikipedia.org/wiki/Sticky_mat
The stick–slip phenomenon , also known as the slip–stick phenomenon or simply stick–slip , is a type of motion exhibited by objects in contact sliding over one another. The motion of these objects is usually not perfectly smooth, but rather irregular, with brief accelerations (slips) interrupted by stops (sticks). Stick–slip motion is normally connected to friction , and may generate vibration (noise) or be associated with mechanical wear of the moving objects, and is thus often undesirable in mechanical devices. [ 1 ] On the other hand, stick–slip motion can be useful in some situations, such as the movement of a bow across a string to create musical tones in a bowed string instrument . [ 2 ] With stick–slip there is typically a jagged type of behavior for the friction force as a function of time as illustrated in the static kinetic friction figure. Initially there is relatively little movement and the force climbs until it reaches some critical value which is set by the multiplication of the static friction coefficient and the applied load —the retarding force here follows the standard ideas of friction from Amontons' laws . Once this force is exceeded movement starts at a much lower load which is determined by the kinetic friction coefficient which is almost always smaller than the static coefficient. At times the object moving can get 'stuck', with local rises in the force before it starts to move again. There are many causes of this depending upon the size scale, from atomic to processes involving millions of atoms. [ 3 ] [ 4 ] Stick–slip can be modeled as a mass coupled by an elastic spring to a constant drive force (see the model sketch). The drive system V applies a constant force, loading spring R and increasing the pushing force against load M. This force increases until retarding force from the static friction coefficient between load and floor is exceeded. The load then starts sliding, and the friction coefficient decreases to the value corresponding to load times the dynamic friction . Since this frictional force will be lower than the static value, the load accelerates until the decompressing spring can no longer generate enough force to overcome dynamic friction, and the load stops moving. The pushing force due to the spring builds up again, and the cycle repeats. [ 1 ] [ 2 ] Stick–slip may be caused by many different phenomena, depending on the types of surfaces in contact and also the scale; it occurs with everything from the sliding of atomic force microscope tips to large tribometers . For rough surfaces, it is known that asperities play a major role in friction . [ 5 ] The bumping together of asperities on the surface creates momentary sticks. For dry surfaces with regular microscopic topography, the two surfaces may need to creep at high friction for certain distances (in order for bumps to move past one another), until a smoother, lower-friction contact is formed. On lubricated surfaces, the lubricating fluid may undergo transitions from a solid-like state to a liquid-like state at certain forces, causing a transition from sticking to slipping. [ 1 ] On very smooth surfaces, stick–slip behavior may result from coupled phonons (at the interface between the substrate and the slider) that are pinned in an undulating potential well, sticking or slipping with thermal fluctuations . [ 6 ] Stick–slip occurs on all types of materials and on enormously varying length scales. 
[ 7 ] The frequency of slips depends on the force applied to the sliding load, with a higher force corresponding to a higher frequency of slip. [ 8 ] Stick–slip motion is ubiquitous in systems with sliding components, such as disk brakes , bearings , electric motors, wheels on roads or railways, and in mechanical joints . [ 9 ] Stick–slip also has been observed in articular cartilage in mild loading and sliding conditions, which could result in abrasive wear of the cartilage. [ 10 ] Many familiar sounds are caused by stick–slip motion, such as the squeal of chalk on a chalkboard , the squeak of basketball shoes on a basketball court, and the sound made by the spiny lobster . [ 8 ] [ 11 ] [ 12 ] Stick–slip motion is used to generate musical notes in bowed string instruments, [ 2 ] the glass harp [ 13 ] and the singing bowl . [ 14 ] Stick–slip can also be observed on the atomic scale using a friction force microscope . [ 15 ] The behaviour of seismically active faults is also explained using a stick–slip model, with earthquakes being generated during the periods of rapid slip. [ 16 ]
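The spring-block model described above is straightforward to integrate numerically. The following Python sketch (with arbitrary illustrative parameter values) reproduces the cycle of sticking, force build-up, breakaway, slip and re-sticking:

    import math

    m, k, v_drive, g = 1.0, 50.0, 0.1, 9.81   # mass, spring constant, drive speed, gravity
    mu_s, mu_k = 0.6, 0.3                     # static coefficient exceeds kinetic
    dt, steps = 1e-3, 20000

    x, v, x_drive, sticking = 0.0, 0.0, 0.0, True

    for step in range(steps):
        x_drive += v_drive * dt               # far end of the spring moves steadily
        spring = k * (x_drive - x)
        if sticking and abs(spring) > mu_s * m * g:
            sticking = False                  # breakaway: static limit exceeded
        if not sticking:
            friction = -math.copysign(mu_k * m * g, v) if v else 0.0
            v += (spring + friction) / m * dt
            x += v * dt
            if v <= 0.0:                      # the block halts and sticks again
                v, sticking = 0.0, True
        if step % 4000 == 0:
            state = "stick" if sticking else "slip"
            print(f"t={step * dt:4.1f}s  spring stretch={x_drive - x:+.3f} m  ({state})")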
https://en.wikipedia.org/wiki/Stick–slip_phenomenon
Stiction (a portmanteau of the words static and friction ) [ 1 ] is the force that needs to be overcome to enable relative motion of stationary objects in contact. [ 2 ] Any solid objects pressing against each other (but not sliding) will require some threshold of force parallel to the surface of contact in order to overcome static adhesion. [ 3 ] Stiction is a threshold , not a continuous force. However, stiction might also be an illusion made by the rotation of kinetic friction . [ 4 ] In situations where two surfaces with areas below the micrometer scale come into close proximity (as in an accelerometer ), they may adhere together. At this scale, electrostatic and/or Van der Waals and hydrogen bonding forces become significant. The phenomenon of two such surfaces being adhered together in this manner is also called stiction. Stiction may be related to hydrogen bonding or residual contamination. Stiction is also the same threshold at which a rolling object would begin to slide over a surface rather than rolling at the expected rate (and in the case of a wheel, in the expected direction). In this case, it is called "rolling friction" or μ r . This is why driver training courses teach that, if a car begins to slide sideways, the driver should avoid braking and instead try to steer in the same direction as the slide. This gives the wheels a chance to regain static contact by rolling, which gives the driver some control again. Similarly, when trying to accelerate rapidly (particularly from a standing start) an overenthusiastic driver may "squeal" the driving wheels, but this impressive display of noise and smoke is less effective than maintaining static contact with the road. Many stunt -driving techniques (such as drifting ) are done by deliberately breaking and/or regaining this rolling friction. A car on a slippery surface can slide a long way with little control over orientation if the driver "locks" the wheels in stationary positions by pressing hard on the brakes. Anti-lock braking systems use wheel speed sensors and vehicle speed sensors to determine if any of the wheels have stopped turning. The ABS module then briefly releases pressure to any wheel that is spinning too slowly to not be slipping, to allow the road surface to begin turning the wheel freely again. Anti-lock brakes can be much more effective than cadence braking , which is essentially a manual technique for doing the same thing. Stiction refers to the characteristic of start-and-stop–type motion of a mechanical assembly. Consider a mechanical element slowly increasing an external force on an assembly at rest that is designed for the relative rotation or sliding of its parts in contact. The static contact friction between the assembly parts resists movement, causing the spring moments in the assembly to store mechanical energy. Any part of the assembly that can elastically bend, even microscopically, and exert a restoring force contributes a spring moment. Thus the "springs" in an assembly might not be obvious to the eye. The increasing external force finally exceeds the static friction resisting force, and the spring moments, released, impulsively exert their restoring forces on both the moving assembly parts and, by Newton's third law, in reaction on the external forcing element. The assembly parts then impulsively accelerate with respect to each other, though resisted by dynamic contact friction (in this context very much less than the static friction). 
However, the forcing element cannot accelerate at the same pace, fails to keep up, and loses contact. The external force on the moving assembly momentarily drops to zero for lack of forcing mechanical contact even though the external force element continues its motion. The moving part then decelerates to a stop from the dynamic contact friction. The cycle repeats as the forcing element catches up to contact again. Stick, store spring energy, impulsively release spring energy, accelerate, decelerate, stop, stick. Repeat. Stiction is a problem for the design and materials science of many moving linkages. This is particularly the case for linear sliding joints, rather than rotating pivots. Owing to simple geometry, the moving distance of a sliding joint in two comparable linkages is longer than the circumferential travel of a pivoting bearing, thus the forces involved (for equivalent work ) are lower and stiction forces become proportionally more significant. This issue has often led to linkages being redesigned from sliding to purely pivoted structures, just to avoid problems with stiction. An example is the Chapman strut , a suspension linkage. [ 5 ] During surface micromachining, stiction or adhesion between the substrate (usually silicon -based) and the microstructure occurs during the isotropic wet etching of the sacrificial layer. The capillary forces due to the surface tension of the liquid between the microstructure and substrate during drying of the wet etchant cause the two surfaces to adhere together. Separating the two surfaces is often complicated due to the fragile nature of the microstructure. Stiction is often circumvented by the use of a sublimating fluid (often supercritical CO 2 , which has extremely low surface tension) in a drying process where the liquid phase is bypassed. CO 2 displaces the rinsing fluid and is heated past the supercritical point. As the chamber pressure is slowly released the CO 2 sublimates, thereby preventing stiction.
https://en.wikipedia.org/wiki/Stiction
In mathematics , the Stiefel manifold V k ( R n ) {\displaystyle V_{k}(\mathbb {R} ^{n})} is the set of all orthonormal k -frames in R n . {\displaystyle \mathbb {R} ^{n}.} That is, it is the set of ordered orthonormal k -tuples of vectors in R n . {\displaystyle \mathbb {R} ^{n}.} It is named after Swiss mathematician Eduard Stiefel . Likewise one can define the complex Stiefel manifold V k ( C n ) {\displaystyle V_{k}(\mathbb {C} ^{n})} of orthonormal k -frames in C n {\displaystyle \mathbb {C} ^{n}} and the quaternionic Stiefel manifold V k ( H n ) {\displaystyle V_{k}(\mathbb {H} ^{n})} of orthonormal k -frames in H n {\displaystyle \mathbb {H} ^{n}} . More generally, the construction applies to any real, complex, or quaternionic inner product space . In some contexts, a non- compact Stiefel manifold is defined as the set of all linearly independent k -frames in R n , C n , {\displaystyle \mathbb {R} ^{n},\mathbb {C} ^{n},} or H n ; {\displaystyle \mathbb {H} ^{n};} this is homotopy equivalent to the more restrictive definition, as the compact Stiefel manifold is a deformation retract of the non-compact one, by employing the Gram–Schmidt process . Statements about the non-compact form correspond to those for the compact form, replacing the orthogonal group (or unitary or symplectic group) with the general linear group . Let F {\displaystyle \mathbb {F} } stand for R , C , {\displaystyle \mathbb {R} ,\mathbb {C} ,} or H . {\displaystyle \mathbb {H} .} The Stiefel manifold V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} can be thought of as a set of n × k matrices by writing a k -frame as a matrix of k column vectors in F n . {\displaystyle \mathbb {F} ^{n}.} The orthonormality condition is expressed by A ∗ A = I k {\displaystyle A^{*}A=I_{k}} , where A ∗ denotes the conjugate transpose of A and I k {\displaystyle I_{k}} denotes the k × k identity matrix . We then have V k ( F n ) = { A ∈ F n × k : A ∗ A = I k } {\displaystyle V_{k}(\mathbb {F} ^{n})=\left\{A\in \mathbb {F} ^{n\times k}:A^{*}A=I_{k}\right\}} . The topology on V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} is the subspace topology inherited from F n × k . {\displaystyle \mathbb {F} ^{n\times k}.} With this topology V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} is a compact manifold whose dimension is given by n k − k ( k + 1 ) / 2 {\displaystyle nk-k(k+1)/2} in the real case, 2 n k − k 2 {\displaystyle 2nk-k^{2}} in the complex case, and 4 n k − k ( 2 k − 1 ) {\displaystyle 4nk-k(2k-1)} in the quaternionic case. Each of the Stiefel manifolds V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} can be viewed as a homogeneous space for the action of a classical group in a natural manner. Every orthogonal transformation of a k -frame in R n {\displaystyle \mathbb {R} ^{n}} results in another k -frame, and any two k -frames are related by some orthogonal transformation. In other words, the orthogonal group O( n ) acts transitively on V k ( R n ) . {\displaystyle V_{k}(\mathbb {R} ^{n}).} The stabilizer subgroup of a given frame is the subgroup isomorphic to O( n − k ) which acts nontrivially on the orthogonal complement of the space spanned by that frame. Likewise the unitary group U( n ) acts transitively on V k ( C n ) {\displaystyle V_{k}(\mathbb {C} ^{n})} with stabilizer subgroup U( n − k ) and the symplectic group Sp( n ) acts transitively on V k ( H n ) {\displaystyle V_{k}(\mathbb {H} ^{n})} with stabilizer subgroup Sp( n − k ). In each case V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} can be viewed as a homogeneous space: V k ( R n ) ≅ O ( n ) / O ( n − k ) {\displaystyle V_{k}(\mathbb {R} ^{n})\cong O(n)/O(n-k)} , V k ( C n ) ≅ U ( n ) / U ( n − k ) {\displaystyle V_{k}(\mathbb {C} ^{n})\cong U(n)/U(n-k)} , and V k ( H n ) ≅ S p ( n ) / S p ( n − k ) {\displaystyle V_{k}(\mathbb {H} ^{n})\cong Sp(n)/Sp(n-k)} . When k = n , the corresponding action is free so that the Stiefel manifold V n ( F n ) {\displaystyle V_{n}(\mathbb {F} ^{n})} is a principal homogeneous space for the corresponding classical group.
When k is strictly less than n then the special orthogonal group SO( n ) also acts transitively on V k ( R n ) {\displaystyle V_{k}(\mathbb {R} ^{n})} with stabilizer subgroup isomorphic to SO( n − k ), so that V k ( R n ) ≅ S O ( n ) / S O ( n − k ) {\displaystyle V_{k}(\mathbb {R} ^{n})\cong SO(n)/SO(n-k)} for k < n . The same holds for the action of the special unitary group on V k ( C n ) {\displaystyle V_{k}(\mathbb {C} ^{n})} : V k ( C n ) ≅ S U ( n ) / S U ( n − k ) {\displaystyle V_{k}(\mathbb {C} ^{n})\cong SU(n)/SU(n-k)} . Thus for k = n − 1, the Stiefel manifold is a principal homogeneous space for the corresponding special classical group. The Stiefel manifold can be equipped with a uniform measure , i.e. a Borel measure that is invariant under the action of the groups noted above. For example, V 1 ( R 2 ) {\displaystyle V_{1}(\mathbb {R} ^{2})} which is isomorphic to the unit circle in the Euclidean plane, has as its uniform measure the natural uniform measure ( arc length ) on the circle. It is straightforward to sample this measure on V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} using Gaussian random matrices : if A ∈ F n × k {\displaystyle A\in \mathbb {F} ^{n\times k}} is a random matrix with independent entries identically distributed according to the standard normal distribution on F {\displaystyle \mathbb {F} } and A = QR is the QR factorization of A , then the matrices, Q ∈ F n × k , R ∈ F k × k {\displaystyle Q\in \mathbb {F} ^{n\times k},R\in \mathbb {F} ^{k\times k}} are independent random variables and Q is distributed according to the uniform measure on V k ( F n ) . {\displaystyle V_{k}(\mathbb {F} ^{n}).} This result is a consequence of the Bartlett decomposition theorem . [ 1 ] A 1-frame in F n {\displaystyle \mathbb {F} ^{n}} is nothing but a unit vector, so the Stiefel manifold V 1 ( F n ) {\displaystyle V_{1}(\mathbb {F} ^{n})} is just the unit sphere in F n . {\displaystyle \mathbb {F} ^{n}.} Therefore V 1 ( R n ) = S n − 1 {\displaystyle V_{1}(\mathbb {R} ^{n})=S^{n-1}} , V 1 ( C n ) = S 2 n − 1 {\displaystyle V_{1}(\mathbb {C} ^{n})=S^{2n-1}} , and V 1 ( H n ) = S 4 n − 1 {\displaystyle V_{1}(\mathbb {H} ^{n})=S^{4n-1}} . Given a 2-frame in R n , {\displaystyle \mathbb {R} ^{n},} let the first vector define a point in S n −1 and the second a unit tangent vector to the sphere at that point. In this way, the Stiefel manifold V 2 ( R n ) {\displaystyle V_{2}(\mathbb {R} ^{n})} may be identified with the unit tangent bundle to S n −1 . When k = n or n −1 we saw in the previous section that V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} is a principal homogeneous space, and therefore diffeomorphic to the corresponding classical group: V n ( F n ) ≅ O ( n ) , U ( n ) or S p ( n ) {\displaystyle V_{n}(\mathbb {F} ^{n})\cong O(n),\ U(n)\ {\text{or}}\ Sp(n)} , and V n − 1 ( R n ) ≅ S O ( n ) {\displaystyle V_{n-1}(\mathbb {R} ^{n})\cong SO(n)} , V n − 1 ( C n ) ≅ S U ( n ) {\displaystyle V_{n-1}(\mathbb {C} ^{n})\cong SU(n)} . Given an orthogonal inclusion between vector spaces X ↪ Y , {\displaystyle X\hookrightarrow Y,} the image of a set of k orthonormal vectors is orthonormal, so there is an induced closed inclusion of Stiefel manifolds, V k ( X ) ↪ V k ( Y ) , {\displaystyle V_{k}(X)\hookrightarrow V_{k}(Y),} and this is functorial . More subtly, given an n -dimensional vector space X , the dual basis construction gives a bijection between bases for X and bases for the dual space X ∗ , {\displaystyle X^{*},} which is continuous, and thus yields a homeomorphism of top Stiefel manifolds V n ( X ) → ∼ V n ( X ∗ ) . {\displaystyle V_{n}(X){\stackrel {\sim }{\to }}V_{n}(X^{*}).} This is also functorial for isomorphisms of vector spaces. There is a natural projection p from the Stiefel manifold V k ( F n ) {\displaystyle V_{k}(\mathbb {F} ^{n})} to the Grassmannian of k -planes in F n {\displaystyle \mathbb {F} ^{n}} which sends a k -frame to the subspace spanned by that frame. The fiber over a given point P in G k ( F n ) {\displaystyle G_{k}(\mathbb {F} ^{n})} is the set of all orthonormal k -frames contained in the space P . This projection has the structure of a principal G -bundle where G is the associated classical group of degree k .
Take the real case for concreteness. There is a natural right action of O($k$) on $V_k(\mathbb{R}^n)$ which rotates a $k$-frame in the space it spans. This action is free but not transitive. The orbits of this action are precisely the orthonormal $k$-frames spanning a given $k$-dimensional subspace; that is, they are the fibers of the map $p$. Similar arguments hold in the complex and quaternionic cases. We then have a sequence of principal bundles: $$\mathrm{O}(k) \to V_k(\mathbb{R}^n) \to G_k(\mathbb{R}^n), \qquad \mathrm{U}(k) \to V_k(\mathbb{C}^n) \to G_k(\mathbb{C}^n), \qquad \mathrm{Sp}(k) \to V_k(\mathbb{H}^n) \to G_k(\mathbb{H}^n).$$ The vector bundles associated to these principal bundles via the natural action of $G$ on $\mathbb{F}^k$ are just the tautological bundles over the Grassmannians. In other words, the Stiefel manifold $V_k(\mathbb{F}^n)$ is the orthogonal, unitary, or symplectic frame bundle associated to the tautological bundle on a Grassmannian. When one passes to the $n \to \infty$ limit, these bundles become the universal bundles for the classical groups.

The Stiefel manifolds fit into a family of fibrations, $$V_{k-1}(\mathbb{R}^{n-1}) \to V_k(\mathbb{R}^n) \to S^{n-1},$$ obtained by sending a $k$-frame to its last vector; thus the first non-trivial homotopy group of the space $V_k(\mathbb{R}^n)$ is in dimension $n - k$. Moreover, $$\pi_{n-k}(V_k(\mathbb{R}^n)) \cong {\begin{cases}\mathbb{Z} & n - k \text{ even or } k = 1, \\ \mathbb{Z}/2\mathbb{Z} & n - k \text{ odd and } k > 1.\end{cases}}$$ This result is used in the obstruction-theoretic definition of Stiefel–Whitney classes.
https://en.wikipedia.org/wiki/Stiefel_manifold
The Stieglitz Award was established in 1940 using funds from the memorial legacy of Professor Julius Stieglitz, who worked at the University of Chicago from 1892 until his death in 1937. The lecture was presented in alternate years by the University of Chicago Chemistry Department and the Chicago Section of the American Chemical Society until 1994. Presentation was paused from 1994 until 1999, by which time the funds had built up to a level sufficient to support a stipend of $1000 plus expenses each year. Source: ACS Chicago Section
https://en.wikipedia.org/wiki/Stieglitz_Lecture
The Stieglitz rearrangement is a rearrangement reaction in organic chemistry, named after the American chemist Julius Stieglitz (1867–1937) and first investigated by him and Paul Nicholas Leech in 1913. [1] It describes the 1,2-rearrangement of trityl amine derivatives to triaryl imines. [1][2] It is comparable to a Beckmann rearrangement, which also involves a substitution at a nitrogen atom through a carbon-to-nitrogen shift. [3] As an example, triaryl hydroxylamines can undergo a Stieglitz rearrangement by dehydration and the shift of a phenyl group after activation with phosphorus pentachloride, yielding the respective triaryl imine, a Schiff base. [4][5]

In general, the term "Stieglitz rearrangement" is used to describe a wide variety of rearrangement reactions of amines to imines. [4] Although it is generally associated with the rearrangement of triaryl hydroxylamines, which are well documented in the academic literature, Stieglitz rearrangements can also occur on alkylated amine derivatives, [6] haloamines [7][8] and azides, [9][10] as well as other activated amine derivatives. [4]

The Stieglitz rearrangement's reaction mechanism, together with the products and starting materials involved, makes it closely related to the Beckmann rearrangement, which can be used for the synthesis of carboxamides. [11] Both rearrangement reactions involve a carbon-to-nitrogen shift, usually after electrophilic activation of the leaving group on the nitrogen atom. [4][12][13] The main difference in the starting materials, however, is their degree of saturation. While a Stieglitz rearrangement takes place on saturated amine derivatives with a σ-single bond, the typical starting material for a Beckmann rearrangement is an oxime (a hydroxylimine) with a C=N double bond. [4][14] In a Beckmann rearrangement, the acid-catalyzed carbon-to-nitrogen migration takes place on the oxime to yield a nitrilium ion intermediate. [15] In principle, the first step of a Stieglitz rearrangement proceeds in an analogous way. [4] However, after the generation of the positively charged iminium ion through the π-interaction between the nitrogen lone pair and the electron-deficient carbon, the pathways diverge. In the Stieglitz rearrangement, a charge-neutral state of the molecule can be achieved by dissociation of a proton. Alternatively, if the starting material does not possess any amino protons, the neutral state can be achieved with an external reducing agent, such as sodium borohydride, which reduces the iminium ion intermediate to the corresponding saturated amine. [4][16] In the Beckmann rearrangement such a proton is also missing, and stabilization of the intermediate proceeds via nucleophilic addition of a water molecule, dissociation of a proton, and tautomerism from the imidic acid to the carboxamide. [17]

Although the original Stieglitz reaction is best known for the rearrangement of trityl hydroxylamines, there are several variations which involve good leaving groups as N-substituents (such as halogens and sulfonates). Different reagents are commonly applied, depending on the exact nature of the substrate. [4] For the rearrangement of trityl hydroxylamines, Lewis acids such as phosphorus pentachloride (PCl5), phosphorus pentoxide (P2O5) or boron trifluoride (BF3) can be used. [4] They function as electrophilic activators for the hydroxyl group by improving the quality of the leaving group.
For example, when PCl5 is used as a reagent, the trityl hydroxylamine is first transformed into the activated intermediate via a nucleophilic substitution. [18] The generated intermediate can then undergo rearrangement by migration of the phenyl group and dissociation of the phosphorus(V) species to form N-phenyl benzophenone imine. [18] In addition to N-hydroxy trityl amines, rearrangements of N-alkoxy trityl amines are also possible. However, those reactions are known for their intrinsically low yields. [19] For example, N-benzyloxy-substituted trityl amine can undergo a Stieglitz rearrangement in the presence of phosphorus pentachloride (160 °C, 40% yield) or with BF3 as a reagent (60 °C, 29% yield). [20] In the latter case, BF3 acts as a Lewis acid in the electrophilic activation of the benzylic oxygen to allow for a nucleophilic attack on the adjacent nitrogen atom. [20]

Stieglitz rearrangements also readily proceed with active sulfonates as a leaving group. [21] N-sulfonated amines can be obtained from the respective hydroxylamines and suitable sulfonation reagents. For example, Herderin et al. prepared the N-tosylated starting material for the rearrangement by subjecting the respective secondary hydroxylamine to tosyl chloride and sodium hydroxide in acetonitrile. [22] The Stieglitz rearrangement is especially facile in the case of bridged bicyclic N-sulfonated amines as starting materials, where mild conditions are sufficient for an efficient reaction to take place. [23] For example, the rearrangement of the bicyclic N-tosylated amine proceeds readily in aqueous dioxane at room temperature. [24] However, the respective imine is not formed in this case, presumably because of the strain that would thermodynamically disfavor such a structure, which would bear a double bond at a bridgehead atom (Bredt's rule). [25] Instead, the tosylate adds nucleophilically at the position geminal to the nitrogen via an attack on the iminium ion. [22]

Stieglitz rearrangements can also proceed on organic azides, with molecular nitrogen as a good leaving group. [4] Those reactions proceed comparably to steps of the Schmidt reaction, by which carboxylic acids can be transformed into amines through the addition of hydrazoic acid under acidic aqueous conditions. [26] The Stieglitz rearrangement of azides generally profits from protonic [16] or thermal [4] activation, which can also be combined. [10] In both cases, molecular nitrogen is set free as a gas in an irreversible step. It has been suggested that the rearrangement, after dissociation of the N2 molecule, proceeds over a reactive nitrene intermediate. [10] These intermediates would be quite similar to those that had been proposed as key intermediates in the rearrangement reactions named after Hofmann and Curtius, [27] but have since been considered unlikely. [28] When the azide is subjected to a Brønsted acid, protonation activates the basal nitrogen and lowers the bond strength to the adjacent one, so that the dissociation and expulsion of molecular nitrogen is eased. [16] After the rearrangement, the proton can then dissociate from the iminium ion to yield the imine. An alternative way of producing protonated organic azides is the nucleophilic addition of hydrazoic acid to a carbocation, which can then also undergo a Stieglitz rearrangement. [16]
The Stieglitz rearrangement of N-halogenated amines can be observed for chlorine- [7] and bromine-substituted [8] amines, often in combination with an organic base, such as sodium methoxide. [4] The need for a base is generally associated with the need for deprotonation of the amine. [4] However, base-free Stieglitz rearrangements of N-halogenated amines have also been reported. An example can be found in the total synthesis of (±)-lycopodine by Paul Grieco et al. [6][29] There, a ring formation takes place by a rearrangement of a secondary haloamine upon treatment with silver tetrafluoroborate. [6] AgBF4 is known to act as a source of Ag+ ions that can facilitate the dissociation of halides from organic molecules, with the formation of the respective silver halide as a driving force. [30] The desired product is then obtained by reduction with sodium cyanoborohydride, a mild reducing agent commonly employed in the reduction of imines to amines. [31] It was also observed that the addition of lead tetraacetate can facilitate the Stieglitz rearrangement of amine derivatives. [32] After formation of the activated amine-derivative intermediate by coordination to the lead center, the subsequent rearrangement again proceeds via migration of the aromatic group under formation of a C–N bond, dissociation of lead, and deprotonation of the resulting iminium ion. [33]
https://en.wikipedia.org/wiki/Stieglitz_rearrangement
In mathematics, the Stieltjes constants are the numbers $\gamma_k$ that occur in the Laurent series expansion of the Riemann zeta function: $$\zeta(s) = \frac{1}{s-1} + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,\gamma_n\,(s-1)^n.$$ The constant $\gamma_0 = \gamma = 0.577\dots$ is known as the Euler–Mascheroni constant.

The Stieltjes constants are given by the limit $$\gamma_n = \lim_{m\to\infty}\left\{\sum_{k=1}^{m}\frac{(\ln k)^n}{k} - \frac{(\ln m)^{n+1}}{n+1}\right\}.$$ (In the case n = 0, the first summand requires evaluation of $0^0$, which is taken to be 1.) Cauchy's differentiation formula leads to the integral representation $$\gamma_n = \frac{(-1)^n\, n!}{2\pi}\int_0^{2\pi} e^{-nix}\,\zeta\!\left(e^{ix}+1\right)dx.$$ Various representations in terms of integrals and infinite series are given in the works of Jensen, Franel, Hermite, Hardy, Ramanujan, Ainsworth, Howell, Coppo, Connon, Coffey, Choi, Blagouchine and some other authors. [1][2][3][4][5][6] In particular, the Jensen–Franel integral formula, often erroneously attributed to Ainsworth and Howell, states that $$\gamma_n = \frac{1}{2}\delta_{n,0} + \frac{1}{i}\int_0^\infty \frac{dx}{e^{2\pi x}-1}\left\{\frac{\big(\ln(1-ix)\big)^n}{1-ix} - \frac{\big(\ln(1+ix)\big)^n}{1+ix}\right\},$$ where $\delta_{n,k}$ is the Kronecker symbol (Kronecker delta). [5][6] For further integral formulae, see [1][5][7].

As concerns series representations, a famous series involving the integer part of a logarithm was given by Hardy in 1912. [8] Israilov [9] gave semi-convergent series in terms of the Bernoulli numbers $B_{2k}$. Connon, [10] Blagouchine [6][11] and Coppo [1] gave several series with binomial coefficients and the Gregory coefficients $G_n$, also known as reciprocal logarithmic numbers ($G_1 = +1/2$, $G_2 = -1/12$, $G_3 = +1/24$, $G_4 = -19/720$, ...). More general series of the same nature involve the Bernoulli polynomials of the second kind $\psi_n(a)$ and the polynomials $N_{n,r}(a)$ defined by a generating equation, with $N_{n,1}(a) = \psi_n(a)$. [11][12] Oloa and Tauraso [13] showed that series with harmonic numbers may lead to Stieltjes constants. Blagouchine [6] obtained slowly convergent series involving the unsigned Stirling numbers of the first kind, as well as semi-convergent series with rational terms only. In particular, a series for the first Stieltjes constant involving the harmonic numbers $H_n$ has a surprisingly simple form. [6] More complicated series for the Stieltjes constants are given in the works of Lehmer, Liang, Todd, Lavrik, Israilov, Stankus, Keiper, Nan-You, Williams and Coffey. [2][3][6]

The Stieltjes constants satisfy a bound given by Berndt in 1972. [14] Better bounds in terms of elementary functions were obtained by Lavrik, [15] by Israilov [9] (in terms of constants C(k) with C(1) = 1/2, C(2) = 7/12, ...), by Nan-You and Williams, [16] by Blagouchine [6] (in terms of the Bernoulli numbers $B_n$), and by Matsuoka. [17][18] As concerns estimates resorting to non-elementary functions and solutions, Knessl, Coffey [19] and Fekih-Ahmed [20] obtained quite accurate results. For example, Knessl and Coffey give a formula that approximates the Stieltjes constants relatively well for large n: it is built from the unique solution $v$ of an associated transcendental equation with $0 < v < \pi/2$ and the auxiliary quantity $u = v\tan v$. [19] Up to n = 100000, the Knessl–Coffey approximation correctly predicts the sign of $\gamma_n$ with the single exception of n = 137. [19]

In 2022 K. Maślanka [21] gave an asymptotic expression for the Stieltjes constants which is both simpler and more accurate than those previously known. In particular, it reproduces with a relatively small error the troublesome value for n = 137.
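Numerically, the Stieltjes constants are available in the mpmath Python library as mpmath.stieltjes; the following minimal check evaluates the first two constants and confirms that $\gamma_0$ agrees with the Euler–Mascheroni constant.

```python
from mpmath import mp, stieltjes, euler

mp.dps = 30                # work with 30 significant digits
print(stieltjes(0))        # gamma_0 = 0.57721566490153286060651209008...
print(stieltjes(1))        # gamma_1 = -0.0728158454836767248605863758...
assert mp.almosteq(stieltjes(0), +euler)   # gamma_0 is Euler-Mascheroni
```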
Namely, for $n \gg 1$ the expansion is expressed in terms of the saddle points $s_n$, which are given explicitly by means of the Lambert function $W$ and a certain constant $c$. Defining a complex "phase" $\varphi_n$, one obtains a particularly simple expression in which both the rapidly increasing amplitude and the oscillations are clearly seen. The first few values are tabulated in [22]. For large n, the Stieltjes constants grow rapidly in absolute value and change signs in a complex pattern. Further information related to the numerical evaluation of the Stieltjes constants may be found in the works of Keiper, [23] Kreminski, [24] Plouffe, [25] Johansson [26][27] and Blagouchine. [27] First, Johansson provided values of the Stieltjes constants up to n = 100000, accurate to over 10000 digits each (the numerical values can be retrieved from the LMFDB [1]). Later, Johansson and Blagouchine devised a particularly efficient algorithm for computing generalized Stieltjes constants (see below) for large n and complex a, which can also be used for the ordinary Stieltjes constants. [27] In particular, it allows one to compute $\gamma_n$ to 1000 digits in a minute for any n up to $n = 10^{100}$.

More generally, one can define Stieltjes constants $\gamma_n(a)$ that occur in the Laurent series expansion of the Hurwitz zeta function: $$\zeta(s,a) = \frac{1}{s-1} + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,\gamma_n(a)\,(s-1)^n.$$ Here a is a complex number with Re(a) > 0. Since the Hurwitz zeta function is a generalization of the Riemann zeta function, we have $\gamma_n(1) = \gamma_n$. The zeroth constant is simply the digamma function, $\gamma_0(a) = -\Psi(a)$, [28] while the other constants are not known to be reducible to any elementary or classical function of analysis. Nevertheless, there are numerous representations for them. For example, there exists an asymptotic representation due to Berndt and Wilton, and the analogue of the Jensen–Franel formula for the generalized Stieltjes constants is the Hermite formula. [5] Similar integral representations are also known. [27] The generalized Stieltjes constants satisfy a recurrence relation as well as a multiplication theorem involving the binomial coefficients $\binom{p}{r}$ (see [29] and, [30] pp. 101–102).

The first generalized Stieltjes constant has a number of remarkable properties. Notably, there is a closed-form reflection formula relating $\gamma_1(m/n)$ and $\gamma_1(1-m/n)$, where m and n are positive integers with m < n. This formula has long been attributed to Almkvist and Meurman, who derived it in the 1990s; [31] however, it was recently reported that this identity, albeit in a slightly different form, was first obtained by Carl Malmsten in 1846; see Blagouchine. [5][28][32] An alternative proof was later proposed by Coffey [33] and several other authors. For more details and further summation formulae, see [5][30]. The values of the first generalized Stieltjes constant at the points 1/4, 3/4 and 1/3 were independently obtained by Connon [34] and Blagouchine, [30] and the values at the points 2/3, 1/6 and 5/6 were calculated by Blagouchine, [30] to whom several further closed-form evaluations are also due.

The second generalized Stieltjes constant is much less studied than the first. Similarly to the first generalized Stieltjes constant, the second generalized Stieltjes constant at rational argument may be evaluated via an explicit formula; see Blagouchine. [5] An equivalent result was later obtained by Coffey by another method. [33]
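mpmath's stieltjes also accepts a second argument for the generalized constants $\gamma_n(a)$; the short sketch below verifies the identity $\gamma_0(a) = -\Psi(a)$ quoted above (the sample point a = 3/4 is an arbitrary choice of ours):

```python
from mpmath import mp, stieltjes, digamma

mp.dps = 25
a = mp.mpf(3) / 4
# The zeroth generalized constant is minus the digamma function:
assert mp.almosteq(stieltjes(0, a), -digamma(a))
print(stieltjes(1, a))   # first generalized Stieltjes constant at a = 3/4
```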
https://en.wikipedia.org/wiki/Stieltjes_constants
In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence ($m_0$, $m_1$, $m_2$, ...) to be of the form $$m_n = \int_0^\infty x^n \, d\mu(x)$$ for some measure μ. If such a measure μ exists, one asks whether it is unique. The essential difference between this and other well-known moment problems is that this is on a half-line [0, ∞), whereas in the Hausdorff moment problem one considers a bounded interval [0, 1], and in the Hamburger moment problem one considers the whole line (−∞, ∞).

Let $$\Delta_n = \begin{pmatrix} m_0 & m_1 & \cdots & m_n \\ m_1 & m_2 & \cdots & m_{n+1} \\ \vdots & \vdots & \ddots & \vdots \\ m_n & m_{n+1} & \cdots & m_{2n} \end{pmatrix}$$ be a Hankel matrix, and $$\Delta_n^{(1)} = \begin{pmatrix} m_1 & m_2 & \cdots & m_{n+1} \\ m_2 & m_3 & \cdots & m_{n+2} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n+1} & m_{n+2} & \cdots & m_{2n+1} \end{pmatrix}.$$ Then ($m_0$, $m_1$, $m_2$, ...) is a moment sequence of some measure on $[0,\infty)$ with infinite support if and only if, for all n, both $$\det \Delta_n > 0 \quad \text{and} \quad \det \Delta_n^{(1)} > 0.$$ It is a moment sequence of some measure on $[0,\infty)$ with finite support of size m if and only if the determinants $\det \Delta_n$ and $\det \Delta_n^{(1)}$ are strictly positive for all small n, up to a cutoff determined by m, and vanish for all larger n.

There are several sufficient conditions for uniqueness, for example, Carleman's condition, which states that the solution is unique if $$\sum_{n \geq 1} m_n^{-1/(2n)} = \infty.$$
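The Hankel-determinant criterion is easy to test numerically. The sketch below (helper name ours) applies it to the sequence $m_n = n!$, the moments of the measure $e^{-x}\,dx$ on [0, ∞), for which all the determinants should be strictly positive:

```python
import math
import numpy as np

def hankel_dets(m, shift, count):
    """Determinants of the (n+1)x(n+1) Hankel matrices [m_{i+j+shift}]
    for n = 0, ..., count-1."""
    dets = []
    for n in range(count):
        H = np.array([[m[i + j + shift] for j in range(n + 1)]
                      for i in range(n + 1)], dtype=float)
        dets.append(np.linalg.det(H))
    return dets

moments = [math.factorial(n) for n in range(10)]  # moments of e^{-x} dx
print(hankel_dets(moments, 0, 4))  # det(Delta_n)     -> all positive
print(hankel_dets(moments, 1, 4))  # det(Delta_n^(1)) -> all positive
```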
https://en.wikipedia.org/wiki/Stieltjes_moment_problem
A Stiff diagram, or Stiff pattern, is a graphical representation of chemical analyses, first developed by H.A. Stiff in 1951. It is widely used by hydrogeologists and geochemists to display the major ion composition of a water sample. A polygonal shape is created from four parallel horizontal axes extending on either side of a vertical zero axis. Cations are plotted in milliequivalents per liter on the left side of the zero axis, one to each horizontal axis, and anions are plotted on the right side. Stiff patterns are useful in making a rapid visual comparison between waters from different sources. An alternative to the Stiff diagram is the Maucha diagram. Stiff diagrams can be used to help visualize ionically related waters, from which a flow path can be determined, or, if the flow path is known, to show how the ionic composition of a water body changes over space and time.

By standard convention, Stiff diagrams are created by plotting the equivalent concentration of the cations to the left of the center axis and anions to the right. The points are connected to form the figure. When comparing Stiff diagrams between different waters, it is important to prepare each diagram using the same ionic species, in the same order, on the same scale. Environmental laboratories typically report concentrations for anion and cation parameters using units of mass/volume, usually mg/L. To convert the mass concentration to an equivalent concentration, the following mathematical relationship is used: $$\text{equivalent concentration (meq/L)} = \frac{\text{concentration (mg/L)} \times |\text{ionic charge}|}{\text{formula weight (g/mol)}}.$$ For example, a water with a calcium concentration of 120 mg/L would have the following calcium equivalent concentration: $$\frac{120\ \text{mg/L} \times 2}{40.08\ \text{g/mol}} \approx 5.99\ \text{meq/L}.$$
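The conversion is straightforward to script; this small sketch (function name ours) reproduces the calcium example:

```python
def to_meq_per_l(mg_per_l, formula_weight, charge):
    """Convert mass concentration (mg/L) to equivalent concentration
    (meq/L): multiply by the ion's charge, divide by formula weight."""
    return mg_per_l * abs(charge) / formula_weight

# 120 mg/L of calcium (Ca2+, formula weight 40.08 g/mol, charge +2):
print(to_meq_per_l(120, 40.08, +2))   # ~5.99 meq/L
```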
https://en.wikipedia.org/wiki/Stiff_diagram
Stiffening is any process that increases the rigidity and structural integrity of objects. Stiffening is used in crafts, art, industry, architecture, sports, aerospace, object construction, bookbinding, etc. In mechanics, "stiffening" beams brings anti-buckling, anti-wrinkling, desired shaping, reinforcement, repair, strength, enhanced function, extended utility, longer beam life, safety, etc. Stiffening of fluid or rigid beams is used in the medical arts, aerospace, aviation, sports, bookbinding, art, architecture, natural plants and trees, the construction industry, bridge building, [1] and more. Mechanical methods for stiffening include tension stiffening, [2] centrifugal stiffening, [3] bracing, superstructure bracing, substructure bracing, straightening, strain stiffening, stress stiffening, [4] damping vibrations, swelling, pressure increasing, drying, cooling, interior reinforcing, exterior reinforcing, wrapping, surface treating, or combinations of these and other methods. Beams under bending loads or compression invite stiffening to stop buckling or collapse while fulfilling desired functions, purposes, and benefits.

In bookbinding, stiffening is a process whereby paperback books are reinforced for use in libraries, without change to their fundamental binding structure. It is in use at several academic libraries in the United States, including those at Cornell University and Johns Hopkins University. During the stiffening process, a cloth or Tyvek strip is glued down on the inside joints of a paperback to reinforce the attachment of the book's covers. A thin but stiff board is then glued to the inside of both the front and back cover of the book, and the entire book is trimmed slightly on the head, tail, and fore edge, often with an electric guillotine. Stiffening provides an in-house, inexpensive alternative to commercial library binding for paperbacks. While it does not involve (re-)sewing a book as in a library binding, stiffening does significantly prolong the usable life of a paperback, and allows paperbacks to stand upright on library shelves. The stiffening process was invented in 1974 by John Dean, who was Head of Preservation at Johns Hopkins at the time.
https://en.wikipedia.org/wiki/Stiffening
Stiffness is the extent to which an object resists deformation in response to an applied force. [1] The complementary concept is flexibility or pliability: the more flexible an object is, the less stiff it is. [2]

The stiffness, $k$, of a body is a measure of the resistance offered by an elastic body to deformation. For an elastic body with a single degree of freedom (DOF) (for example, stretching or compression of a rod), the stiffness is defined as $$k = \frac{F}{\delta},$$ where $F$ is the force applied on the body and $\delta$ is the displacement produced by the force along the same degree of freedom. Stiffness is usually defined under quasi-static conditions, but sometimes under dynamic loading. [3] In the International System of Units, stiffness is typically measured in newtons per meter (N/m). In Imperial units, stiffness is typically measured in pounds (lbs) per inch.

Generally speaking, deflections (or motions) of an infinitesimal element (which is viewed as a point) in an elastic body can occur along multiple DOF (a maximum of six DOF at a point). For example, a point on a horizontal beam can undergo both a vertical displacement and a rotation relative to its undeformed axis. When there are $M$ degrees of freedom, an $M \times M$ matrix must be used to describe the stiffness at the point. The diagonal terms in the matrix are the direct-related stiffnesses (or simply stiffnesses) along the same degree of freedom, and the off-diagonal terms are the coupling stiffnesses between two different degrees of freedom (either at the same or different points) or the same degree of freedom at two different points. In industry, the term influence coefficient is sometimes used to refer to the coupling stiffness. Note that for a body with multiple DOF, the equation above generally does not apply, since the applied force generates not only the deflection along its own direction (or degree of freedom) but also deflections along other directions. For a body with multiple DOF, to calculate a particular direct-related stiffness (a diagonal term), the corresponding DOF is left free while the remaining ones are constrained. Under such a condition, the above equation yields the direct-related stiffness for the unconstrained degree of freedom. The ratios between the reaction forces (or moments) and the produced deflections are the coupling stiffnesses. The elasticity tensor is a generalization that describes all possible stretch and shear parameters.

A single spring may intentionally be designed to have variable (non-linear) stiffness throughout its displacement. The inverse of stiffness is flexibility or compliance, typically measured in units of metres per newton. In rheology, it may be defined as the ratio of strain to stress, [4] and so takes the units of reciprocal stress, for example, 1/Pa.

A body may also have a rotational stiffness, $k$, given by $$k = \frac{M}{\theta},$$ where $M$ is the applied moment and $\theta$ is the rotation. In the SI system, rotational stiffness is typically measured in newton-metres per radian. In the SAE system, rotational stiffness is typically measured in inch-pounds per degree. Further measures of stiffness are derived on a similar basis, including shear stiffness (the ratio of applied shear force to shear deformation) and torsional stiffness (the ratio of applied torsion moment to angle of twist).

The elastic modulus of a material is not the same as the stiffness of a component made from that material. Elastic modulus is a property of the constituent material; stiffness is a property of a structure or component of a structure, and hence it is dependent upon various physical dimensions that describe that component.
That is, the modulus is an intensive property of the material; stiffness, on the other hand, is an extensive property of the solid body that is dependent on the material and its shape and boundary conditions. For example, for an element in tension or compression, the axial stiffness is $$k = \frac{EA}{L},$$ where $E$ is the (tensile) elastic modulus (Young's modulus), $A$ is the cross-sectional area, and $L$ is the length of the element. Similarly, the torsional stiffness of a straight section is $$k = \frac{GJ}{L},$$ where $G$ is the shear modulus and $J$ is the torsion constant of the section. Note that the torsional stiffness has dimensions [force] × [length] / [angle], so that its SI units are N·m/rad.

For the special case of unconstrained uniaxial tension or compression, Young's modulus can be thought of as a measure of the stiffness of a structure. The stiffness of a structure is of principal importance in many engineering applications, so the modulus of elasticity is often one of the primary properties considered when selecting a material. A high modulus of elasticity is sought when deflection is undesirable, while a low modulus of elasticity is required when flexibility is needed.

In biology, the stiffness of the extracellular matrix is important for guiding the migration of cells in a phenomenon called durotaxis. Another application of stiffness is found in skin biology. The skin maintains its structure due to its intrinsic tension, contributed to by collagen, an extracellular protein that accounts for approximately 75% of its dry weight. [5] The pliability of skin is a parameter of interest that represents its firmness and extensibility, encompassing characteristics such as elasticity, stiffness, and adherence. These factors are of functional significance to patients. [6] This is of particular significance to patients with traumatic injuries to the skin, whereby the pliability can be reduced due to the formation and replacement of healthy skin tissue by a pathological scar. This can be evaluated either subjectively, or objectively using a device such as the Cutometer. The Cutometer applies a vacuum to the skin and measures the extent to which it can be vertically distended. These measurements are able to distinguish between healthy skin, normal scarring, and pathological scarring, [7] and the method has been applied within clinical and industrial settings to monitor both pathophysiological sequelae and the effects of treatments on skin.
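The axial and torsional formulas above translate directly into code. The sketch below uses illustrative values of our own choosing (a 1 m steel rod of 10 mm diameter; the material constants are assumptions, not taken from the article):

```python
import math

def axial_stiffness(E, A, L):
    """k = E*A/L for an element in tension or compression (N/m)."""
    return E * A / L

def torsional_stiffness(G, J, L):
    """k = G*J/L for a straight section (N*m/rad)."""
    return G * J / L

# Illustrative values: a 1 m steel rod, 10 mm in diameter.
E = 200e9                      # Young's modulus, Pa (assumed)
G = 79e9                       # shear modulus, Pa (assumed)
d = 0.010                      # diameter, m
A = math.pi * d**2 / 4         # cross-sectional area, m^2
J = math.pi * d**4 / 32        # torsion constant of a circular section, m^4
print(axial_stiffness(E, A, 1.0))      # ~1.57e7 N/m
print(torsional_stiffness(G, J, 1.0))  # ~78 N*m/rad
```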
https://en.wikipedia.org/wiki/Stiffness
In the finite element method for the numerical solution of elliptic partial differential equations, the stiffness matrix is a matrix that represents the system of linear equations that must be solved in order to ascertain an approximate solution to the differential equation.

For simplicity, we will first consider the Poisson problem $$-\nabla^2 u = f$$ on some domain Ω, subject to the boundary condition u = 0 on the boundary of Ω. To discretize this equation by the finite element method, one chooses a set of basis functions {φ_1, ..., φ_n} defined on Ω which also vanish on the boundary. One then approximates $$u \approx u^h = \sum_{k=1}^{n} u_k \varphi_k.$$ The coefficients $u_1, u_2, \ldots, u_n$ are determined so that the error in the approximation is orthogonal to each basis function $\varphi_i$: $$\int_\Omega \nabla\varphi_i \cdot \nabla u^h \, dx = \int_\Omega \varphi_i f \, dx, \qquad i = 1, \ldots, n,$$ where the boundary terms arising from integration by parts vanish as a consequence of the homogeneous Dirichlet boundary conditions. The stiffness matrix is the n-element square matrix A defined by $$A_{ij} = \int_\Omega \nabla\varphi_i \cdot \nabla\varphi_j \, dx.$$ By defining the vector F with components $F_i = \int_\Omega \varphi_i f \, dx$, the coefficients $u_i$ are determined by the linear system Au = F. The stiffness matrix is symmetric, i.e. $A_{ij} = A_{ji}$, so all its eigenvalues are real. Moreover, it is a strictly positive-definite matrix, so that the system Au = F always has a unique solution. (For other problems, these nice properties will be lost.) Note that the stiffness matrix will be different depending on the computational grid used for the domain and what type of finite element is used. For example, the stiffness matrix when piecewise quadratic finite elements are used will have more degrees of freedom than piecewise linear elements.

Determining the stiffness matrix for other PDEs follows essentially the same procedure, but it can be complicated by the choice of boundary conditions. As a more complex example, consider the elliptic equation $$-\sum_{k,l} \frac{\partial}{\partial x^k}\!\left(a^{kl}(x)\,\frac{\partial u}{\partial x^l}\right) = f,$$ where $\mathbf{A}(x) = a^{kl}(x)$ is a positive-definite matrix defined for each point x in the domain. We impose a Robin boundary condition relating u to its conormal derivative on the boundary, where $\nu_k$ is the component of the unit outward normal vector ν in the k-th direction. The system to be solved can be derived using an analogue of Green's identity. The coefficients $u_i$ are still found by solving a system of linear equations, but the matrix representing the system is markedly different from that for the ordinary Poisson problem.

In general, to each scalar elliptic operator L of order 2k, there is associated a bilinear form B on the Sobolev space $H^k$, so that the weak formulation of the equation Lu = f is $$B(u, v) = \int_\Omega f v \, dx \quad \text{for all functions } v \text{ in } H^k.$$ Then the stiffness matrix for this problem is $$A_{ij} = B(\varphi_j, \varphi_i).$$

In order to implement the finite element method on a computer, one must first choose a set of basis functions and then compute the integrals defining the stiffness matrix. Usually, the domain Ω is discretized by some form of mesh generation, wherein it is divided into non-overlapping triangles or quadrilaterals, which are generally referred to as elements. The basis functions are then chosen to be polynomials of some order within each element, and continuous across element boundaries. The simplest choices are piecewise linear for triangular elements and piecewise bilinear for rectangular elements. The element stiffness matrix $A^{[k]}$ for element $T_k$ is the matrix $$A^{[k]}_{ij} = \int_{T_k} \nabla\varphi_i \cdot \nabla\varphi_j \, dx.$$ The element stiffness matrix is zero for most values of i and j, for which the corresponding basis functions are zero within $T_k$. The full stiffness matrix A is the sum of the element stiffness matrices.
In particular, for basis functions that are only supported locally, the stiffness matrix is sparse. For many standard choices of basis functions, i.e. piecewise linear basis functions on triangles, there are simple formulas for the element stiffness matrices. For example, for piecewise linear elements, consider a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, and define the 2×3 matrix $$B = \begin{pmatrix} x_2 - x_3 & x_3 - x_1 & x_1 - x_2 \\ y_2 - y_3 & y_3 - y_1 & y_1 - y_2 \end{pmatrix}.$$ Then the element stiffness matrix is $$A^{[k]} = \frac{1}{4\,\mathrm{area}(T_k)}\, B^{\mathsf{T}} B.$$ When the differential equation is more complicated, say by having an inhomogeneous diffusion coefficient, the integral defining the element stiffness matrix can be evaluated by Gaussian quadrature.

The condition number of the stiffness matrix depends strongly on the quality of the numerical grid. In particular, triangles with small angles in the finite element mesh induce large eigenvalues of the stiffness matrix, degrading the solution quality.
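The element-stiffness formula above can be checked in a few lines of NumPy (helper name ours); on the reference right triangle the result has the expected symmetry and zero row sums, reflecting that constant functions lie in the kernel:

```python
import numpy as np

def element_stiffness(x, y):
    """Element stiffness matrix of the Laplace operator on the triangle
    with vertices (x[i], y[i]), for piecewise linear basis functions."""
    B = np.array([[x[1] - x[2], x[2] - x[0], x[0] - x[1]],
                  [y[1] - y[2], y[2] - y[0], y[0] - y[1]]], dtype=float)
    area = 0.5 * abs((x[1] - x[0]) * (y[2] - y[0])
                     - (x[2] - x[0]) * (y[1] - y[0]))
    return (B.T @ B) / (4.0 * area)

K = element_stiffness([0, 1, 0], [0, 0, 1])  # reference right triangle
print(K)                              # symmetric, positive semidefinite
assert np.allclose(K.sum(axis=1), 0)  # constants lie in the kernel
```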
https://en.wikipedia.org/wiki/Stiffness_matrix
Stilbonematinae is a subfamily of the nematode worm family Desmodoridae that is notable for its symbiosis with sulfur-oxidizing bacteria. Stilbonematinae Chitwood, 1936 belongs to the family Desmodoridae in the order Desmodorida. Nine genera have been described. [1] Stilbonematines can be up to 10 mm long, with a club-like head. The worms are completely covered in a coat of ectosymbiotic sulfur-oxidizing bacteria except for the anterior region. The presence of the bacteria, which often contain intracellular inclusions of elemental sulfur, gives the worms a bright white appearance under incident light. They have small mouths and buccal cavities, and short pharynges. Many species have multicellular sensory-glandular organs in longitudinal rows along the length of the body, which secrete mucus in which the bacterial symbionts are embedded. [1] Stilbonematines are found in the meiofaunal habitat in marine environments. [2] Another group of meiofaunal nematodes with sulfur-oxidizing symbionts is the genus Astomonema, although in Astomonema the bacteria are endo- rather than ectosymbionts.

The bacterial symbionts of stilbonematines are of different shapes and sizes, ranging from small coccoid cells to elongate crescent-like cells, but each host species has only a single morphological type associated with it. [3] The bacterial symbionts of stilbonematines are closely related to the sulfur-oxidizing symbionts of gutless phallodriline oligochaete worms: these bacteria all descended from a single ancestor, and each host species has its own specific bacterial species. [4] The bacterial symbionts are chemosynthetic, gaining energy by oxidizing sulfide from the environment and producing biomass by fixing carbon dioxide through the Calvin–Benson–Bassham cycle. [3] The bacteria benefit from the symbiosis because the host animal can migrate between sulfide- and oxygen-rich regions of the sediment habitat, and the bacteria require both of these chemical substances to produce energy. The hosts are believed to consume the bacteria as a food source, based on evidence from their stable carbon isotope ratios. [5] The specificity of the bacterial symbionts to their respective host species is controlled by a lectin called Mermaid that is produced by the worms. Mermaid occurs in different isoforms, which have differing affinities for the sugar compositions of the lipopolysaccharide coats of different bacterial species. [6]
https://en.wikipedia.org/wiki/Stilbonematinae