The Sir Hans Krebs Lecture and Medal is awarded annually by the Federation of European Biochemical Societies (FEBS) for outstanding achievements in Biochemistry and Molecular Biology or related sciences. It was endowed by the Lord Rank Centre for Research and named after the German-born British biochemist Sir Hans Adolf Krebs , well known for identifying the urea and citric acid cycles . The awardee receives a silver medal and presents one of the plenary lectures at the FEBS Congress. [ 1 ]
https://en.wikipedia.org/wiki/Sir_Hans_Krebs_Medal
Sir John Conroy, 3rd Baronet , FRS (16 August 1845 – 15 December 1900) was an English analytical chemist . Conroy was born in Kensington , west London , the son of Sir Edward Conroy, 2nd Baronet (1809–1869) and Lady Alicia Conroy. He was descended from the Ó Maolconaire family of Elphin , County Roscommon . The family had been the hereditary Ollamhs to the O'Conor Kings of Connacht . He was descended from Maoilin Ó Maolchonaire who was the last recognised Chief of the Sept . [ 2 ] He was educated at Eton College and then Christ Church, Oxford , also the college of his father, where he read Natural Science , gaining a first class degree in 1868. [ 3 ] His tutor was the chemical kinetics pioneer Augustus George Vernon Harcourt FRS. He lived mostly with his mother at Arborfield Grange in Berkshire until 1880. [ 3 ] His scientific interests were in analytical chemistry , especially optical measurements . He worked mainly in a laboratory at Christ Church in Oxford . He had teaching posts at Keble College (1880–90), and Balliol College and Trinity College (1886–1900). He worked at the Balliol-Trinity Laboratories with Sir Harold Hartley and others. [ 4 ] In 1890, he became a Fellow of Balliol College. [ 5 ] In 1891, he was elected a Fellow of the Royal Society . Conroy's religious leanings were High Church and he was involved with the English Church Union . From 1897, he was treasurer of the Radcliffe Infirmary in Oxford. Conroy never married and died in Rome . His baronetcy became extinct as a result.
https://en.wikipedia.org/wiki/Sir_John_Conroy,_3rd_Baronet
The Sir William Dunn Professorship of Biochemistry is the senior professorship in biochemistry at the University of Cambridge . The position was established in 1914 by the trustees of the will of Sir William Dunn , banker, merchant and philanthropist. [ 1 ] The first holder of the chair was Frederick Gowland Hopkins , winner of the 1929 Nobel Prize in Medicine for his work on the discovery of vitamins .
https://en.wikipedia.org/wiki/Sir_William_Dunn_Professor_of_Biochemistry
The siriometer is an obsolete astronomical unit of length , defined to be equal to one million astronomical units (au). [ 1 ] [ 2 ] One siriometer is approximately 149.6 petametres ; 4.848 parsecs ; 15.81 light-years . The distance from Earth to the star Sirius is then approximately 0.54 siriometers. [ 3 ] The unit was proposed in 1911 by Carl V. L. Charlier , [ 3 ] who worked on stellar statistics. [ 4 ] Charlier originally used the symbol 'sir' [ 1 ] but the symbol 'Sm' has also seen use. [ 5 ] The siriometer never gained widespread usage. Frank Dyson (the Astronomer Royal ) objected to the name siriometer, because "it suggests a machine for measuring". [ 6 ] The first General Assembly of the International Astronomical Union in 1922 adopted the parsec as the standard unit of stellar distances, [ 7 ] which simplified the definition of absolute magnitude . [ 3 ] Use of the siriometer seems to have disappeared from the astronomical literature by c. 1930 . [ 3 ] Modern professional astronomers use the parsec as their primary unit for distances larger than the Solar System .
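For readers who want to check the figures quoted above, here is a minimal sketch of the conversions; the constants are standard modern definitions (IAU astronomical unit and parsec), and the script itself is only an illustration, not part of the historical definition.

# One siriometer = one million astronomical units; express it in metres, parsecs and light-years
au = 1.495978707e11                     # astronomical unit in metres (IAU 2012 definition)
pc = (648000 / 3.141592653589793) * au  # parsec in metres (IAU definition)
ly = 9.4607304725808e15                 # light-year in metres
sm = 1e6 * au                           # one siriometer in metres
print(sm / 1e15)        # ~149.6 petametres
print(sm / pc)          # ~4.848 parsecs
print(sm / ly)          # ~15.81 light-years
print((2.64 * pc) / sm) # Sirius at ~2.64 pc is ~0.54 siriometers away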
https://en.wikipedia.org/wiki/Siriometer
In Greek and Roman mythology and religion, Sirius (/ˈsɪriəs/, SEE-ree-əss; Ancient Greek: Σείριος, romanized: Seírios, lit. 'scorching', pronounced [sěːrios]) is the god and personification of the star Sirius, also known as the Dog Star, the brightest star in the night sky and the most prominent star in the constellation of Canis Major (or the Greater Dog). [ 1 ] In ancient Greek and Roman texts, Sirius is portrayed as the scorching bringer of the summer heatwaves, the bright star who intensifies the Sun's own heat. The ancient Greek word and proper noun Σείριος has been connected to the verb σείω (seíō), meaning 'to sparkle, to gleam', and thus has an Indo-European etymology; Furnée, on the other hand, compared it to the word τίριος (tírios), the Cretan word for summer, which, if correct, would mean that the word is pre-Greek instead. [ 2 ] From this name an ancient phrase was derived, σείριον πάθος (literally "sirian passion", meaning burning passion). [ 3 ] Sirius's divine parentage is not made entirely clear in ancient texts; in the Theogony the poet Hesiod names Eos (the dawn goddess) and her husband Astraeus (a star god) as the parents of all stars, although this usually referred to the 'wandering stars', that is, the five planets. [ 4 ] Sirius is first mentioned by name in Hesiod's Works and Days, [ 5 ] [ 6 ] although he is also strongly alluded to in Homer's Iliad, where his brilliance is used as a metaphor for the shining bronze armour of the soldiers, and at another point he is presented as an ominous death star foreshadowing the fate of the doomed Hector in his fight against Achilles. [ 7 ] Apollonius of Rhodes calls him "brilliant and beautiful but full of menace for the flocks," [ 8 ] and both Aratus and Quintus of Smyrna speak of his rise in conjunction with that of the Sun (the god Helios). [ 9 ] The Roman poet Statius says: Tempus erat, caeli cum torrentissimus axis incumbit terris ictusque Hyperione multo acer anhelantes incendit Sirius agros. 'Twas the season when the vault of heaven bends its most scorching heat upon the earth, and Sirius the Dog-star smitten by Hyperion's full might pitilessly burns the panting fields. [ 10 ] In addition, "Sirius" was sometimes used as an epithet of Helios himself due to the Sun's great heat and warmth. [ 11 ] [ 12 ] Sirius and his appearance in the sky in July and August were associated with heat, fire and fever by the ancient Greeks from early on, [ 13 ] as was his association with dogs; as the chief star in the constellation Canis Major, he was referred to as 'the Dog', a name which also referred to the entire constellation. [ 14 ] The arrival of Sirius in the sky was seen as the cause behind the hot, dry days of summer; dogs were thought to be the most affected by Sirius's heat, which caused rapid panting and aggressive behaviour towards humans, who were in danger of contracting rabies from their bites. [ 15 ] Sirius, a luminous star brighter than the Sun, is often described as red in some ancient Greek and Roman texts, put in the same category as the red-shining Mars and Antares, although in reality it is a blue-white star. [ 16 ] In a lesser-known narrative, back when the stars walked the earth, Sirius was sent on a mission on land. There he met and fell madly in love with Opora, the goddess of fruit as well as of the transition between summer and autumn. He was, however, unable to be with her, so in anger he began to burn even hotter.
[ 17 ] The mortals started to suffer due to the immense heat, and pleaded with the gods. [ 18 ] Then the god of the north wind, Boreas, ordered his sons to bring Opora to Sirius, while he himself cooled off the earth with blasts of cold, freezing wind. [ 19 ] Sirius then went on to glow and burn hot every summer thereafter during harvest time in commemoration of this event and his great love, explaining the heat of the so-called dog days of summer, which was attributed to this star in antiquity. [ 20 ] The story is generally believed to have originated from a lost play entitled Opora, by the Athenian playwright of Middle Comedy Amphis, and a work of the same name by Amphis's contemporary Alexis. [ 19 ] It also parallels the tale of young Phaethon, the son of the sun-god Helios who drove his father's sun chariot for a day and ended up burning the earth with it, prompting all of nature to beg Zeus for salvation. [ 19 ] In Euripides's version of the story, Helios accompanies Phaethon on his journey riding on a steed named Sirius. [ 12 ] After the mortal hunter Orion was killed by the scorpion the earth-goddess Gaia sent to punish him, he was placed by the gods (usually either Artemis or Zeus) among the stars as the homonymous constellation, where he was ever accompanied by his faithful dog, [ 21 ] who was represented by Sirius (and Canis Major) in their new celestial lives. [ 22 ] [ 23 ] This belief seems to originate from the fact that the Dog forms a sky-picture with Orion, as the two hunt Lepus (the Hare) or the Teumessian fox through the sky. [ 20 ] Sirius is also identified with Maera (Ancient Greek: Μαῖρα, romanized: Maira, lit. 'sparkler'), which was another name for the dog star in antiquity. [ 24 ] In mythology Maera was the hound of Icarius, an old Athenian man who was taught the art of wine-making by Dionysus. When Icarius shared the wine with the other Athenians he was accused of poisoning them (due to the wine's intoxicating properties, which made them pass out) and he was thus killed in vengeance; his daughter Erigone, after being led to his corpse by Maera, took her own life by hanging. [ 25 ] Dionysus then transferred all three to the sky, with Maera becoming the star Canicula, which was the Romans' name for Sirius, [ 26 ] [ 27 ] although Hyginus himself claimed that the Greeks used Procyon for Canicula. [ 28 ] In second-century author Lucian's satirical work A True Story, the people of Sirius, here presented as an inhabited world, send an army of Cynobalani (dog-faced men mounted on gigantic winged acorns) to assist the Sun citizens in their war against the inhabitants of the Moon. [ 29 ] Sirius, associated with heat, is an appropriate ally for the kingdom of the Sun. [ 30 ] Not much evidence of Sirius's ancient cult is preserved. In antiquity, Sirius might have been venerated on the island of Kea with summer sacrifices in his honour during the Hellenistic period, [ 15 ] though doubts have been cast on whether such a cult did indeed exist; in any case, the cult surely did not predate the third century BC. [ 24 ] Keans would observe Sirius's rising from a hilltop; if the star rose clear and brilliant it was a good sign of health, but if it appeared faint or misty it was seen as ominous. Sirius was also represented on coinage from Kea. [ 15 ]
https://en.wikipedia.org/wiki/Sirius_(mythology)
Sirolimus , also known as rapamycin and sold under the brand name Rapamune among others, is a macrolide compound that is used to coat coronary stents , prevent organ transplant rejection , treat a rare lung disease called lymphangioleiomyomatosis , and treat perivascular epithelioid cell tumour (PEComa). [ 1 ] [ 2 ] [ 11 ] It has immunosuppressant functions in humans and is especially useful in preventing the rejection of kidney transplants. It is a mammalian target of rapamycin (mTOR) kinase inhibitor [ 2 ] that reduces the sensitivity of T cells and B cells to interleukin-2 (IL-2), inhibiting their activity. [ 12 ] This compound also has a use in cardiovascular drug-eluting stent technologies to inhibit restenosis . It is produced by the bacterium Streptomyces hygroscopicus and was isolated for the first time in 1972, from samples of Streptomyces hygroscopicus found on Easter Island . [ 13 ] [ 14 ] [ 15 ] The compound was originally named rapamycin after the native name of the island, Rapa Nui. [ 11 ] Sirolimus was initially developed as an antifungal agent. However, this use was abandoned when it was discovered to have potent immunosuppressive and antiproliferative properties due to its ability to inhibit mTOR . It was approved by the US Food and Drug Administration (FDA) in 1999. [ 16 ] Hyftor (sirolimus gel) was authorized for topical treatment of facial angiofibroma in the European Union in May 2023. [ 6 ] In the US, sirolimus, as Rapamune, is indicated for the prevention of organ transplant rejection [ 1 ] and for the treatment of lymphangioleiomyomatosis ; [ 1 ] and, as Fyarro, in the form of protein-bound particles, for the treatment of adults with locally advanced unresectable or metastatic malignant perivascular epithelioid cell tumour (PEComa). [ 2 ] In the EU, sirolimus, as Rapamune, is indicated for the prophylaxis of organ rejection in adults at low to moderate immunological risk receiving a renal transplant [ 4 ] [ 5 ] and for the treatment of people with sporadic lymphangioleiomyomatosis with moderate lung disease or declining lung function; [ 4 ] [ 5 ] and, as Hyftor, for the treatment of facial angiofibroma associated with tuberous sclerosis complex. [ 6 ] [ 7 ] The chief advantage sirolimus has over calcineurin inhibitors is its low toxicity toward kidneys. Transplant patients maintained on calcineurin inhibitors long-term tend to develop impaired kidney function or even kidney failure ; this can be avoided by using sirolimus instead. It is particularly advantageous in patients with kidney transplants for hemolytic-uremic syndrome , as this disease is likely to recur in the transplanted kidney if a calcineurin-inhibitor is used. However, on 7 October 2008, the FDA approved safety labeling revisions for sirolimus to warn of the risk for decreased renal function associated with its use. [ 17 ] [ 18 ] In 2009, the FDA notified healthcare professionals that a clinical trial conducted by Wyeth showed an increased mortality in stable liver transplant patients after switching from a calcineurin inhibitor-based immunosuppressive regimen to sirolimus. [ 19 ] A 2019 cohort study of nearly 10,000 lung transplant recipients in the US demonstrated significantly improved long-term survival using sirolimus + tacrolimus instead of mycophenolate mofetil + tacrolimus for immunosuppressive therapy starting at one year after transplant. 
[ 20 ] Sirolimus can also be used alone, or in conjunction with a calcineurin inhibitor (such as tacrolimus) and/or mycophenolate mofetil, to provide steroid-free immunosuppression regimens. Impaired wound healing and thrombocytopenia are possible side effects of sirolimus; therefore, some transplant centers prefer not to use it immediately after the transplant operation, but instead administer it only after a period of weeks or months. Its optimal role in immunosuppression has not yet been determined, and it remains the subject of a number of ongoing clinical trials. [ 12 ] In May 2015, the FDA approved sirolimus to treat lymphangioleiomyomatosis (LAM), a rare, progressive lung disease that primarily affects women of childbearing age. This made sirolimus the first drug approved to treat this disease. [ 21 ] LAM involves lung tissue infiltration with smooth muscle-like cells carrying mutations of the tuberous sclerosis complex gene (TSC2). Loss of TSC2 gene function activates the mTOR signaling pathway, resulting in the release of lymphangiogenic growth factors. Sirolimus blocks this pathway. [ 1 ] The safety and efficacy of sirolimus treatment of LAM were investigated in clinical trials that compared sirolimus treatment with a placebo group in 89 patients for 12 months. The patients were observed for 12 months after the treatment had ended. The most commonly reported side effects of sirolimus treatment of LAM were mouth and lip ulcers, diarrhea, abdominal pain, nausea, sore throat, acne, chest pain, leg swelling, upper respiratory tract infection, headache, dizziness, muscle pain and elevated cholesterol. Serious side effects including hypersensitivity and swelling (edema) have been observed in renal transplant patients. [ 21 ] When sirolimus was being considered for the treatment of LAM, it received orphan drug designation because LAM is a rare condition. [ 21 ] The safety of LAM treatment with sirolimus in people younger than 18 years old has not been tested. [ 1 ] The antiproliferative effect of sirolimus has also been used in conjunction with coronary stents to prevent restenosis in coronary arteries following balloon angioplasty. The sirolimus is formulated in a polymer coating that affords controlled release through the healing period following coronary intervention. Several large clinical studies have demonstrated lower restenosis rates in patients treated with sirolimus-eluting stents when compared to bare-metal stents, resulting in fewer repeat procedures. However, this kind of stent may also increase the risk of vascular thrombosis. [ 22 ] Sirolimus is used to treat vascular malformations. Treatment with sirolimus can decrease pain and the fullness of vascular malformations, improve coagulation levels, and slow the growth of abnormal lymphatic vessels. [ 23 ] Sirolimus is a relatively new medical therapy for the treatment of vascular malformations. [ 24 ] In recent years, sirolimus has emerged as a new medical treatment option for both vascular tumors and vascular malformations: as an inhibitor of the mammalian target of rapamycin (mTOR), a kinase that integrates signals from the PI3K/AKT pathway to coordinate proper cell growth and proliferation, it acts as an antiproliferative agent. Hence, sirolimus is well suited to "proliferative" vascular tumors, controlling the tissue overgrowth caused by inappropriate activation of the PI3K/AKT/mTOR pathway. [ 25 ] [ 26 ] Sirolimus has been used as a topical treatment of angiofibromas associated with tuberous sclerosis complex (TSC).
Facial angiofibromas occur in 80% of patients with TSC, and the condition is very disfiguring. A retrospective review of English-language medical publications reporting on topical sirolimus treatment of facial angiofibromas found sixteen separate studies with positive patient outcomes after using the drug. The reports involved a total of 84 patients, and improvement was observed in 94% of subjects, especially if treatment began during the early stages of the disease. Sirolimus treatment was applied in several different formulations (ointment, gel, solution, and cream), ranging from 0.003 to 1% concentrations. Reported adverse effects included one case of perioral dermatitis, one case of cephalea, and four cases of irritation. [ 27 ] In April 2022, sirolimus was approved by the FDA for treating angiofibromas. [ 28 ] [ 29 ] The most common adverse reactions (≥30% occurrence, leading to a 5% treatment discontinuation rate) observed with sirolimus in clinical studies of organ rejection prophylaxis in individuals with kidney transplants include: peripheral edema , hypercholesterolemia , abdominal pain, headache, nausea, diarrhea, pain, constipation, hypertriglyceridemia , hypertension , increased creatinine , fever, urinary tract infection , anemia , arthralgia , and thrombocytopenia . [ 1 ] The most common adverse reactions (≥20% occurrence, leading to an 11% treatment discontinuation rate) observed with sirolimus in clinical studies for the treatment of lymphangioleiomyomatosis are: peripheral edema, hypercholesterolemia, abdominal pain, headache, nausea, diarrhea, chest pain, stomatitis , nasopharyngitis , acne, upper respiratory tract infection , dizziness, and myalgia . [ 1 ] The following adverse effects occurred in 3–20% of individuals taking sirolimus for organ rejection prophylaxis following a kidney transplant: [ 1 ] While sirolimus inhibition of mTORC1 appears to mediate the drug's benefits, it also inhibits mTORC2 , which results in diabetes-like symptoms. [ 30 ] This includes decreased glucose tolerance and insensitivity to insulin. [ 30 ] Sirolimus treatment may additionally increase the risk of type 2 diabetes. [ 31 ] In mouse studies, these symptoms can be avoided through the use of alternate dosing regimens or analogs such as everolimus or temsirolimus . [ 32 ] Lung toxicity is a serious complication associated with sirolimus therapy, [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ excessive citations ] especially in the case of lung transplants. [ 40 ] The mechanism of the interstitial pneumonitis caused by sirolimus and other macrolide MTOR inhibitors is unclear, and may have nothing to do with the mTOR pathway. [ 41 ] [ 42 ] [ 43 ] The interstitial pneumonitis is not dose-dependent, but is more common in patients with underlying lung disease. [ 33 ] [ 44 ] There have been warnings about the use of sirolimus in transplants, where it may increase mortality due to an increased risk of infections. [ 1 ] Sirolimus may increase an individual's risk for contracting skin cancers from exposure to sunlight or UV radiation, and risk of developing lymphoma . [ 1 ] In studies, the skin cancer risk under sirolimus was lower than under other immunosuppressants such as azathioprine and calcineurin inhibitors , and lower than under placebo . [ 1 ] [ 45 ] Individuals taking sirolimus are at increased risk of experiencing impaired or delayed wound healing, particularly if they have a body mass index more than 30 kg/m 2 (classified as obese). 
[ 1 ] Sirolimus is metabolized by the CYP3A4 enzyme and is a substrate of the P-glycoprotein (P-gp) efflux pump; hence, inhibitors of either protein may increase sirolimus concentrations in blood plasma, whereas inducers of CYP3A4 and P-gp may decrease sirolimus concentrations in blood plasma. [ 1 ] Unlike the similarly named tacrolimus, sirolimus is not a calcineurin inhibitor, but it has a similar suppressive effect on the immune system. Sirolimus inhibits IL-2 and other cytokine receptor-dependent signal transduction mechanisms, via action on mTOR, and thereby blocks activation of T and B cells. Ciclosporin and tacrolimus inhibit the secretion of IL-2, by inhibiting calcineurin. [ 12 ] The mode of action of sirolimus is to bind the cytosolic protein FK-binding protein 12 (FKBP12), like tacrolimus. Unlike the tacrolimus-FKBP12 complex, which inhibits calcineurin (PP2B), the sirolimus-FKBP12 complex inhibits the mTOR (mammalian Target Of Rapamycin, rapamycin being another name for sirolimus) pathway by directly binding to mTOR Complex 1 (mTORC1). [ 12 ] mTOR has also been called FRAP (FKBP-rapamycin-associated protein), RAFT (rapamycin and FKBP target), RAPT1, or SEP. The earlier names FRAP and RAFT were coined to reflect the fact that sirolimus must bind FKBP12 first, and only the FKBP12-sirolimus complex can bind mTOR. However, mTOR is now the widely accepted name, since Tor was first discovered via genetic and molecular studies of sirolimus-resistant mutants of Saccharomyces cerevisiae that identified FKBP12, Tor1, and Tor2 as the targets of sirolimus and provided robust support that the FKBP12-sirolimus complex binds to and inhibits Tor1 and Tor2. [ 46 ] [ 12 ] Sirolimus is metabolized by the CYP3A4 enzyme and is a substrate of the P-glycoprotein (P-gp) efflux pump. [ 1 ] It has linear pharmacokinetics. [ 47 ] In studies on N=6 and N=36 subjects, peak concentration was obtained in 1.3 ± 0.5 hours, and the terminal elimination was slow, with a half-life of around 60 ± 10 hours. [ 48 ] [ 47 ] Sirolimus was not found to affect the concentration of ciclosporin, which is also metabolized primarily by the CYP3A4 enzyme. [ 47 ] The bioavailability of sirolimus is low, and the absorption of sirolimus into the blood stream from the intestine varies widely between patients, with some patients having up to eight times more exposure than others for the same dose. Drug levels are, therefore, taken to make sure patients get the right dose for their condition. [ 12 ] [ non-primary source needed ] This is determined by taking a blood sample before the next dose, which gives the trough level. However, good correlation is noted between trough concentration levels and drug exposure, known as the area under the concentration-time curve, for both sirolimus (SRL) and tacrolimus (TAC) (SRL: r2 = 0.83; TAC: r2 = 0.82), so only one level need be taken to know the pharmacokinetic (PK) profile. PK profiles of SRL and of TAC are unaltered by simultaneous administration. Dose-corrected drug exposure of TAC correlates with SRL (r2 = 0.8), so patients have similar bioavailability of both. [ 49 ] [ non-primary source needed ] Sirolimus is a natural product and macrocyclic lactone. [ 9 ] The biosynthesis of the rapamycin core is accomplished by a type I polyketide synthase (PKS) in conjunction with a nonribosomal peptide synthetase (NRPS).
The domains responsible for the biosynthesis of the linear polyketide of rapamycin are organized into three multienzymes, RapA, RapB, and RapC, which contain a total of 14 modules (figure 1). The three multienzymes are organized such that the first four modules of polyketide chain elongation are in RapA, the following six modules for continued elongation are in RapB, and the final four modules to complete the biosynthesis of the linear polyketide are in RapC. [ 50 ] Then, the linear polyketide is modified by the NRPS, RapP, which attaches L-pipecolate to the terminal end of the polyketide and then cyclizes the molecule, yielding the unbound product, prerapamycin. [ 51 ] The core macrocycle, prerapamycin (figure 2), is then modified (figure 3) by an additional five enzymes, which lead to the final product, rapamycin. First, the core macrocycle is modified by RapI, a SAM-dependent O-methyltransferase (MTase), which O-methylates at C39. Next, a carbonyl is installed at C9 by RapJ, a cytochrome P-450 monooxygenase (P-450). Then, RapM, another MTase, O-methylates at C16. Finally, RapN, another P-450, installs a hydroxyl at C27, immediately followed by O-methylation by RapQ, a distinct MTase, at C27 to yield rapamycin. [ 52 ] The biosynthetic genes responsible for rapamycin synthesis have been identified. As expected, three extremely large open reading frames (ORFs) designated as rapA, rapB, and rapC encode the three extremely large and complex multienzymes RapA, RapB, and RapC, respectively. [ 50 ] The gene rapL has been established to code for an NAD+-dependent lysine cycloamidase, which converts L-lysine to L-pipecolic acid (figure 4) for incorporation at the end of the polyketide. [ 53 ] [ 54 ] The gene rapP, which is embedded between the PKS genes and translationally coupled to rapC, encodes an additional enzyme, an NRPS responsible for incorporating L-pipecolic acid, chain termination and cyclization of prerapamycin. In addition, genes rapI, rapJ, rapM, rapN, rapO, and rapQ have been identified as coding for tailoring enzymes that modify the macrocyclic core to give rapamycin (figure 3). Finally, rapG and rapH have been identified as coding for enzymes that have a positive regulatory role in the production of rapamycin through the control of rapamycin PKS gene expression. [ 55 ] Biosynthesis of this 31-membered macrocycle begins as the loading domain is primed with the starter unit, 4,5-dihydroxycyclohex-1-ene-carboxylic acid, which is derived from the shikimate pathway. [ 50 ] Note that the cyclohexane ring of the starting unit is reduced during the transfer to module 1. [ citation needed ] The starting unit is then modified by a series of Claisen condensations with malonyl or methylmalonyl substrates, which are attached to an acyl carrier protein (ACP) and extend the polyketide by two carbons each. [ citation needed ] After each successive condensation, the growing polyketide is further modified according to the enzymatic domains that are present to reduce and dehydrate it, thereby introducing the diversity of functionalities observed in rapamycin (figure 1). [ citation needed ] Once the linear polyketide is complete, L-pipecolic acid, which is synthesized by a lysine cycloamidase from an L-lysine, is added to the terminal end of the polyketide by an NRPS. [ citation needed ] Then, the NRPS cyclizes the polyketide, giving prerapamycin, the first enzyme-free product.
[ citation needed ] The macrocyclic core is then customized by a series of post-PKS enzymes through methylations by MTases and oxidations by P-450s to yield rapamycin. [ citation needed ] In February 2023, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Hyftor, intended for the treatment of angiofibroma. [ 56 ] The applicant for this medicinal product is Plusultra pharma GmbH. [ 56 ] Hyftor was authorized for medical use in the European Union in May 2023. [ 7 ] Sirolimus, as Rapamune solution, was approved for medical use in the United States in 1999, [ 16 ] and as Rapamune tablets in August 2000. [ 57 ] Sirolimus, as Fyarro, was approved for medical use in the United States in November 2021. [ 58 ] [ 59 ] Sirolimus, as Hyftor, was approved for medical use in the United States in March 2022. [ 28 ] The antiproliferative effects of sirolimus may have a role in treating cancer. When dosed appropriately, sirolimus can enhance the immune response to tumor targeting [ 60 ] or otherwise promote tumor regression in clinical trials. [ 61 ] Sirolimus seems to lower the cancer risk in some transplant patients. [ 62 ] Sirolimus was shown to inhibit the progression of dermal Kaposi's sarcoma in patients with renal transplants. [ 63 ] Other mTOR inhibitors, such as temsirolimus (CCI-779) or everolimus (RAD001), are being tested for use in cancers such as glioblastoma multiforme and mantle cell lymphoma. However, these drugs have a higher rate of fatal adverse events in cancer patients than control drugs. [ 64 ] A combination therapy of doxorubicin and sirolimus has been shown to drive Akt-positive lymphomas into remission in mice. [ citation needed ] Akt signalling promotes cell survival in Akt-positive lymphomas and acts to prevent the cytotoxic effects of chemotherapy drugs, such as doxorubicin or cyclophosphamide. [ citation needed ] Sirolimus blocks Akt signalling and the cells lose their resistance to the chemotherapy. [ citation needed ] Bcl-2-positive lymphomas were completely resistant to the therapy; eIF4E-expressing lymphomas are not sensitive to sirolimus. [ 65 ] [ 66 ] [ 67 ] [ 68 ] [ 69 ] Sirolimus also shows promise in treating tuberous sclerosis complex (TSC), a congenital disorder that predisposes those afflicted to benign tumor growth in the brain, heart, kidneys, skin, and other organs. [ citation needed ] After several studies conclusively linked mTOR inhibitors to remission in TSC tumors, specifically subependymal giant-cell astrocytomas in children and angiomyolipomas in adults, many US doctors began prescribing sirolimus (Wyeth's Rapamune) and everolimus (Novartis's RAD001) to TSC patients off-label. [ citation needed ] Numerous clinical trials using both rapamycin analogs, involving both children and adults with TSC, are underway in the United States. [ 70 ] mTOR, specifically mTORC1, was first shown to be important in aging in 2003, in a study on worms; sirolimus was shown to inhibit and slow aging in worms, yeast, and flies, and then to improve the condition of mouse models of various diseases of aging. [ 71 ] [ 72 ] Sirolimus was first shown to extend lifespan in wild-type mice in a study published by NIH investigators in 2009; the studies have been replicated in mice of many different genetic backgrounds.
[ 72 ] A study published in 2020 found that late-life sirolimus dosing schedules enhanced mouse lifespan in a sex-specific manner: limited rapamycin exposure enhanced male but not female lifespan, providing evidence for sex differences in sirolimus response. [ 73 ] [ 74 ] The results are further supported by the finding that genetically modified mice with impaired mTORC1 signalling live longer. [ 72 ] Sirolimus has potential for widespread use as a longevity-promoting drug, with evidence pointing to its ability to prevent age-associated decline of cognitive and physical health. In 2014, researchers at Novartis showed that a related compound, everolimus, increased elderly patients' immune response when given on an intermittent dose. [ 75 ] This led to many in the anti-aging community self-experimenting with the compound. [ 76 ] However, because of the different biochemical properties of sirolimus, the dosing is potentially very different from that of everolimus. Ultimately, due to known side effects of sirolimus, as well as inadequate evidence for optimal dosing, it was concluded in 2016 that more research was required before sirolimus could be widely prescribed for this purpose. [ 72 ] [ 77 ] Two human studies on the effects of sirolimus (rapamycin) on longevity did not show statistically significant benefits. However, due to limitations in the studies, further research is needed to fully assess its potential in humans. [ 78 ] Sirolimus has complex effects on the immune system: while IL-12 goes up and IL-10 decreases, which suggests an immunostimulatory response, TNF and IL-6 are decreased, which suggests an immunosuppressive response. The duration of the inhibition and the exact extent to which mTORC1 and mTORC2 are inhibited play a role, but were not yet well understood according to a 2015 paper. [ 79 ] Researchers have shown that, when applied as a topical preparation, rapamycin can regenerate collagen and reverse clinical signs of aging in elderly patients. [ 80 ] The concentrations are far lower than those used to treat angiofibromas. [ citation needed ] Rapamycin has been proposed as a treatment for severe acute respiratory syndrome coronavirus 2 insofar as its immunosuppressive effects could prevent or reduce the cytokine storm seen in very serious cases of COVID-19. [ 81 ] Moreover, inhibition of cell proliferation by rapamycin could reduce viral replication. [ 81 ] Rapamycin can accelerate degradation of oxidized LDL cholesterol in endothelial cells, thereby lowering the risk of atherosclerosis. [ 82 ] Oxidized LDL cholesterol is a major contributor to atherosclerosis. [ 83 ] As of 2016, studies in cells, animals, and humans had suggested that mTOR activation is a process underlying systemic lupus erythematosus and that inhibiting mTOR with rapamycin may be a disease-modifying treatment. [ 84 ] As of 2016 rapamycin had been tested in small clinical trials in people with lupus. [ 84 ] Lymphatic malformation (LM), lymphangioma or cystic hygroma, is an abnormal growth of lymphatic vessels that usually affects children around the head and neck area and more rarely involves the tongue, causing macroglossia. LM is caused by a PIK3CA mutation during lymphangiogenesis early in gestation, causing the malformation of lymphatic tissue. Treatment often consists of removal of the affected tissue via excision, laser ablation or sclerotherapy, but the rate of recurrence can be high and surgery can have complications.
Sirolimus has shown evidence of being an effective treatment in alleviating symptoms and reducing the size of the malformation by way of altering the mTOR pathway in lymphangiogenesis. Although this is an off-label use of the drug, sirolimus has been shown to be an effective treatment for both microcystic and macrocystic LM. However, more research is needed to develop targeted, effective treatment therapies for LM. [ 85 ] Due to its immunosuppressant activity, rapamycin has been assessed as a prophylactic or treatment agent for graft-versus-host disease (GVHD), a complication of hematopoietic stem cell transplantation. While contrasting results were obtained in clinical trials, [ 86 ] pre-clinical studies have shown that rapamycin can mitigate GVHD by increasing the proliferation of regulatory T cells, inhibiting cytotoxic T cells and lowering the differentiation of effector T cells. [ 87 ] [ 88 ] Rapamycin is used in biology research as an agent for chemically induced dimerization. [ 89 ] In this application, rapamycin is added to cells expressing two fusion constructs, one of which contains the rapamycin-binding FRB domain from mTOR and the other of which contains an FKBP domain. Each fusion protein also contains additional domains that are brought into proximity when rapamycin induces binding of FRB and FKBP. In this way, rapamycin can be used to control and study protein localization and interactions. [ citation needed ] A number of veterinary medicine teaching hospitals are participating in a long-term clinical study examining the effect of rapamycin on the longevity of dogs. [ 90 ] A clinical trial led by NC State College of Veterinary Medicine (HALT), run at a number of veterinary hospitals across the US, found that rapamycin reverses the effects of hypertrophic cardiomyopathy in cats. [ 91 ] In March 2025, the US Food and Drug Administration announced conditional approval of sirolimus delayed-release tablets (Felycin-CA1) for the management of ventricular hypertrophy in cats with subclinical hypertrophic cardiomyopathy. [ 92 ] [ 93 ] This is the first product approved for use in cats with hypertrophic cardiomyopathy for any indication. [ 93 ] Cardiomyopathy is a disease of the heart muscle. [ 93 ] Hypertrophic cardiomyopathy in cats causes thickening of the heart's left ventricle. [ 93 ] It is the most common heart disease in cats and is one of the most common causes of death in cats. [ 93 ] While the cause is unknown in most cases, hypertrophic cardiomyopathy is associated with a genetic mutation in certain breeds, such as Maine Coons, Ragdolls, and Persians. [ 93 ] Hypertrophic cardiomyopathy is a progressive disease. [ 93 ] Cats in the subclinical phase have thickening of the heart wall but do not yet show clinical signs of the disease. [ 93 ] Some cats may live for years in the subclinical phase, while others may progress to congestive heart failure, arterial thromboembolism, or sudden death. [ 93 ]
https://en.wikipedia.org/wiki/Sirolimus
Sirtris Pharmaceuticals, Inc. was a biotechnology company based in Cambridge, MA that developed therapies for type 2 diabetes, cancer, and other diseases. Conceived in 2004 by Harvard University biologist David Sinclair and Andrew Perlman , [ 1 ] and founded that year by Sinclair and Perlman, along with Christoph Westphal , Richard Aldrich, Richard Pops, and Paul Schimmel, [ 2 ] the company was focused on developing Sinclair's research into activators of sirtuins , work that began in the laboratory of Leonard P. Guarente where Sinclair worked as a post-doc before starting his own lab. [ 1 ] The company was specifically focused on resveratrol formulations and derivatives as activators of the SIRT1 enzyme; Sinclair became known for making statements about resveratrol like: “(It's) as close to a miraculous molecule as you can find.... One hundred years from now, people will maybe be taking these molecules on a daily basis to prevent heart disease, stroke, and cancer.” [ 1 ] Most of the anti-aging field was more cautious, especially with regard to what else resveratrol might do in the body and its lack of bioavailability . [ 1 ] [ 3 ] The company's initial product was called SRT501, and was a formulation of resveratrol. [ 4 ] Sirtris went public in 2007 and was subsequently purchased and made a subsidiary of GlaxoSmithKline in 2008 for $720 million. [ 5 ] GSK paid $22.50/share, when Sirtris's stock was trading at $12/share, down 45% from its highest price of the previous year. [ 6 ] Studies published in 2009 and early 2010 by scientists from Amgen and Pfizer cast doubt on whether SIRT1 was directly activated by resveratrol and showed that the apparent activity was actually due to a fluorescent reagent used in the experiments. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] In August 2010, a nonprofit called the Healthy Lifespan Institute, which had been formed the year before by Westphal and Michelle Dipp , who joined GSK from Sirtris, began selling SRT501 as a dietary supplement online; [ 4 ] when this became public GSK required Westphal and Dipp, who were still GSK employees, to resign from the nonprofit. [ 13 ] [ 14 ] GSK/Sirtris terminated development of SRT501 in late 2010. [ 15 ] [ 16 ] GSK said it was terminating SRT501 due to side effects of nausea, vomiting, and diarrhea it caused, and because the compound's activity wasn't specific to SIRT1, at some doses it actually inhibited SIRT1, and the compound itself wasn't patentable. [ 15 ] [ 16 ] The company said at that time that it was focused on two compounds called SRT2104 and SRT2379 that were not resveratrol analogs, had better drug-like qualities, and were more selective SIRT1 activators. [ 15 ] [ 16 ] In 2013 GSK shut down Sirtris and its development candidates were absorbed into GSK, where research and development continued. [ 5 ] [ 17 ] [ 18 ] At that time, GSK/Sirtris' lead candidate was SRT2104, described as a "first-generation sirtuin-activating compound." [ 18 ] [ 19 ]
https://en.wikipedia.org/wiki/Sirtris_Pharmaceuticals
Sirtuin-activating compounds (STACs) are chemical compounds having an effect on sirtuins, a group of enzymes that use NAD+ to remove acetyl groups from proteins. They are caloric restriction mimetic compounds that may be helpful in treating various aging-related diseases. [ 1 ] Leonard P. Guarente is recognized as the leading proponent of the hypothesis that caloric restriction slows aging by activation of sirtuins. [ citation needed ] STACs were discovered by Konrad Howitz of Biomol Inc. and biologist David Sinclair. In September 2003, Howitz and Sinclair et al. published a highly cited paper reporting that polyphenols such as resveratrol [ 2 ] activate human SIRT1 and extend the lifespan of budding yeast (Howitz et al., Nature, 2003). Other examples of such compounds are butein, piceatannol, isoliquiritigenin, fisetin, and quercetin. [ citation needed ] Sirtuins depend on the crucial cellular molecule nicotinamide adenine dinucleotide (NAD+) for their function. Falling NAD+ levels during aging may adversely impact sirtuin maintenance of DNA integrity and the ability to combat oxidative stress-induced cell damage. Increasing cellular NAD+ levels with supplements like nicotinamide mononucleotide (NMN) during aging may slow or reverse certain aging processes by enhancing sirtuin function. [ 3 ] Some STACs can cause artificial effects in the assay initially used for their identification, but it has been shown that STACs also activate SIRT1 against regular polypeptide substrates, with an influence of the substrate sequence. [ 4 ] [ 5 ] Sirtris Pharmaceuticals, Sinclair's company, was purchased by GlaxoSmithKline (GSK) in 2008, and subsequently shut down as a separate entity within GSK. [ citation needed ]
https://en.wikipedia.org/wiki/Sirtuin-activating_compound
In mathematics, the Sister Beiter conjecture is a conjecture about the size of coefficients of ternary cyclotomic polynomials (i.e. those where the index is the product of three prime numbers). It is named after Marion Beiter, a Catholic nun who first proposed it in 1968. [ 1 ] For $n \in \mathbb{N}_{>0}$ the maximal coefficient (in absolute value) of the cyclotomic polynomial $\Phi_n(x)$ is denoted by $A(n)$. Let $3 \leq p \leq q \leq r$ be three prime numbers. In this case the cyclotomic polynomial $\Phi_{pqr}(x)$ is called ternary. In 1895, A. S. Bang [ 2 ] proved that $A(pqr) \leq p-1$. This implies the existence of $M(p) := \max_{p \leq q \leq r \text{ prime}} A(pqr)$ such that $1 \leq M(p) \leq p-1$. Sister Beiter conjectured [ 1 ] in 1968 that $M(p) \leq \frac{p+1}{2}$. This was later disproved, but a corrected Sister Beiter conjecture was put forward as $M(p) \leq \frac{2}{3}p$. A preprint [ 3 ] from 2023 explains the history in detail and claims to prove this corrected conjecture. Explicitly, it claims to prove $M(p) \leq \frac{2}{3}p$ and $\lim_{p \to \infty} \frac{M(p)}{p} = \frac{2}{3}$.
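As a concrete illustration of the quantities involved, the following sketch computes $A(pqr)$ for the smallest ternary case $p=3$, $q=5$, $r=7$ and checks it against Bang's bound $p-1=2$; it assumes the sympy library is available, which is an implementation choice not mentioned in the article.

# Largest absolute coefficient of the ternary cyclotomic polynomial Phi_{105}(x), with 105 = 3*5*7
from sympy import cyclotomic_poly, Poly
from sympy.abc import x

n = 3 * 5 * 7
A = max(abs(c) for c in Poly(cyclotomic_poly(n, x), x).all_coeffs())
print(A)   # prints 2, within Bang's bound A(pqr) <= p - 1 = 2 for p = 3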
https://en.wikipedia.org/wiki/Sister_Beiter_conjecture
In mathematics, Sister Celine's polynomials are a family of hypergeometric polynomials introduced by Mary Celine Fasenmyer in 1947. [ 1 ] They include Legendre polynomials , Jacobi polynomials and Bateman polynomials as special cases.
https://en.wikipedia.org/wiki/Sister_Celine's_polynomials
In phylogenetics, a sister group or sister taxon, also called an adelphotaxon, [ 1 ] comprises the closest relative(s) of another given unit in an evolutionary tree. [ 2 ] The expression is most easily illustrated by a cladogram in which taxon A and taxon B branch from one node, taxon C joins at the next node down, and more tree branches attach below that. Taxon A and taxon B are sister groups to each other. Taxa A and B, together with any other extant or extinct descendants of their most recent common ancestor (MRCA), [ Note 1 ] form a monophyletic group, the clade AB. Clade AB and taxon C are also sister groups. Taxa A, B, and C, together with all other descendants of their MRCA, form the clade ABC. The whole clade ABC is itself a subtree of a larger tree which offers yet more sister group relationships, both among the leaves and among larger, more deeply rooted clades. The tree structure shown connects through its root to the rest of the universal tree of life. By cladistic standards, taxa A, B, and C may represent specimens, species, genera, or any other taxonomic units. If A and B are at the same taxonomic level, terminology such as sister species or sister genera can be used. The term sister group is used in phylogenetic analysis; however, only groups identified in the analysis are labeled as "sister groups". An example is birds, whose commonly cited living sister group is the crocodiles, but that is true only when discussing extant organisms; [ 3 ] [ 4 ] when other, extinct groups are considered, the relationship between birds and crocodiles appears distant. Although the bird family tree is rooted in the dinosaurs, there were a number of other, earlier groups, such as the pterosaurs, that branched off the line leading to the dinosaurs after the last common ancestor of birds and crocodiles. [ 5 ] The term sister group must thus be seen as a relative term, with the caveat that the sister group is only the closest relative among the groups/species/specimens that are included in the analysis. [ 6 ]
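The relationships described above can be made concrete with a small sketch; the nested-tuple tree and the sister() helper below are hypothetical illustrations, not part of any standard phylogenetics package.

# Cladogram ((Taxon A, Taxon B), Taxon C): A and B are sister taxa, and the clade AB is sister to C
tree = (("Taxon A", "Taxon B"), "Taxon C")

def sister(node, subtree):
    # Walk the nested tuples; the sister of a node is the other child of its parent
    if isinstance(subtree, tuple):
        left, right = subtree
        if node == left:
            return right
        if node == right:
            return left
        return sister(node, left) or sister(node, right)
    return None

print(sister("Taxon A", tree))               # Taxon B
print(sister(("Taxon A", "Taxon B"), tree))  # Taxon C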
https://en.wikipedia.org/wiki/Sister_group
In ultra-low-temperature physics, Sisyphus cooling, the Sisyphus effect, or polarization gradient cooling involves the use of specially selected laser light, hitting atoms from various angles to both cool and trap them in a potential well, effectively rolling the atom down a hill of potential energy until it has lost its kinetic energy. It is a type of laser cooling of atoms used to reach temperatures below the Doppler cooling limit. This cooling method was first proposed by Claude Cohen-Tannoudji in 1989, [ 1 ] motivated by earlier experiments which observed sodium atoms cooled below the Doppler limit in an optical molasses. [ 2 ] Cohen-Tannoudji received part of the Nobel Prize in Physics in 1997 for his work. The technique is named after Sisyphus, a figure in Greek mythology who was doomed, for all eternity, to roll a stone up a mountain only to have it roll down again whenever he got it near the summit. Sisyphus cooling can be achieved by shining two counter-propagating laser beams with orthogonal polarization onto an atom sample. Atoms moving through the potential landscape along the direction of the standing wave lose kinetic energy as they move to a potential maximum, at which point optical pumping moves them back to a lower energy state, thus lowering the total energy of the atom. This description of Sisyphus cooling is largely based on Foot's description. [ 3 ] The counter-propagation of two orthogonally polarized lasers generates a standing wave in polarization with a gradient between $\sigma^-$ (left-hand circularly polarized light), linear, and $\sigma^+$ (right-hand circularly polarized light) along the standing wave. Note that this counter-propagation does not make a standing wave in intensity, but only in polarization. This gradient occurs over a length scale of $\lambda/2$, and then repeats, mirrored about the y-z plane. At positions where the counter-propagating beams have a phase difference of $\pi/2$, the polarization is circular, and where there is no phase difference, the polarization is linear. In the intermediate regions, there is a gradient of ellipticity of the superposed fields. Consider, for example, an atom with ground state angular momentum $J = \tfrac{1}{2}$ and excited state angular momentum $J' = \tfrac{3}{2}$. The $M_J$ sublevels for the ground state are $M_J = -\tfrac{1}{2}, +\tfrac{1}{2}$ and the $M_{J'}$ levels for the excited state are $M_{J'} = -\tfrac{3}{2}, -\tfrac{1}{2}, +\tfrac{1}{2}, +\tfrac{3}{2}$. In the field-free case, all of these energy levels for each J value are degenerate, but in the presence of a circularly polarized light field, the Autler-Townes effect (AC Stark shift or light shift) lifts this degeneracy. The extent and direction of this lifted degeneracy depend on the polarization of the light. It is this polarization dependence that is leveraged to apply a spatially dependent slowing force to the atom. In order to have a cooling effect, there must be some dissipation of energy. Selection rules for dipole transitions dictate that for this example $\Delta J = -1, +1$ and $\Delta M_J = 0, -1, +1$, with relative intensities given by the square of the Clebsch-Gordan coefficients.
Suppose we start with a single atom in the ground state, $J = \tfrac{1}{2}$, in the $M_J = +\tfrac{1}{2}$ state at $z = 0$ with velocity in the +z direction. The atom is now pumped to the $M_{J'} = -\tfrac{1}{2}$ excited state, where it spontaneously emits a photon and decays to the $M_J = -\tfrac{1}{2}$ ground state. The key concept is that in the presence of $\sigma^-$ light, the AC Stark shift lowers the $M_J = -\tfrac{1}{2}$ state further in energy than the $M_J = +\tfrac{1}{2}$ state. In going from the $M_J = +\tfrac{1}{2}$ to the $M_J = -\tfrac{1}{2}$ state, the atom has indeed lost $U_0$ in energy, where $U_0 = E_{M_J = +1/2} - E_{M_J = -1/2}$, approximately equal to the AC Stark shift $U_0 \simeq \frac{\hbar \Omega^2}{4\delta}$, where $\Omega$ is the Rabi frequency and $\delta$ is the detuning. At this point, the atom is moving in the +z direction with some velocity, and eventually moves into a region with $\sigma^+$ light. The atom, still in the $M_J = -\tfrac{1}{2}$ state that it was pumped into, now experiences the opposite AC Stark shift to the one it experienced in $\sigma^-$ light, and the $M_J = +\tfrac{1}{2}$ state is now lower in energy than the $M_J = -\tfrac{1}{2}$ state. The atom is pumped to the $M_{J'} = +\tfrac{1}{2}$ excited state, where it spontaneously emits a photon and decays to the $M_J = +\tfrac{1}{2}$ state. As before, this energy level has been lowered by the AC Stark shift, and the atom loses another $U_0$ of energy. Repeated cycles of this nature convert kinetic energy to potential energy, and this potential energy is lost via the photon emitted during optical pumping. The fundamental lower limit of Sisyphus cooling is the recoil temperature, $T_r$, set by the energy of the photon emitted in the decay from the J' to J state. This limit is $k_B T_r = \frac{h^2}{M \lambda^2}$, though practically the limit is a few times this value because of the extreme sensitivity to external magnetic fields in this cooling scheme. Atoms typically reach temperatures on the order of $\mu\mathrm{K}$, as compared to the Doppler limit $T_D \simeq 250\,\mu\mathrm{K}$.
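For a rough sense of the scale set by the recoil limit, the sketch below evaluates $k_B T_r = h^2/(M\lambda^2)$ for rubidium-87 cooled on its 780 nm line; the choice of atom and wavelength is an assumed example for illustration, not something stated above.

# Recoil temperature from k_B * T_r = h^2 / (M * lambda^2)
h = 6.62607015e-34     # Planck constant, J s
kB = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66053907e-27   # atomic mass unit, kg
M = 86.909 * amu       # mass of rubidium-87 (assumed example atom)
lam = 780e-9           # cooling wavelength in metres (assumed D2 line)
Tr = h**2 / (M * lam**2 * kB)
print(Tr)              # ~3.6e-7 K, a few hundred nanokelvin, well below the ~250 microkelvin Doppler limit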
https://en.wikipedia.org/wiki/Sisyphus_cooling
SitNGo Wizard (sometimes referred to as SNG Wizard) is a poker tool software program designed to aid online poker players in determining their optimal betting actions during the late stages of Sit and go poker contests. [ 1 ] [ 2 ] The software enables players to load their hand histories so that they can get computerized feedback regarding their choices. [ 3 ] It accepts both manual and downloaded entry of tournament situations for analysis. [ 4 ] The software is not intended to be used in-game, and many of its features become inoperable while an online poker client software program is active. [ 3 ] For example, PokerStars lists it among its prohibited third party tools. [ 5 ] The software is based on the Independent Chip Model (ICM) and is in the class of software described as Automated Independent Chip Model (AICM). [ 6 ] The program also uses a Future Game Simulation model. [ 7 ] In addition to the user's hole cards and table position, the software uses the number of opponents, stack sizes, and opponent calling ranges to calculate the optimal action. [ 8 ] The software is capable of producing graphs to show the implications of varying an opponent's hand range. [ 7 ] The game view feature contains a summary of the analysis and recommendation. [ 7 ] The software includes a quiz mode that enables users to practice making push or fold decisions. [ 3 ] The quiz mode serves as a type of poker flash card generator by creating random games for users to practice decision making. [ 8 ] The quiz feature is customizable with parameters for difficulty level, number of players, table position, and several other considerations. [ 6 ] The software is a substitute for having a poker coach in the sense that it tells the user what to do before and after games and reviews the user's performance to highlight mistakes. It is designed to help a user become better at making the right in-game decisions, which should improve the user's ability to compete in the current landscape of players who use software to improve their play, and thus improve the user's profitability. [ 3 ] [ 4 ] Some of the technical features are considered likely to be off-putting to some users. [ 3 ] Pokersoftware.com's review considered this to be the most powerful poker AICM. [ 6 ] The program is subject to some of the pitfalls of the ICM method, but it has the Future Game Simulation (FGS) feature that attempts to compensate for the shortcomings of the ICM method when stacks are short. [ 6 ] The key advantage of the program is as an objective instructor of counterintuitive optimal play, which, if learned, will give a user an advantage over his/her untrained opponents. [ 4 ] The graphics are not considered impressive. [ 6 ] There are additional cosmetic issues, such as its interface and navigation options, that weigh against the program's functionality. [ 4 ]
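To make the Independent Chip Model concrete, here is a minimal sketch of the textbook ICM calculation that AICM tools are built around; the stack sizes and prize structure are made-up illustration values, and this is not SitNGo Wizard's actual implementation.

# Independent Chip Model: the chance of finishing first is proportional to stack size,
# and lower places are handled by recursing over each possible winner.
def icm_equity(stacks, payouts):
    equity = [0.0] * len(stacks)

    def distribute(remaining, place, prob):
        total = sum(stacks[i] for i in remaining)
        for i in remaining:
            p = prob * stacks[i] / total      # probability that player i takes this place
            equity[i] += p * payouts[place]
            if place + 1 < len(payouts):
                distribute(tuple(j for j in remaining if j != i), place + 1, p)

    distribute(tuple(range(len(stacks))), 0, 1.0)
    return equity

# Three players with stacks 5000 / 3000 / 2000 and a 50 / 30 / 20 prize pool
print(icm_equity([5000, 3000, 2000], [50, 30, 20]))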
https://en.wikipedia.org/wiki/SitNGo_Wizard
Sit Kim Ping is a Singaporean biochemist and an Emeritus Professor at the Department of Biochemistry at the National University of Singapore . [ 1 ] She was the Head of the Department of Biochemistry (part of the Yong Loo Lin School of Medicine ) from 1996 to 2000. [ 2 ] Sit was born in 1941 and attended Tanjong Katong Girls' School . [ 3 ] She studied science at the National University of Singapore and obtained first-class honours when she graduated top of her class. [ 3 ] She obtained her PhD in biochemistry from McGill University . [ 4 ] Sit was instrumental in the development of the New Life Science Undergraduate Curriculum, and was awarded the Emeritus Professorship in 2008. [ 5 ] Sit studied detoxification , namely the process of conjugation by which metabolic by-products are made soluble prior to excretion. [ 6 ] She also studied metabolism within cancer cells and found aerobic respiration within mitochondria in cancer cells, which contradicts the Warburg hypothesis . [ 7 ] Sit is married to a clinician and has two children. [ 3 ]
https://en.wikipedia.org/wiki/Sit_Kim_Ping
Site-directed mutagenesis is a molecular biology method that is used to make specific and intentional changes to the DNA sequence of a gene and any gene products . Also called site-specific mutagenesis or oligonucleotide-directed mutagenesis , it is used for investigating the structure and biological activity of DNA , RNA , and protein molecules, and for protein engineering . Site-directed mutagenesis is one of the most important laboratory techniques for creating DNA libraries by introducing mutations into DNA sequences. There are numerous methods for achieving site-directed mutagenesis, but with decreasing costs of oligonucleotide synthesis , artificial gene synthesis is now occasionally used as an alternative to site-directed mutagenesis. Since 2013, the development of the CRISPR /Cas9 technology, based on a prokaryotic viral defense system, has also allowed for the editing of the genome , and mutagenesis may be performed in vivo with relative ease. [ 1 ] Early attempts at mutagenesis using radiation or chemical mutagens were non-site-specific, generating random mutations. [ 2 ] Analogs of nucleotides and other chemicals were later used to generate localized point mutations ; [ 3 ] examples of such chemicals are aminopurine , [ 4 ] nitrosoguanidine , [ 5 ] and bisulfite . [ 6 ] Site-directed mutagenesis was achieved in 1974 in the laboratory of Charles Weissmann using the nucleotide analogue N4-hydroxycytidine, which induces the transition of GC to AT. [ 7 ] [ 8 ] These methods of mutagenesis, however, are limited by the kind of mutation they can achieve, and they are not as specific as later site-directed mutagenesis methods. In 1971, Clyde Hutchison and Marshall Edgell showed that it is possible to produce mutants with small fragments of phage ΦX174 and restriction nucleases . [ 9 ] [ 10 ] In 1978, Hutchison and his collaborator Michael Smith developed a more flexible approach to site-directed mutagenesis, using oligonucleotides in a primer extension method with DNA polymerase. [ 11 ] For his part in the development of this process, Michael Smith later shared the Nobel Prize in Chemistry in October 1993 with Kary B. Mullis , who invented the polymerase chain reaction . The basic procedure requires the synthesis of a short DNA primer. This synthetic primer contains the desired mutation and is complementary to the template DNA around the mutation site, so it can hybridize with the DNA in the gene of interest. The mutation may be a single base change (a point mutation ), multiple base changes, a deletion , or an insertion . The single-stranded primer is then extended using a DNA polymerase , which copies the rest of the gene. The gene thus copied contains the mutated site, and is then introduced into a host cell in a vector and cloned . Finally, mutants are selected by DNA sequencing to check that they contain the desired mutation. The original method using single-primer extension was inefficient due to a low yield of mutants: the resulting mixture contained both the original unmutated template and the mutant strand, producing a mixed population of mutant and non-mutant progeny. Furthermore, the template used is methylated while the mutant strand is unmethylated, and the mutants may be counter-selected due to the presence of a mismatch repair system that favors the methylated template DNA, resulting in fewer mutants. Many approaches have since been developed to improve the efficiency of mutagenesis. 
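The primer-design step of the basic procedure can be illustrated with a small, hypothetical sketch; real primer design also checks melting temperature, GC content and secondary structure, and the template sequence, positions and helper names below are invented for the example.

def reverse_complement(seq):
    # Complement each base, then reverse the strand.
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def design_mutagenic_primer(template, position, new_base, arm=15):
    """Return a forward primer carrying a single-base substitution at
    'position' (0-based), flanked by 'arm' bases of template homology on
    each side so the primer can still anneal around the mutation site."""
    mutated = template[:position] + new_base + template[position + 1:]
    forward = mutated[position - arm: position + arm + 1]
    return forward, reverse_complement(forward)

template = "ATGGCTAGCTTAGGCCATTCGATCGGATCCAAGCTTGCATGCCTGCAG"
fwd, rev = design_mutagenic_primer(template, 24, "A")
print(fwd, rev)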
A large number of methods are available to effect site-directed mutagenesis, [ 12 ] although most of them have rarely been used in laboratories since the early 2000s, as newer techniques allow for simpler and easier ways of introducing site-specific mutation into genes. In 1985, Thomas Kunkel introduced a technique that reduces the need to select for the mutants. [ 13 ] The DNA fragment to be mutated is inserted into a phagemid such as M13mp18/19 and is then transformed into an E. coli strain deficient in two enzymes, dUTPase ( dut ) and uracil deglycosidase ( udg ). Both enzymes are part of a DNA repair pathway that protects the bacterial chromosome from mutations by the spontaneous deamination of dCTP to dUTP. The dUTPase deficiency prevents the breakdown of dUTP, resulting in a high level of dUTP in the cell. The uracil deglycosidase deficiency prevents the removal of uracil from newly synthesized DNA. As the double-mutant E. coli replicates the phage DNA, its enzymatic machinery may, therefore, misincorporate dUTP instead of dTTP, resulting in single-strand DNA that contains some uracils (ssUDNA). The ssUDNA is extracted from the bacteriophage that is released into the medium, and then used as template for mutagenesis. An oligonucleotide containing the desired mutation is used for primer extension. The heteroduplex DNA, that forms, consists of one parental non-mutated strand containing dUTP and a mutated strand containing dTTP. The DNA is then transformed into an E. coli strain carrying the wildtype dut and udg genes. Here, the uracil-containing parental DNA strand is degraded, so that nearly all of the resulting DNA consists of the mutated strand. Unlike other methods, cassette mutagenesis need not involve primer extension using DNA polymerase. In this method, a fragment of DNA is synthesized, and then inserted into a plasmid. [ 14 ] It involves the cleavage by a restriction enzyme at a site in the plasmid and subsequent ligation of a pair of complementary oligonucleotides containing the mutation in the gene of interest to the plasmid. Usually, the restriction enzymes that cut at the plasmid and the oligonucleotide are the same, permitting sticky ends of the plasmid and insert to ligate to one another. This method can generate mutants at close to 100% efficiency, but is limited by the availability of suitable restriction sites flanking the site that is to be mutated. The limitation of restriction sites in cassette mutagenesis may be overcome using polymerase chain reaction with oligonucleotide " primers ", such that a larger fragment may be generated, covering two convenient restriction sites. The exponential amplification in PCR produces a fragment containing the desired mutation in sufficient quantity to be separated from the original, unmutated plasmid by gel electrophoresis , which may then be inserted in the original context using standard recombinant molecular biology techniques. There are many variations of the same technique. The simplest method places the mutation site toward one of the ends of the fragment whereby one of two oligonucleotides used for generating the fragment contains the mutation. This involves a single step of PCR, but still has the inherent problem of requiring a suitable restriction site near the mutation site unless a very long primer is used. 
Other variations, therefore, employ three or four oligonucleotides, two of which may be non-mutagenic oligonucleotides that cover two convenient restriction sites and generate a fragment that can be digested and ligated into a plasmid, whereas the mutagenic oligonucleotide may be complementary to a location within that fragment well away from any convenient restriction site. These methods require multiple steps of PCR so that the final fragment to be ligated can contain the desired mutation. The design process for generating a fragment with the desired mutation and relevant restriction sites can be cumbersome. Software tools like SDM-Assist [ 15 ] can simplify the process. For plasmid manipulations, other site-directed mutagenesis techniques have been supplanted largely by techniques that are highly efficient but relatively simple, easy to use, and commercially available as a kit. An example of these techniques is the "Quikchange" method, [ 16 ] wherein a pair of complementary mutagenic primers are used to amplify the entire plasmid in a thermocycling reaction using a high-fidelity non-strand-displacing DNA polymerase such as Pfu polymerase . The reaction generates a nicked , circular DNA. The template DNA must be eliminated by enzymatic digestion with a restriction enzyme such as Dpn I , which is specific for methylated DNA. All DNA produced from most Escherichia coli strains would be methylated; the template plasmid that is biosynthesized in E. coli will, therefore, be digested, while the mutated plasmid, which is generated in vitro and is therefore unmethylated, would be left undigested. Note that, in these double-strand plasmid mutagenesis methods, while the thermocycling reaction may be used, the DNA is not exponentially amplified if the two primers are designed such that they bind symmetrically to the same region around the mutagenesis site, as described in the original protocol. In this case the amplification is linear, and it is therefore inaccurate to describe the procedure as a PCR, since there is no chain reaction. However, if the primers are designed to bind in an offset manner such that mutagenesis site is close to the 5' end of both primers, the 3' region of the primers can bind also to the amplified products and thus exponential product formation is observed. The name "Quikchange" originates from the registered trademark "QuikChange mutagenesis" of Stratagene , now Agilent Technologies , for site directed mutagenesis kits. The method was developed by scientists working at Stratagene. [ 16 ] Note that Pfu polymerase can become strand-displacing at higher extension temperature (≥70 °C) which can result in the failure of the experiment, therefore the extension reaction should be performed at the recommended temperature of 68 °C. In some applications, this method has been observed to lead to insertion of multiple copies of primers. [ 17 ] A variation of this method, called SPRINP, prevents this artifact and has been used in different types of site directed mutagenesis. [ 17 ] Other techniques such as scanning mutagenesis of oligo-directed targets (SMOOT) can semi-randomly combine mutagenic oligonucleotides in plasmid mutagenesis. [ 18 ] This technique can create plasmid mutagenesis libraries ranging from single mutations to comprehensive codon mutagenesis across an entire gene. Since 2013, the development of CRISPR -Cas9 technology has allowed for the efficient introduction of various mutations into the genome of a wide variety of organisms. 
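For whole-plasmid (QuikChange-style) mutagenesis, the kit documentation is commonly quoted as recommending primers of roughly 25–45 bases with an estimated melting temperature of about 78 °C or more, using the rule of thumb Tm = 81.5 + 0.41(%GC) − 675/N − %mismatch. The sketch below implements only that rule of thumb; the formula is reproduced as commonly cited rather than verified against the current manual, and the primer sequence is invented for illustration.

def quikchange_tm(primer, mismatches=1):
    """Rule-of-thumb melting temperature for a whole-plasmid mutagenesis
    primer: Tm = 81.5 + 0.41*(%GC) - 675/N - %mismatch, where N is the
    primer length and %mismatch is the mismatched fraction times 100."""
    n = len(primer)
    gc_percent = 100.0 * sum(base in "GC" for base in primer) / n
    mismatch_percent = 100.0 * mismatches / n
    return 81.5 + 0.41 * gc_percent - 675.0 / n - mismatch_percent

primer = "GCTAGCATTGACCTGTACGAAAACCTGTATTTTCAGGGC"  # illustrative sequence
print(round(quikchange_tm(primer), 1))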
The method does not require a transposon insertion site, leaves no marker, and its efficiency and simplicity has made it the preferred method for genome editing . [ 21 ] [ 22 ] Site-directed mutagenesis is used to generate mutations that may produce a rationally designed protein that has improved or special properties (i.e.protein engineering). Investigative tools – specific mutations in DNA allow the function and properties of a DNA sequence or a protein to be investigated in a rational approach. Furthermore, single amino-acid changes by site-directed mutagenesis in proteins can help understand the importance of post-translational modifications. For instance changing a particular serine (phosphoacceptor) to an alanine (phospho-non-acceptor) in a substrate protein blocks the attachment of a phosphate group, thereby allows the phosphorylation to be investigated. This approach has been used to uncover the phosphorylation of the protein CBP by the kinase HIPK2 [ 23 ] Another comprehensive approach is site saturation mutagenesis where one codon or a set of codons may be substituted with all possible amino acids at the specific positions. [ 24 ] Commercial applications – Proteins may be engineered to produce mutant forms that are tailored for a specific application. For example, commonly used laundry detergents may contain subtilisin , whose wild-type form has a methionine that can be oxidized by bleach, significantly reducing the activity the protein in the process. [ 25 ] This methionine may be replaced by alanine or other residues, making it resistant to oxidation thereby keeping the protein active in the presence of bleach. [ 26 ] As the cost of DNA oligonucleotides synthesis falls, artificial synthesis of a complete gene is now a viable method for introducing mutation into gene. This method allows for extensive mutagenesis over multiples sites, including the complete redesign of the codon usage of gene to optimise it for a particular organism. [ 27 ]
https://en.wikipedia.org/wiki/Site-directed_mutagenesis
Site-directed spin labeling (SDSL) is a technique for investigating the structure and local dynamics of proteins using electron spin resonance . The theory of SDSL is based on the specific reaction of spin labels with amino acids . A spin label built into a protein structure can be detected by EPR spectroscopy. SDSL is also a useful tool in examinations of the protein folding process. [ 1 ] Site-directed spin labeling (SDSL) was pioneered in the laboratory of Dr. W.L. Hubbell . [ 2 ] [ 3 ] In SDSL, sites for attachment of spin labels are introduced into recombinantly expressed proteins by site-directed mutagenesis . The functional groups contained within the spin label determine its specificity. At neutral pH, protein thiol groups react specifically with the functional groups methanethiosulfonate, maleimide, and iodoacetamide, creating a covalent bond with the amino acid Cys . [ 4 ] Spin labels are unique molecular reporters in that they are paramagnetic (they contain an unpaired electron). Spin labels were first synthesized in the laboratory of H. M. McConnell in 1965. [ 5 ] Since then, a variety of nitroxide spin labels have enjoyed widespread use for the study of macromolecular structure and dynamics because of their stability and simple EPR signal. The nitroxyl radical (N-O) is usually incorporated into a heterocyclic ring (e.g. pyrrolidine ), and the unpaired electron is predominantly localized to the N-O bond. Once incorporated into the protein, a spin label's motions are dictated by its local environment. Because spin labels are exquisitely sensitive to motion, this environment has profound effects on the label's EPR spectrum. [ 4 ] [ 6 ] The assembly of multi-subunit membrane protein complexes has also been studied using spin labeling. The binding of the PsaC subunit to the PsaA and PsaB subunits of the photosynthetic reaction center, Photosystem I, has been analyzed in great detail using this technique. [ 7 ] The group of Dr. Ralf Langen (University of Southern California, Los Angeles) showed that SDSL with EPR can be used to understand the structure of amyloid fibrils and the structure of the membrane-bound Parkinson's disease protein alpha-synuclein. [ 8 ] A 2012 study generated a high-resolution structure of IAPP fibrils using a combination of SDSL, pulse EPR and computational biology. [ 9 ]
https://en.wikipedia.org/wiki/Site-directed_spin_labeling
Site-specific architecture (SSA) is architecture which is of its time and of its place. It is designed to respond to both its physical context and the metaphysical context within which it has been conceived and executed. The physical context includes its location , local materials, planning framework and building codes , whilst the metaphysical context includes the client's aspirations, community values, and the architect's ideas about the building type, client, location, building use, etc. The earliest examples of site-specific architecture are found in Spain , Italy and China , in ancient cave and cliff dwellings dating back to the Neolithic period. [ 1 ] The architecture of the Neolithic period, with buildings dedicated to religion or social practices, is thus the first example of site-specific architecture. Buildings of this time were made not merely as physical constructs but for the significance of the site on which they were created. These early examples of site-specific architecture used local materials that were available to humans at the time, such as clay , stone , tree trunks and mudbrick . [ 2 ] This use of natural elements allowed the structures to blend seamlessly into their environments. Following this period there was a move towards more ornamental architectural structures, as seen in the Roman and Byzantine eras. [ 3 ] For several centuries architecture was concerned mainly with decorative and cosmetic structures that stood out from their environments. [ 4 ] This is seen, for example, in the Renaissance period, whereby structures were dedicated to symmetry and proportion rather than organic lines and shapes. [ 5 ] More recent architectural styles eventually moved away from classical architecture and towards modernism . [ 6 ] This shift happened as a result of art movements such as the Bauhaus and De Stijl , which introduced the idea of function into architecture. [ 7 ] Modernist architecture can be seen in many movements such as expressionist , constructivist and art deco . [ 8 ] The American modernist period saw the re-emergence of site-specific architecture, with architects considering the forms of their structures and how they would blend into their surrounding environments. Contemporary interpretations of site-specific architecture are notably seen in the 1950s, when Frank Lloyd Wright coined the term organic architecture ; [ 9 ] this interpretation of site-specific architecture revolves around design that coexists with the pre-existing elements of a site. 21st-century ' contemporary architecture ' is no longer limited to the boundaries of previous centuries. There are now innovative materials and tools which assist architects in their designs, making it easier to create buildings in unprecedented forms. [ 10 ] Sustainability is at the forefront of contemporary architects' thinking due to the climate change emergency, and this move towards eco-conscious buildings has assisted the re-emergence of site-specific designs. [ 11 ] Site-specific architecture is the practice of creating a structure which blends cohesively with the space for which it is intended; in this style, buildings do not only exist in the physical but also inspire spiritual connection. This theory holds that when designing and creating a building, the use of the space and the area for which it is intended must be at the forefront of the designer's thinking. 
[ 12 ] The components that are important in this way of thinking include the location, local materials, environment and weather of the region, as well as the community values, experiences and aspirations of the client or intended users. This genre of architecture aims to integrate with its surroundings, on the understanding that all components of the design must support one another and grow with the environment rather than against it. [ 13 ] The American architect Frank Lloyd Wright , often dubbed 'the father of modernism', practiced architecture through the theory that "form follows nature". [ 14 ] Lloyd Wright took this notion a step further, asserting that "form and function are one". Site-specific architecture is primarily associated with the aim of promoting sustainable design solutions; because late modernist planning failed to respond to local characteristics, site-specific architecture has emerged as a crucial genre. Beyond cohesively blending buildings into their surroundings, site-specific architecture also involves the curation of space in relation to its purpose. This can entail religious and spiritual spaces: historically, examples can be seen in the original cave dwellings of areas of Malta, while more recent examples can be seen in the development of spiritual sanctuaries and retreats. [ 15 ] Frank Lloyd Wright (1867–1959) was an American architect famed for his revolutionary designs in the 20th century; eight of his designs, including Fallingwater , the Guggenheim Museum and Unity Temple , are listed as UNESCO World Heritage sites. Lloyd Wright was known largely for coining the term ' organic architecture ', which saw the cohesion of environment and buildings through texture, earthy tones and a sensitive attention to materials in architectural design. [ 16 ] Throughout his career Lloyd Wright published several articles and books expanding upon the philosophy of organic architecture and the importance of the relationship between a site, a building and time: "No house should ever be on a hill or on anything. It should be of the hill. Belonging to it. Hill and house should live together, each the happier for the other" [ 17 ] (Lloyd Wright, 1932, p. 168). Lloyd Wright's approach towards architecture was neither aesthetic nor stylistic but rather philosophical ; he designed in alignment with the principles of site-specific architecture to create spaces which blend seamlessly with their surroundings. [ 18 ] Network of Architecture is a collaborative architecture firm founded by Lukas Rungger and Stefan Rier; the firm works with a philosophy of design that centres on the natural landscape of a space. Its goal is not to build houses but to design stories; the network approaches each project with specific research and an "intense learning in process" in order to understand the traditional culture of an area as well as looking towards the ways of modern life. [ 19 ] Richard Buckminster Fuller was an American architect who worked across a variety of fields including architecture , design , geometry , science , engineering and cartography in order to create designs for "100% of humanity". [ 20 ] Fuller, who throughout his career perfected his design of the ' geodesic dome ', [ 21 ] believed in cultivating design solutions to create structures that moulded with the environment rather than against it. 
Fuller understood the complex relationship between society , technology and the environment and, through this understanding, created architecture intended to exist alongside both humankind and eco-systems . His approach to site-specific theories involved a specific study of the elements of nature and of how structures interacted with them. [ 22 ] He promoted responsible protection of the environment through his designs and theories. The Ħal Saflieni Hypogeum is an underground burial site that was discovered in 1902; the remains of the site date back to 4000 BC. The underground cemetery is located on a hill overlooking the Grand Harbour of Valletta in Malta . [ 23 ] The Ħal Saflieni Hypogeum is an early example of site-specific architecture in which the builders of the site considered both the pre-existing environmental structure of the area and the purpose of the space. The expanse of the space was carved entirely out of solid limestone ; the excavation of the space was achieved with only the rudimentary technology of the Stone Age people. [ 23 ] It is estimated that more than 6000 bodies were buried within the site, and some historians hypothesise that the burial ritual saw bodies left to decompose until the flesh fell off the bones. The bones were then collected and stacked within the hypogeum . Frank Lloyd Wright designed Fallingwater in 1935 for Edgar J. Kaufmann and his wife Liliane, and the site has been referred to as one of the best examples of American architecture. The house was commissioned as a summer house for the couple to escape from their lives in Pittsburgh . Lloyd Wright designed the home to complement the site, wanting to integrate the pre-existing waterfall into the home so that the Kaufmanns lived 'with' the waterfall as an 'integral' part of their lives. [ 24 ] Originally the Kaufmanns had been disappointed with Lloyd Wright's plans, as they had wanted a view of the waterfall from their home. [ 24 ] However, Lloyd Wright ignored the desires of his clients and followed his own vision. Fallingwater is described as a place that "effectively unites architecture and nature as one" (Laseau and Tice, 1992, p. 94). [ 25 ] The concrete and limestone exterior blends seamlessly into the surrounding environment, and this naturalistic aesthetic extends inside the house through its stone floors. Designed by the architectural team DECA, Aloni is a house that was built entirely for its site. Located on the Greek island of Antiparos , the house is moulded to the shape of the land. The design for Aloni responds specifically to the topography of the rural landscape as well as to the historical artefacts found on the island. Endless terraces have been built into the landscape of the Aegean islands in order to create flat surfaces that allow for agricultural production. [ 26 ] DECA responded to these stone walls by creating a house that blends with the earth. The underground home is built with natural materials to maintain a sense of serenity within the landscape. Through studying the site, DECA was able to minimise the limitations associated with building on the delicate terrain of the Cycladic landscape; [ 27 ] instead of creating houses, DECA transforms the landscape so that it can be inhabited. Kendrick Bangs Kellogg designed High Desert House on the edge of Joshua Tree National Park for the artists Jay and Bev Doolittle. The house is composed of 26 concrete columns that are sunk into the bedrock. 
The large natural boulders of the area are incorporated into the design, creating a monolithic aesthetic within the space. [ 28 ]
https://en.wikipedia.org/wiki/Site-specific_architecture
Site-specific recombinase technologies are genome engineering tools that depend on recombinase enzymes to replace targeted sections of DNA. In the late 1980s gene targeting in murine embryonic stem cells (ESCs) enabled the transmission of mutations into the mouse germ line, and emerged as a novel option to study the genetic basis of regulatory networks as they exist in the genome. Still, classical gene targeting proved to be limited in several ways as gene functions became irreversibly destroyed by the marker gene that had to be introduced for selecting recombinant ESCs. These early steps led to animals in which the mutation was present in all cells of the body from the beginning leading to complex phenotypes and/or early lethality. There was a clear need for methods to restrict these mutations to specific points in development and specific cell types. This dream became reality when groups in the USA were able to introduce bacteriophage and yeast-derived site-specific recombination (SSR-) systems into mammalian cells as well as into the mouse. [ 1 ] [ 2 ] [ 3 ] Common genetic engineering strategies require a permanent modification of the target genome. To this end great sophistication has to be invested in the design of routes applied for the delivery of transgenes. Although for biotechnological purposes random integration is still common, it may result in unpredictable gene expression due to variable transgene copy numbers, lack of control about integration sites and associated mutations. The molecular requirements in the stem cell field are much more stringent. Here, homologous recombination (HR) can, in principle, provide specificity to the integration process, but for eukaryotes it is compromised by an extremely low efficiency. Although meganucleases, zinc-finger- and transcription activator-like effector nucleases (ZFNs and TALENs) are actual tools supporting HR, it was the availability of site-specific recombinases (SSRs) which triggered the rational construction of cell lines with predictable properties. Nowadays both technologies, HR and SSR can be combined in highly efficient "tag-and-exchange technologies". [ 4 ] Many site-specific recombination systems have been identified to perform these DNA rearrangements for a variety of purposes, but nearly all of these belong to either of two families, tyrosine recombinases (YR) and serine recombinases (SR), depending on their mechanism . These two families can mediate up to three types of DNA rearrangements (integration, excision/resolution, and inversion) along different reaction routes based on their origin and architecture. [ 5 ] The founding member of the YR family is the lambda integrase , encoded by bacteriophage λ , enabling the integration of phage DNA into the bacterial genome. A common feature of this class is a conserved tyrosine nucleophile attacking the scissile DNA-phosphate to form a 3'-phosphotyrosine linkage. Early members of the SR family are closely related resolvase / DNA invertases from the bacterial transposons Tn3 and γδ, which rely on a catalytic serine responsible for attacking the scissile phosphate to form a 5'-phosphoserine linkage. These undisputed facts, however, were compromised by a good deal of confusion at the time other members entered the scene, for instance the YR recombinases Cre and Flp (capable of integration, excision/resolution as well as inversion), which were nevertheless welcomed as new members of the "integrase family". 
The converse examples are PhiC31 and related SRs, which were originally introduced as resolvase/invertases although, in the absence of auxiliary factors, integration is their only function. Nowadays the standard activity of each enzyme determines its classification reserving the general term "recombinase" for family members which, per se, comprise all three routes, INT, RES and INV: Our table extends the selection of the conventional SSR systems and groups these according to their performance. All of these enzymes recombine two target sites, which are either identical (subfamily A1) or distinct (phage-derived enzymes in A2, B1 and B2). [ 6 ] Whereas for A1 these sites have individual designations (" FRT " in case of Flp-recombinase, loxP for Cre-recombinase), the terms " att P" and " att B" (attachment sites on the phage and bacterial part, respectively) are valid in the other cases. In case of subfamily A1 we have to deal with short (usually 34 bp-) sites consisting of two (near-)identical 13 bp arms (arrows) flanking an 8 bp spacer (the crossover region, indicated by red line doublets). [ 7 ] Note that for Flp there is an alternative, 48 bp site available with three arms, each accommodating a Flp unit (a so-called "protomer"). att P- and att B-sites follow similar architectural rules, but here the arms show only partial identity (indicated by the broken lines) and differ in both cases. These features account for relevant differences: In order to streamline this chapter the following implementations will be focused on two recombinases (Flp and Cre) and just one integrase (PhiC31) since their spectrum covers the tools which, at present, are mostly used for directed genome modifications. This will be done in the framework of the following overview. The mode integration/resolution and inversion (INT/RES and INV) depend on the orientation of recombinase target sites (RTS), among these pairs of att P and att B. Section C indicates, in a streamlined fashion, the way recombinase-mediated cassette exchange (RMCE) can be reached by synchronous double-reciprocal crossovers (rather than integration, followed by resolution). [ 8 ] [ 9 ] Tyr-Recombinases are reversible, while the Ser-Integrase is unidirectional. Of note is the way reversible Flp (a Tyr recombinase) integration/resolution is modulated by 48 bp (in place of 34 bp minimal) FRT versions: the extra 13 bp arm serves as a Flp "landing path" contributing to the formation of the synaptic complex, both in the context of Flp-INT and Flp-RMCE functions (see the respective equilibrium situations). While it is barely possible to prevent the (entropy-driven) reversion of integration in section A for Cre and hard to achieve for Flp, RMCE can be completed if the donor plasmid is provided at an excess due to the bimolecular character of both the forward- and the reverse reaction. Posing both FRT sites in an inverse manner will lead to an equilibrium of both orientations for the insert (green arrow). In contrast to Flp, the Ser integrase PhiC31 (bottom representations) leads to unidirectional integration, at least in the absence of an recombinase-directionality (RDF-)factor. [ 10 ] Relative to Flp-RMCE, which requires two different ("heterospecific") FRT -spacer mutants, the reaction partner ( att B) of the first reacting att P site is hit arbitrarily, such that there is no control over the direction the donor cassette enters the target (cf. the alternative products). 
Also different from Flp-RMCE , several distinct RMCE targets cannot be mounted in parallel, owing to the lack of heterospecific (non-crossinteracting) att P/ att B combinations. Cre recombinase (Cre) is able to recombine specific sequences of DNA without the need for cofactors . The enzyme recognizes 34 base pair DNA sequences called loxP ("locus of crossover in phage P1"). Depending on the orientation of target sites with respect to one another, Cre will integrate/excise or invert DNA sequences. Upon the excision (called "resolution" in case of a circular substrate) of a particular DNA region, normal gene expression is considerably compromised or terminated. [ 11 ] Due to the pronounced resolution activity of Cre, one of its initial applications was the excision of lox P-flanked ("floxed") genes leading to cell-specific gene knockout of such a floxed gene after Cre becomes expressed in the tissue of interest. Current technologies incorporate methods, which allow for both the spatial and temporal control of Cre activity. A common method facilitating the spatial control of genetic alteration involves the selection of a tissue-specific promoter to drive Cre expression. Placement of Cre under control of such a promoter results in localized, tissue-specific expression. As an example, Leone et al. have placed the transcription unit under the control of the regulatory sequences of the myelin proteolipid protein (PLP) gene, leading to induced removal of targeted gene sequences in oligodendrocytes and Schwann cells . [ 12 ] The specific DNA fragment recognized by Cre remains intact in cells, which do not express the PLP gene; this in turn facilitates empirical observation of the localized effects of genome alterations in the myelin sheath that surround nerve fibers in the central nervous system (CNS) and the peripheral nervous system (PNS). [ 13 ] Selective Cre expression has been achieved in many other cell types and tissues as well. In order to control temporal activity of the excision reaction, forms of Cre which take advantage of various ligand binding domains have been developed. One successful strategy for inducing specific temporal Cre activity involves fusing the enzyme with a mutated ligand-binding domain for the human estrogen receptor (ERt). Upon the introduction of tamoxifen (an estrogen receptor antagonist ), the Cre-ERt construct is able to penetrate the nucleus and induce targeted mutation. ERt binds tamoxifen with greater affinity than endogenous estrogens , which allows Cre-ERt to remain cytoplasmic in animals untreated with tamoxifen. The temporal control of SSR activity by tamoxifen permits genetic changes to be induced later in embryogenesis and/or in adult tissues. [ 12 ] This allows researchers to bypass embryonic lethality while still investigating the function of targeted genes. Recent extensions of these general concepts led to generating the "Cre-zoo", i.e. collections of hundreds of mouse strains for which defined genes can be deleted by targeted Cre expression. [ 3 ] In its natural host (S. cerevisiae) the Flp/ FRT system enables replication of a "2μ plasmid" by the inversion of a segment that is flanked by two identical, but oppositely oriented FRT sites ("flippase" activity). This inversion changes the relative orientation of replication forks within the plasmid enabling "rolling circle"—amplification of the circular 2μ entity before the multimeric intermediates are resolved to release multiple monomeric products. 
Whereas 34 bp minimal FRT sites favor excision/resolution to a similar extent as the analogous lox P sites for Cre, the natural, more extended 48 bp FRT variants enable a higher degree of integration, while overcoming certain promiscuous interactions as described for phage enzymes like Cre [ 5 ] and PhiC31. [ 6 ] An additional advantage is the fact that simple rules can be applied to generate heterospecific FRT sites which undergo crossovers with equal partners but not with wild-type FRT s. These facts have enabled, since 1994, the development and continuous refinement of recombinase-mediated cassette exchange (RMCE) strategies permitting the clean exchange of a target cassette for an incoming donor cassette. [ 6 ] Based on the RMCE technology, a particular resource of pre-characterized ES strains that lends itself to further elaboration has evolved in the framework of the EUCOMM (European Conditional Mouse Mutagenesis) program, based on the now established Cre- and/or Flp-based "FlExing" (Flp-mediated excision/inversion) setups, [ 6 ] involving the excision and inversion activities. Initiated in 2005, this project focused first on saturation mutagenesis to enable complete functional annotation of the mouse genome (coordinated by the International Knockout Mouse Consortium, IKMC), with the ultimate goal of having all protein-coding genes mutated via gene trapping and gene targeting in murine ES cells. [ 14 ] These efforts mark the culmination of various "tag-and-exchange" strategies, which are dedicated to tagging a distinct genomic site such that the "tag" can serve as an address to introduce novel (or alter existing) genetic information. The tagging step per se may address certain classes of integration sites by exploiting the integration preferences of retroviruses or even site-specific integrases like PhiC31, both of which act in an essentially unidirectional fashion. The traditional, laborious "tag-and-exchange" procedures relied on two successive homologous recombination (HR) steps, the first one ("HR1") to introduce a tag consisting of a selection marker gene; "HR2" was then used to replace the marker by the gene of interest ("GOI"). In the first ("knock-out") reaction the gene was tagged with a selectable marker, typically by insertion of a hygtk (+/−) cassette providing G418 resistance. In the following "knock-in" step, the tagged genomic sequence was replaced by homologous genomic sequences carrying certain mutations. Cell clones could then be isolated by their resistance to ganciclovir due to loss of the HSV-tk gene, i.e. by "negative selection". This conventional two-step tag-and-exchange procedure [ 15 ] could be streamlined after the advent of RMCE, which could take over and add efficiency to the knock-in step. Without much doubt, Ser integrases are the current tools of choice for integrating transgenes into a restricted number of well-understood genomic acceptor sites that mostly (but not always) mimic the phage att P site in that they attract an att B-containing donor vector. At this time the most prominent member is PhiC31-INT, with proven potential in the context of human and mouse genomes. In contrast to the above Tyr recombinases, PhiC31-INT as such acts in a unidirectional manner, firmly locking in the donor vector at a genomically anchored target. An obvious advantage of this system is that it can rely on unmodified, native att P (acceptor) and att B (donor) sites. 
Additional benefits (together with certain complications) may arise from the fact that mouse and human genomes per se contain a limited number of endogenous targets (so-called " att P pseudosites"). Available information suggests that considerable DNA sequence requirements let the integrase recognize fewer sites than retroviral or even transposase-based integration systems, establishing it as a superior carrier vehicle for transport to, and insertion at, a number of well-established genomic sites, some of which have so-called "safe-harbor" properties. [ 10 ] Exploiting the fact of specific ( att P x att B) recombination routes, RMCE becomes possible without any requirement for synthetic, heterospecific att sites. This obvious advantage, however, comes at the expense of certain shortcomings, such as a lack of control over the kind or directionality of the entering (donor) cassette. [ 6 ] Further restrictions are imposed by the fact that irreversibility does not permit standard multiplexing RMCE setups, including "serial RMCE" reactions, i.e., repeated cassette exchanges at a given genomic locus . Annotation of the human and mouse genomes has led to the identification of >20 000 protein-coding genes and >3 000 noncoding RNA genes, which guide the development of the organism from fertilization through embryogenesis to adult life. Although dramatic progress has been made, the relevance of rare gene variants has remained a central topic of research. As one of the most important platforms for dealing with vertebrate gene functions on a large scale, genome-wide genetic resources of mutant murine ES cells have been established. To this end four international programs aimed at saturation mutagenesis of the mouse genome have been founded in Europe and North America (EUCOMM, KOMP, NorCOMM, and TIGM). Coordinated by the International Knockout Mouse Consortium (IKMC), these ES-cell repositories are available for exchange between international research units. Present resources comprise mutations in 11 539 unique genes, 4 414 of these conditional. [ 14 ] The relevant technologies have now reached a level permitting their extension to other mammalian species and to human stem cells, most prominently those with an iPS (induced pluripotent) status.
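The excision reaction described earlier for Cre acting on lox P-flanked ("floxed") segments can be mimicked at the sequence level with a toy sketch. The 34 bp loxP sequence used here is the published recognition site, but the flanking sequence, the placeholder insert and the function name are invented, and the model ignores site orientation, synapsis and all protein chemistry.

LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"  # 34 bp loxP recognition site

def cre_excise(sequence):
    """Toy model of Cre-mediated excision between two directly repeated
    loxP sites: the segment between them (plus one site) is released as a
    circle, and a single loxP site remains in the linear sequence."""
    first = sequence.find(LOXP)
    second = sequence.find(LOXP, first + len(LOXP))
    if first == -1 or second == -1:
        return sequence, None  # fewer than two sites: nothing to excise
    remaining = sequence[:first] + LOXP + sequence[second + len(LOXP):]
    excised_circle = sequence[first + len(LOXP):second] + LOXP
    return remaining, excised_circle

# A hypothetical floxed segment between two direct-repeat loxP sites.
construct = "AAAA" + LOXP + "ATGGAAGGCTTCTAA" + LOXP + "TTTT"
print(cre_excise(construct))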
https://en.wikipedia.org/wiki/Site-specific_recombinase_technology
Site-specific recombination , also known as conservative site-specific recombination , is a type of genetic recombination in which DNA strand exchange takes place between segments possessing at least a certain degree of sequence homology . [ 1 ] [ 2 ] [ 3 ] Enzymes known as site-specific recombinases (SSRs) perform rearrangements of DNA segments by recognizing and binding to short, specific DNA sequences (sites), at which they cleave the DNA backbone, exchange the two DNA helices involved, and rejoin the DNA strands. In some cases the presence of a recombinase enzyme and the recombination sites is sufficient for the reaction to proceed; in other systems a number of accessory proteins and/or accessory sites are required. Many different genome modification strategies , among these recombinase-mediated cassette exchange (RMCE), an advanced approach for the targeted introduction of transcription units into predetermined genomic loci, rely on SSRs. Site-specific recombination systems are highly specific, fast, and efficient, even when faced with complex eukaryotic genomes. [ 4 ] They are employed naturally in a variety of cellular processes, including bacterial genome replication , differentiation and pathogenesis , and movement of mobile genetic elements . [ 5 ] For the same reasons, they present a potential basis for the development of genetic engineering tools. [ 6 ] Recombination sites are typically between 30 and 200 nucleotides in length and consist of two motifs with a partial inverted-repeat symmetry, to which the recombinase binds, and which flank a central crossover sequence at which the recombination takes place. The pairs of sites between which the recombination occurs are usually identical, but there are exceptions (e.g. attP and attB of λ integrase ). [ 7 ] Based on amino acid sequence homologies and mechanistic relatedness, most site-specific recombinases are grouped into one of two families: the tyrosine (Tyr) recombinase family or serine (Ser) recombinase family. The names stem from the conserved nucleophilic amino acid residue present in each class of recombinase which is used to attack the DNA and which becomes covalently linked to it during strand exchange. The earliest identified members of the serine recombinase family were known as resolvases or DNA invertases , while the founding member of the tyrosine recombinases, lambda phage integrase (using attP/B recognition sites), differs from the now well-known enzymes such as Cre (from the P1 phage ) and FLP (from the yeast Saccharomyces cerevisiae ). Famous serine recombinases include enzymes such as gamma-delta resolvase (from the Tn 1000 transposon ), Tn3 resolvase (from the Tn3 transposon), and φ C31 integrase (from the φ C31 phage). [ 8 ] There are several classes of serine recombinases, consisting of the small serine recombinase, the ISXc5 resolvase, the serine transposase, and the large serine recombinase. [ 9 ] Although the individual members of the two recombinase families can perform reactions with the same practical outcomes, the families are unrelated to each other, having different protein structures and reaction mechanisms. Unlike tyrosine recombinases, serine recombinases are highly modular, as was first hinted by biochemical studies [ 10 ] and later shown by crystallographic structures. [ 11 ] [ 12 ] Knowledge of these protein structures could prove useful when attempting to re-engineer recombinase proteins as tools for genetic manipulation. 
Recombination between two DNA sites begins by the recognition and binding of these sites – one site on each of two separate double-stranded DNA molecules, or at least two distant segments of the same molecule – by the recombinase enzyme. This is followed by synapsis , i.e. bringing the sites together to form the synaptic complex. It is within this synaptic complex that the strand exchange takes place, as the DNA is cleaved and rejoined by controlled transesterification reactions. During strand exchange, each double-stranded DNA molecule is cut at a fixed point within the crossover region of the recognition site, releasing a deoxyribose hydroxyl group , while the recombinase enzyme forms a transient covalent bond to a DNA backbone phosphate . This phosphodiester bond between the hydroxyl group of the nucleophilic serine or tyrosine residue conserves the energy that was expended in cleaving the DNA. Energy stored in this bond is subsequently used for the rejoining of the DNA to the corresponding deoxyribose hydroxyl group on the other DNA molecule. The entire reaction therefore proceeds without the need for external energy-rich cofactors such as ATP . Although the basic chemical reaction is the same for both tyrosine and serine recombinases, there are some differences between them. [ 13 ] Tyrosine recombinases, such as Cre or FLP , cleave one DNA strand at a time at points that are staggered by 6–8bp, linking the 3' end of the strand to the hydroxyl group of the tyrosine nucleophile (Fig. 1). [ 14 ] Strand exchange then proceeds via a crossed strand intermediate analogous to the Holliday junction in which only one pair of strands has been exchanged. [ 15 ] [ 16 ] The mechanism and control of serine recombinases is much less well understood. This group of enzymes was only discovered in the mid-1990s and is still relatively small. The now classical members gamma-delta and Tn3 resolvase , but also new additions like φC31-, Bxb1-, and R4 integrases, cut all four DNA strands simultaneously at points that are staggered by 2 bp (Fig. 2). [ 17 ] During cleavage, a protein–DNA bond is formed via a transesterification reaction, in which a phosphodiester bond is replaced by a phosphoserine bond between a 5' phosphate at the cleavage site and the hydroxyl group of the conserved serine residue (S10 in resolvase). [ 18 ] [ 19 ] It is still not entirely clear how the strand exchange occurs after the DNA has been cleaved. However, it has been shown that the strands are exchanged while covalently linked to the protein, with a resulting net rotation of 180°. [ 20 ] [ 21 ] The most quoted (but not the only) model accounting for these facts is the "subunit rotation model" (Fig. 2). [ 13 ] [ 22 ] Independent of the model, DNA duplexes are situated outside of the protein complex, and large movement of the protein is needed to achieve the strand exchange. In this case the recombination sites are slightly asymmetric, which allows the enzyme to tell apart the left and right ends of the site. When generating products, left ends are always joined to the right ends of their partner sites, and vice versa. This causes different recombination hybrid sites to be reconstituted in the recombination products. Joining of left ends to left or right to right is avoided due to the asymmetric "overlap" sequence between the staggered points of top and bottom strand exchange, which is in stark contrast to the mechanism employed by tyrosine recombinases. 
[ 13 ] The reaction catalysed by Cre-recombinase, for instance, may lead to excision of the DNA segment flanked by the two sites (Fig. 3A), but may also lead to integration or inversion of the orientation of the flanked DNA segment (Fig. 3B). What the outcome of the reaction will be is dictated mainly by the relative locations and orientations of the sites that are to be recombined, but also by the innate specificity of the site-specific system in question. Excisions and inversions occur if the recombination takes place between two sites that are found on the same molecule (intramolecular recombination), and if the sites are in the same (direct repeat) or in an opposite orientation (inverted repeat), respectively. Insertions, on the other hand, take place if the recombination occurs on sites that are situated on two different DNA molecules (intermolecular recombination), provided that at least one of these molecules is circular. Most site-specific systems are highly specialised, catalysing only one of these different types of reaction, and have evolved to ignore the sites that are in the "wrong" orientation.
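The relationship between site arrangement and reaction outcome described above can be summarised in a few lines of illustrative code; the argument names and the simple rule set are a sketch of the general case, not of any particular recombinase system.

def recombination_outcome(same_molecule, same_orientation, one_partner_circular=False):
    """Typical product of site-specific recombination between two sites:
    intramolecular recombination between direct repeats gives excision,
    between inverted repeats gives inversion, and intermolecular
    recombination (with at least one circular partner) gives integration."""
    if same_molecule:
        return "excision" if same_orientation else "inversion"
    return "integration" if one_partner_circular else "no stable product expected"

print(recombination_outcome(True, True))          # excision
print(recombination_outcome(True, False))         # inversion
print(recombination_outcome(False, True, True))   # integration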
https://en.wikipedia.org/wiki/Site-specific_recombination
In the construction industry , site managers , often referred to as construction managers , site agents or building managers , are responsible for the day-to-day on site running of a construction project . Site managers are required to keep within the timescale and budget of a project, and manage any delays or problems encountered on-site during a construction project. Also involved in the role is the managing of quality control, health and safety checks and the inspection of work carried out. Many site managers will be involved before site activity takes place, and are responsible for managing communications between all parties involved in the on-site development of the project. Site managers are often required to deal with inquiries and communication with the public. Typically a site manager is employed by a construction company, contractor or civil engineering firm but they are often employed by local authorities to oversee the refurbishment of council owned properties. [ 1 ] Qualifying as a site manager in the UK can be done by several routes, with many site managers having progressed from project or contract management roles. The Chartered Institute of Building (CIOB) provides an educational and accreditation framework for structural engineers and other roles in the industry, [ 2 ] and a specific Graduate Diploma Programme for those occupying jobs in construction but without a construction related degree. [ 3 ] Site managers' remuneration depends on a number of factors including sector, level of experience and the size of the project. A 2010 salary survey of the construction and built environment industry [ citation needed ] showed the average annual salary of a site manager in the UK to be £36,981. Site managers in areas of growth in the construction industry such as the Middle East earn more, with the average earning across all sector and all levels of experience at £42,424. [ 4 ]
https://en.wikipedia.org/wiki/Site_manager
Site Reliability Engineering ( SRE ) is a discipline in the field of software engineering and IT infrastructure support that monitors and improves the availability and performance of deployed software systems and large software services (which are expected to deliver reliable response times across events such as new software deployments, hardware failures, and cybersecurity attacks). [ 1 ] There is typically a focus on automation and an infrastructure-as-code methodology. SRE uses elements of software engineering , IT infrastructure , web development , and operations [ 2 ] to assist with reliability. It is similar to DevOps , as both aim to improve the reliability and availability of deployed software systems. Site Reliability Engineering originated at Google with Benjamin Treynor Sloss, [ 3 ] [ 4 ] who founded Google's first SRE team in 2003. [ 5 ] The concept expanded within the software development industry, leading various companies to employ site reliability engineers . [ 6 ] By March 2016, Google had more than 1,000 site reliability engineers on staff. [ 7 ] Dedicated SRE teams are common at larger web development companies. In middle-sized and smaller companies, DevOps teams sometimes perform SRE as well. [ 6 ] Organizations that have adopted the concept include Airbnb , Dropbox , IBM , [ 8 ] LinkedIn , [ 9 ] Netflix , [ 7 ] and Wikimedia . [ 10 ] Site reliability engineers (SREs) are responsible for a combination of system availability , latency , performance , efficiency, change management , monitoring , emergency response , and capacity planning . [ 11 ] SREs often have backgrounds in software engineering , systems engineering , and/or system administration . [ 12 ] The focuses of SRE include automation , system design , and improvements to system resilience . [ 12 ] SRE is considered a specific implementation of DevOps, [ 13 ] focusing specifically on building reliable systems, whereas DevOps covers a broader scope of operations. [ 14 ] [ 15 ] [ 16 ] Despite having different focuses, some companies have rebranded their operations teams as SRE teams. [ 6 ] Common definitions of the practices include (but are not limited to): [ 2 ] [ 17 ] Common definitions of the principles include (but are not limited to): SRE teams collaborate with other departments within organizations to guide the implementation of the mentioned principles. Below is an overview of common practices: [ 19 ] "Kitchen Sink" refers to the expansive and often unbounded scope of services and workflows that SRE teams oversee. Unlike traditional roles with clearly defined boundaries, SREs are tasked with various responsibilities, including system performance optimization, incident management, and automation. This approach allows SREs to address multiple challenges, ensuring that systems run efficiently and evolve in response to changing demands and complexities. Infrastructure SRE teams focus on maintaining and improving the reliability of systems that support other teams' workflows. While they sometimes collaborate with platform engineering teams, their primary responsibility is ensuring uptime, performance, and efficiency. Platform teams, on the other hand, primarily develop the software and systems used across the organization. While reliability is a goal for both, platform teams prioritize creating and maintaining the tools and services used by internal stakeholders, whereas Infrastructure SRE teams are tasked with ensuring those systems run smoothly and meet reliability standards. 
SRE teams utilize a variety of tools with the aim of measuring, maintaining, and enhancing system reliability. These tools play a role in monitoring performance, identifying issues, and facilitating proactive maintenance. For instance, Nagios Core is commonly employed for system monitoring and alerting, while Prometheus is frequently used for collecting and querying metrics in cloud-native environments. SRE teams dedicated to specific products or applications are common in large organizations. [ 20 ] These teams are responsible for ensuring the reliability, scalability, and performance of key services. In larger companies, it is typical to have multiple SRE teams, each focusing on different products or applications, ensuring that each area receives specialized attention to meet performance and availability targets. In an embedded model, individual SREs or small SRE pairs are integrated within software engineering teams. These SREs collaborate with developers, applying core SRE principles, such as automation, monitoring, and incident response, directly to the software development lifecycle. This approach aims to enhance reliability, performance, and collaboration between SREs and developers. Consulting SRE teams specialize in advising organizations on the implementation of SRE principles and practices. Typically composed of seasoned SREs with experience across various implementations, these teams provide insights and guidance for specific organizational needs. When working directly with clients, these SREs are often referred to as " Customer Reliability Engineers ". In large organizations that have adopted SRE, a hybrid model is common [ citation needed ] . This model includes various implementations, such as multiple product/application SRE teams dedicated to addressing the specific reliability needs of different products. An Infrastructure SRE team may collaborate with a platform engineering group to achieve shared reliability goals for a unified platform that supports all products and applications. Since 2014, the USENIX organization has hosted the annual SREcon conference, bringing together site reliability engineers from various industries. This conference is a platform for professionals to share knowledge, explore effective practices, and discuss trends in site reliability engineering. [ 21 ]
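Reliability targets of the kind SRE teams are responsible for are commonly expressed as service level objectives (SLOs). The snippet below is a minimal, illustrative sketch of how measured availability and the remaining error budget might be computed from request counts; the request numbers and the 99.9% target are assumptions for the example, not figures from the text.

def availability(successful_requests, total_requests):
    """Fraction of requests served successfully over the measurement window."""
    return successful_requests / total_requests

def remaining_error_budget(successful_requests, total_requests, slo=0.999):
    """Error budget = failures allowed under the SLO minus observed failures,
    expressed as a number of requests (negative means the budget is spent)."""
    allowed_failures = (1.0 - slo) * total_requests
    observed_failures = total_requests - successful_requests
    return allowed_failures - observed_failures

total, ok = 1_000_000, 999_450  # illustrative monthly request counts
print(f"availability: {availability(ok, total):.4%}")
print(f"remaining error budget: {remaining_error_budget(ok, total):.0f} requests")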
https://en.wikipedia.org/wiki/Site_reliability_engineering
The Sitnikov problem is a restricted version of the three-body problem , named after Russian mathematician Kirill Alexandrovitch Sitnikov, that attempts to describe the movement of three celestial bodies due to their mutual gravitational attraction. A special case of the Sitnikov problem was first discovered by the American scientist William Duncan MacMillan in 1911, but the problem as it currently stands was not formulated until 1961 by Sitnikov. The system consists of two primary bodies with the same mass {\displaystyle \left(m_{1}=m_{2}={\tfrac {m}{2}}\right)} , which move in circular or elliptical Kepler orbits around their center of mass . The third body, which is substantially smaller than the primary bodies and whose mass can be set to zero {\displaystyle (m_{3}=0)} , moves under the influence of the primary bodies in a plane that is perpendicular to the orbital plane of the primary bodies (see Figure 1). The origin of the system is at the focus of the primary bodies. A combined mass of the primary bodies {\displaystyle m=1} , an orbital period of the bodies {\displaystyle 2\pi } , and a radius of the orbit of the bodies {\displaystyle a=1} are used for this system. In addition, the gravitational constant is 1. In such a system the third body moves in only one dimension – along the z-axis. In order to derive the equation of motion in the case of circular orbits for the primary bodies, use that the total energy {\displaystyle E} is {\displaystyle E={\tfrac {1}{2}}{\dot {z}}^{2}-{\frac {1}{r}}.} After differentiating with respect to time, the equation becomes {\displaystyle {\dot {z}}\,{\ddot {z}}+{\frac {\dot {r}}{r^{2}}}=0.} This, according to Figure 1, is also true: {\displaystyle r={\sqrt {\rho ^{2}+z^{2}}}.} Thus, the equation of motion is as follows: {\displaystyle {\ddot {z}}=-{\frac {z}{\left(\rho ^{2}+z^{2}\right)^{3/2}}},} which describes an integrable system since it has one degree of freedom. If, on the other hand, the primary bodies move in elliptical orbits, then the equation of motion is {\displaystyle {\ddot {z}}=-{\frac {z}{\left(\rho (t)^{2}+z^{2}\right)^{3/2}}},} where {\displaystyle \rho (t)=\rho (t+2\pi )} is the distance of either primary from their common center of mass. Now the system has one-and-a-half degrees of freedom and is known to be chaotic. Although it is nearly impossible in the real world to find or arrange three celestial bodies exactly as in the Sitnikov problem, the problem has nevertheless been widely and intensively studied for decades: although it is a simple case of the more general three-body problem, all the characteristics of a chaotic system can be found within it, making the Sitnikov problem ideal for general studies of effects in chaotic dynamical systems.
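The circular-case equation of motion is straightforward to integrate numerically. The sketch below is an illustration rather than a reference implementation: it assumes the normalisation in which each primary sits at a constant distance ρ = 1/2 from the barycentre (other treatments scale ρ differently), and it uses SciPy's general-purpose integrator rather than a symplectic scheme.

```python
# Sketch: numerical integration of the circular Sitnikov problem
#     z'' = -z / (rho^2 + z^2)^(3/2)
# rho is the (constant) distance of each primary from the barycentre.
# The value rho = 1/2 is an assumed normalisation; see the note above.

import numpy as np
from scipy.integrate import solve_ivp

RHO = 0.5

def rhs(t, state):
    z, v = state
    return [v, -z / (RHO**2 + z**2) ** 1.5]

# initial height and vertical velocity of the massless third body (arbitrary)
sol = solve_ivp(rhs, (0.0, 100.0), [0.6, 0.0], max_step=0.01, rtol=1e-9, atol=1e-12)

# The energy E = v^2/2 - 1/sqrt(rho^2 + z^2) should be conserved in the circular case,
# so its spread over the run is a quick sanity check on the integration.
z, v = sol.y
energy = 0.5 * v**2 - 1.0 / np.sqrt(RHO**2 + z**2)
print("energy drift:", energy.max() - energy.min())
```

In the elliptical case one would replace the constant RHO with a periodic function ρ(t) obtained from the primaries' Kepler orbit, which is exactly the change that makes the system chaotic.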
https://en.wikipedia.org/wiki/Sitnikov_problem
The Siwoloboff method is used to determine the boiling point of small samples of liquid chemicals. A sample in an ignition tube (also called a fusion tube) is attached to a thermometer with a rubber band, and immersed in a Thiele tube , water bath, or other suitable medium for heating. A sealed capillary, open end pointing down, is placed in the ignition tube. The apparatus is heated. Dissolved gases evolve from the sample first, and the air in the capillary tube expands. Once the sample starts to boil, heating is stopped, and the temperature starts to fall. The temperature at which the liquid sample is sucked into the sealed capillary is the boiling point of the sample. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Siwoloboff_method
Six degrees of freedom ( 6DOF ), or sometimes six degrees of movement , refers to the six mechanical degrees of freedom of movement of a rigid body in three-dimensional space . Specifically, the body is free to change position as forward/backward (surge), up/down (heave), left/right (sway) translation in three perpendicular axes , combined with changes in orientation through rotation about three perpendicular axes, often termed yaw (normal axis), pitch (transverse axis), and roll (longitudinal axis). Three degrees of freedom ( 3DOF ), a term often used in the context of virtual reality , typically refers to tracking of rotational motion only: pitch, yaw, and roll. [ 1 ] [ 2 ] Serial and parallel manipulator systems are generally designed to position an end-effector with six degrees of freedom , consisting of three in translation and three in orientation. This provides a direct relationship between actuator positions and the configuration of the manipulator defined by its forward and inverse kinematics . Robot arms are described by their degrees of freedom . This is a practical metric, in contrast to the abstract definition of degrees of freedom which measures the aggregate positioning capability of a system. [ 3 ] In 2007, Dean Kamen , inventor of the Segway , unveiled a prototype robotic arm [ 4 ] with 14 degrees of freedom for DARPA . Humanoid robots typically have 30 or more degrees of freedom, with six degrees of freedom per arm, five or six in each leg, and several more in torso and neck . [ 5 ] The term is important in mechanical systems , especially biomechanical systems , for analyzing and measuring properties of these types of systems that need to account for all six degrees of freedom. Measurement of the six degrees of freedom is accomplished today through both AC and DC magnetic or electromagnetic fields in sensors that transmit positional and angular data to a processing unit. The data is made relevant through software that integrates the data based on the needs and programming of the users. The six degrees of freedom of a mobile unit are divided in two motional classes as described below. Translational envelopes: Rotational envelopes: In terms of a headset , such as the kind used for virtual reality , rotational envelopes can also be thought of in the following terms: There are three types of operational envelope in the Six degrees of freedom. These types are Direct , Semi-direct (conditional) and Non-direct, all regardless of the time remaining for the execution of the maneuver, the energy remaining to execute the maneuver and finally, if the motion is commanded via a biological entity (e.g. human), a robotical entity (e.g. computer) or both. Transitional type also exists in some vehicles. For example, when the Space Shuttle operated in low Earth orbit , the craft was described as fully-direct-six because in the vacuum of space, its six degrees could be commanded via reaction wheels and RCS thrusters . However, when the Space Shuttle was descending through the Earth's atmosphere for its return, the fully-direct-six degrees were no longer applicable as it was gliding through the air using its wings and control surfaces . Six degrees of freedom also refers to movement in video game-play. First-person shooter (FPS) games generally provide five degrees of freedom: forwards/backwards, slide left/right, up/down (jump/crouch/lie), yaw (turn left/right), and pitch (look up/down). 
If the game allows leaning control, then some consider it a sixth DOF; however, this may not be completely accurate, as a lean is a limited partial rotation. The term 6DOF has sometimes been used to describe games which allow freedom of movement, but do not necessarily meet the full 6DOF criteria. For example, Dead Space 2 , and to a lesser extent, Homeworld and Zone Of The Enders allow freedom of movement. Some examples of true 6DOF games, which allow independent control of all three movement axes and all three rotational axes, include Elite Dangerous , Shattered Horizon , the Descent franchise, the Everspace franchise, Retrovirus , Miner Wars , Space Engineers , Forsaken and Overload (from the same creators of Descent ). The space MMO Vendetta Online also features 6 degrees of freedom. Motion tracking hardware devices such as TrackIR and software-based apps like Eyeware Beam are used for 6DOF head tracking. This device often finds its places in flight simulators and other vehicle simulators that require looking around the cockpit to locate enemies or simply avoiding accidents in-game. The acronym 3DOF , meaning movement in the three dimensions but not rotation, is sometimes encountered. The Razer Hydra , a motion controller for PC, tracks position and rotation of two wired nunchucks , providing six degrees of freedom on each hand. The SpaceOrb 360 is a 6DOF computer input device released in 1996 originally manufactured and sold by the SpaceTec IMC company (first bought by Labtec , which itself was later bought by Logitech ). They now offer the 3Dconnexion range of 6DOF controllers, primarily targeting the professional CAD industry. The controllers sold with HTC VIVE provide 6DOF information by the lighthouse, which adopts Time of Flight (TOF) technology to determine the position of controllers.
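A rigid-body pose with six degrees of freedom is conveniently represented as three translation components plus a rotation. The sketch below uses SciPy's rotation utilities with yaw, pitch, and roll angles; the specific numbers and the 'zyx' axis convention are assumptions made for illustration, since the article does not prescribe any particular convention.

```python
# Sketch: a 6DOF pose = 3 translational DOF (surge, sway, heave)
#                      + 3 rotational DOF (yaw, pitch, roll).
# Axis convention and numeric values are illustrative assumptions.

import numpy as np
from scipy.spatial.transform import Rotation

def pose_matrix(x, y, z, yaw, pitch, roll):
    """Build a 4x4 homogeneous transform from six degrees of freedom (angles in degrees)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("zyx", [yaw, pitch, roll], degrees=True).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

# Example: move forward 2 units, up 0.5 units, and yaw 30 degrees.
T = pose_matrix(2.0, 0.0, 0.5, 30.0, 0.0, 0.0)
point_in_body_frame = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous coordinates
print(T @ point_in_body_frame)                          # same point in the world frame
```

A 3DOF headset in the sense described above would track only the rotation part of such a transform, leaving the translation fixed.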
https://en.wikipedia.org/wiki/Six_degrees_of_freedom
In mathematics , specifically transcendental number theory , the six exponentials theorem is a result that, given the right conditions on the exponents, guarantees the transcendence of at least one of a set of six exponentials. e x 1 y 1 , e x 1 y 2 , e x 2 y 1 , e x 2 y 2 , e x 3 y 1 , e x 3 y 2 . {\displaystyle e^{x_{1}y_{1}},e^{x_{1}y_{2}},e^{x_{2}y_{1}},e^{x_{2}y_{2}},e^{x_{3}y_{1}},e^{x_{3}y_{2}}.} The theorem can be stated in terms of logarithms by introducing the set L of logarithms of algebraic numbers : The theorem then says that if λ i j {\displaystyle \lambda _{ij}} are elements of L for i = 1 , 2 , j = 1 , 2 , 3 {\displaystyle i=1,2,j=1,2,3} such that λ 11 , λ 12 , λ 13 {\displaystyle \lambda _{11},\lambda _{12},\lambda _{13}} are linearly independent over the rational numbers, and λ 11 and λ 21 are also linearly independent over the rational numbers, then the matrix has rank 2. A special case of the result where x 1 , x 2 , and x 3 are logarithms of positive integers , y 1 = 1, and y 2 is real , was first mentioned in a paper by Leonidas Alaoglu and Paul Erdős from 1944 in which they try to prove that the ratio of consecutive colossally abundant numbers is always prime . They claimed that Carl Ludwig Siegel knew of a proof of this special case, but it is not recorded. [ 1 ] Using the special case they manage to prove that the ratio of consecutive colossally abundant numbers is always either a prime or a semiprime . The theorem was first explicitly stated and proved in its complete form independently by Serge Lang [ 2 ] and Kanakanahalli Ramachandra [ 3 ] in the 1960s. A stronger, related result is the five exponentials theorem , [ 4 ] which is as follows. Let x 1 , x 2 and y 1 , y 2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let γ be a non-zero algebraic number. Then at least one of the following five numbers is transcendental: This theorem implies the six exponentials theorem and in turn is implied by the as yet unproven four exponentials conjecture, which says that in fact one of the first four numbers on this list must be transcendental. Another related result that implies both the six exponentials theorem and the five exponentials theorem is the sharp six exponentials theorem . [ 5 ] This theorem is as follows. Let x 1 , x 2 , and x 3 be complex numbers that are linearly independent over the rational numbers, and let y 1 and y 2 be a pair of complex numbers that are linearly independent over the rational numbers, and suppose that β ij are six algebraic numbers for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2 such that the following six numbers are algebraic: Then x i y j = β ij for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2. The six exponentials theorem then follows by setting β ij = 0 for every i and j , while the five exponentials theorem follows by setting x 3 = γ/ x 1 and using Baker's theorem to ensure that the x i are linearly independent. There is a sharp version of the five exponentials theorem as well, although it as yet unproven so is known as the sharp five exponentials conjecture . [ 6 ] This conjecture implies both the sharp six exponentials theorem and the five exponentials theorem, and is stated as follows. Let x 1 , x 2 and y 1 , y 2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let α, β 11 , β 12 , β 21 , β 22 , and γ be six algebraic numbers with γ ≠ 0 such that the following five numbers are algebraic: Then x i y j = β ij for 1 ≤ i , j ≤ 2 and γ x 2 = α x 1 . 
A consequence of this conjecture that isn't currently known would be the transcendence of e π² , by setting x 1 = y 1 = β 11 = 1, x 2 = y 2 = i π, and all the other values in the statement to be zero. A further strengthening of the theorems and conjectures in this area are the strong versions. The strong six exponentials theorem is a result proved by Damien Roy that implies the sharp six exponentials theorem. [ 7 ] This result concerns the vector space over the algebraic numbers generated by 1 and all logarithms of algebraic numbers, denoted here as L ∗ . So L ∗ is the set of all complex numbers of the form for some n ≥ 0, where all the β i and α i are algebraic and every branch of the logarithm is considered. The strong six exponentials theorem then says that if x 1 , x 2 , and x 3 are complex numbers that are linearly independent over the algebraic numbers, and if y 1 and y 2 are a pair of complex numbers that are also linearly independent over the algebraic numbers then at least one of the six numbers x i y j for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2 is not in L ∗ . This is stronger than the standard six exponentials theorem which says that one of these six numbers is not simply the logarithm of an algebraic number. There is also a strong five exponentials conjecture formulated by Michel Waldschmidt . [ 8 ] It would imply both the strong six exponentials theorem and the sharp five exponentials conjecture. This conjecture claims that if x 1 , x 2 and y 1 , y 2 are two pairs of complex numbers, with each pair being linearly independent over the algebraic numbers, then at least one of the following five numbers is not in L ∗ : All the above conjectures and theorems are consequences of the unproven extension of Baker's theorem , that logarithms of algebraic numbers that are linearly independent over the rational numbers are automatically algebraically independent too. The diagram on the right shows the logical implications between all these results. The exponential function e z uniformizes the exponential map of the multiplicative group G m . Therefore, we can reformulate the six exponential theorem more abstractly as follows: (In order to derive the classical statement, set u ( z ) = (e y 1 z ; e y 2 z ) and note that Q x 1 + Q x 2 + Q x 3 is a subset of L ). In this way, the statement of the six exponentials theorem can be generalized to an arbitrary commutative group variety G over the field of algebraic numbers. This generalized six exponential conjecture , however, seems out of scope at the current state of transcendental number theory . For the special but interesting cases G = G m × E and G = E × E′ , where E , E′ are elliptic curves over the field of algebraic numbers, results towards the generalized six exponential conjecture were proven by Aleksander Momot. [ 9 ] These results involve the exponential function e z and a Weierstrass function ℘ {\displaystyle \wp } resp. two Weierstrass functions ℘ , ℘ ′ {\displaystyle \wp ,\wp '} with algebraic invariants g 2 , g 3 , g 2 ′ , g 3 ′ {\displaystyle g_{2},g_{3},g_{2}',g_{3}'} , instead of the two exponential functions e y 1 z , e y 2 z {\displaystyle e^{y_{1}z},e^{y_{2}z}} in the classical statement. Let G = G m × E and suppose E is not isogenous to a curve over a real field and that u ( C ) is not an algebraic subgroup of G ( C ) . Then L is generated over Q either by two elements x 1 , x 2 , or three elements x 1 , x 2 , x 3 which are not all contained in a real line R c , where c is a non-zero complex number. 
A similar result is shown for G = E × E′ . [ 10 ]
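Two formulas referenced earlier in this article appear to have dropped out of the extracted text; the reconstructions below follow the standard statements and are offered as such rather than as the article's own wording. First, the matrix whose rank is asserted to be 2 in the logarithm formulation of the six exponentials theorem is the 2 × 3 matrix of the given logarithms:

```latex
M=\begin{pmatrix}
\lambda_{11} & \lambda_{12} & \lambda_{13}\\
\lambda_{21} & \lambda_{22} & \lambda_{23}
\end{pmatrix},
\qquad \lambda_{ij}\in L .
```

Second, an element of the vector space L ∗ used in the strong six exponentials theorem ("complex numbers of the form … for some n ≥ 0") has the form

```latex
\beta_{0}+\sum_{i=1}^{n}\beta_{i}\log\alpha_{i}
  =\beta_{0}+\beta_{1}\log\alpha_{1}+\cdots+\beta_{n}\log\alpha_{n},
```

where β 0 , …, β n and α 1 , …, α n are algebraic, α i ≠ 0, and each logarithm may be taken on any branch.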
https://en.wikipedia.org/wiki/Six_exponentials_theorem
The six-factor formula is used in nuclear engineering to determine the multiplication of a nuclear chain reaction in a non-infinite medium: {\displaystyle k=\eta \,f\,p\,\varepsilon \,P_{\mathrm {FNL} }\,P_{\mathrm {TNL} }.} The symbols are defined as: [ 2 ] η, the reproduction factor (average number of fission neutrons produced per neutron absorbed in the fuel); f, the thermal utilization factor (fraction of absorbed thermal neutrons that are absorbed in the fuel); p, the resonance escape probability (fraction of fission neutrons that reach thermal energies without being captured in a resonance); ε, the fast fission factor (ratio of the total number of fission neutrons to the number produced by thermal fissions); P FNL , the fast non-leakage probability; and P TNL , the thermal non-leakage probability. The multiplication factor, k , is defined as the ratio of the number of neutrons in one generation to the number in the immediately preceding generation (see nuclear chain reaction ).
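A quick numerical illustration of the formula: the factor values below are hypothetical (loosely representative of a thermal reactor, but not taken from any cited source), and the code simply multiplies the six factors to obtain the effective multiplication factor.

```python
# Sketch: effective multiplication factor from the six-factor formula
#     k_eff = eta * f * p * epsilon * P_FNL * P_TNL
# The factor values are hypothetical and for illustration only.

factors = {
    "eta":     2.02,  # reproduction factor
    "f":       0.71,  # thermal utilization factor
    "p":       0.87,  # resonance escape probability
    "epsilon": 1.03,  # fast fission factor
    "P_FNL":   0.97,  # fast non-leakage probability
    "P_TNL":   0.99,  # thermal non-leakage probability
}

k_eff = 1.0
for value in factors.values():
    k_eff *= value

# Dropping the two non-leakage probabilities gives the four-factor (infinite-medium) value.
k_inf = factors["eta"] * factors["f"] * factors["p"] * factors["epsilon"]
print(f"k_inf = {k_inf:.4f}, k_eff = {k_eff:.4f}")
print("supercritical" if k_eff > 1 else "critical" if k_eff == 1 else "subcritical")
```

The comparison of k_eff with 1 is what distinguishes subcritical, critical, and supercritical configurations in a finite medium.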
https://en.wikipedia.org/wiki/Six_factor_formula
A sequence of six consecutive nines occurs in the decimal representation of the number pi ( π ), starting at the 762nd decimal place. [ 1 ] [ 2 ] It has become famous because of the mathematical coincidence , and because of the idea that one could memorize the digits of π up to that point, and then suggest that π is rational . The earliest known mention of this idea occurs in Douglas Hofstadter 's 1985 book Metamagical Themas , where Hofstadter states [ 3 ] [ 4 ] I myself once learned 380 digits of π , when I was a crazy high-school kid. My never-attained ambition was to reach the spot, 762 digits out in the decimal expansion, where it goes "999999", so that I could recite it out loud, come to those six 9s, and then impishly say, "and so on!" This sequence of six nines is colloquially known as the " Feynman point ", after physicist Richard Feynman , who allegedly stated this same idea in a lecture. [ 5 ] However it is not clear when, or even if, Feynman ever made such a statement. It is not mentioned in his memoirs and unknown to his biographer James Gleick . [ 6 ] π is conjectured , but not known, to be a normal number . For a normal number sampled uniformly at random, the probability of a specific sequence of six digits occurring this early in the decimal representation is about 0.08%. [ 5 ] The early string of six 9s is also the first occurrence of four and five consecutive identical digits. The next sequence of six consecutive identical digits is again composed of 9s, starting at position 193,034. [ 5 ] The next distinct sequence of six consecutive identical digits after that starts with the digit 8 at position 222,299. [ 7 ] The positions of the first occurrence of a string of 1, 2, 3, 4, 5, 6, 7, 8, and 9 consecutive 9s in the decimal expansion are 5; 44; 762; 762; 762; 762; 1,722,776; 36,356,642; and 564,665,206, respectively (sequence A048940 in the OEIS ). [ 1 ] The first 1,001 digits of π (1,000 decimal places), showing consecutive runs of three or more digits including the consecutive six 9's underlined, are as follows: [ 8 ] 3.1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679 8214808651 3282306647 0938446095 5058223172 5359408128 48 111 74502 8410270193 852110 555 9 6446229489 5493038196 4428810975 6659334461 2847564823 3786783165 2712019091 4564856692 3460348610 4543266482 1339360726 0249141273 7245870066 0631558817 4881520920 9628292540 9171536436 7892590360 0113305305 4882046652 1384146951 9415116094 3305727036 5759591953 0921861173 8193261179 3105118548 0744623799 6274956735 1885752724 8912279381 8301194912 9833673362 4406566430 8602139494 6395224737 1907021798 6094370277 0539217176 2931767523 8467481846 7669405132 000 5681271 4526356082 7785771342 7577896091 7363717872 1468440901 2249534301 4654958537 1050792279 6892589235 4201995611 2129021960 8640344181 5981362977 4771309960 5187072113 4 999999 837 2978049951 0597317328 1609631859 5024459455 3469083026 4252230825 3344685035 2619311881 7101 000 313 7838752886 5875332083 8142061717 7669147303 5982534904 2875546873 1159562863 8823537875 9375195778 1857780532 1712268066 1300192787 66 111 95909 2164201989
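The quoted position is easy to verify computationally. The sketch below uses the mpmath arbitrary-precision library (an assumption of the example, not something referenced by the article) to generate roughly 1,000 decimal places of π and locate the first run of six consecutive nines.

```python
# Sketch: locate the first "999999" in the decimal expansion of pi.
# Uses mpmath for arbitrary-precision arithmetic (library choice is ours, not the article's).

from mpmath import mp

mp.dps = 1010                     # a little more precision than we need
digits = mp.nstr(mp.pi, 1005)     # string like "3.14159..."

i = digits.index("999999")        # index of the run within the string
decimal_place = i - 1             # '3' and '.' occupy indices 0 and 1, so place n sits at index n + 1
print(decimal_place)              # expected: 762
```

The same string can be searched for "99999" and "9999" to confirm that, as stated above, the runs of four and five nines also begin at decimal place 762.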
https://en.wikipedia.org/wiki/Six_nines_in_pi
In arithmetic and algebra the sixth power of a number n is the result of multiplying six instances of n together. So: Sixth powers can be formed by multiplying a number by its fifth power , multiplying the square of a number by its fourth power , by cubing a square, or by squaring a cube. The sequence of sixth powers of integers are: They include the significant decimal numbers 10 6 (a million ), 100 6 (a short-scale trillion and long-scale billion), 1000 6 (a quintillion and a long-scale trillion ) and so on. The sixth powers of integers can be characterized as the numbers that are simultaneously squares and cubes. [ 1 ] In this way, they are analogous to two other classes of figurate numbers : the square triangular numbers , which are simultaneously square and triangular, and the solutions to the cannonball problem , which are simultaneously square and square-pyramidal. Because of their connection to squares and cubes, sixth powers play an important role in the study of the Mordell curves , which are elliptic curves of the form When k {\displaystyle k} is divisible by a sixth power, this equation can be reduced by dividing by that power to give a simpler equation of the same form. A well-known result in number theory , proven by Rudolf Fueter and Louis J. Mordell , states that, when k {\displaystyle k} is an integer that is not divisible by a sixth power (other than the exceptional cases k = 1 {\displaystyle k=1} and k = − 432 {\displaystyle k=-432} ), this equation either has no rational solutions with both x {\displaystyle x} and y {\displaystyle y} nonzero or infinitely many of them. [ 2 ] In the archaic notation of Robert Recorde , the sixth power of a number was called the "zenzicube", meaning the square of a cube. Similarly, the notation for sixth powers used in 12th century Indian mathematics by Bhāskara II also called them either the square of a cube or the cube of a square. [ 3 ] There are numerous known examples of sixth powers that can be expressed as the sum of seven other sixth powers, but no examples are yet known of a sixth power expressible as the sum of just six sixth powers. [ 4 ] This makes it unique among the powers with exponent k = 1, 2, ... , 8, the others of which can each be expressed as the sum of k other k -th powers, and some of which (in violation of Euler's sum of powers conjecture ) can be expressed as a sum of even fewer k -th powers. In connection with Waring's problem , every sufficiently large integer can be represented as a sum of at most 24 sixth powers of integers. [ 5 ] There are infinitely many different nontrivial solutions to the Diophantine equation [ 6 ] It has not been proven whether the equation has a nontrivial solution, [ 7 ] but the Lander, Parkin, and Selfridge conjecture would imply that it does not.
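The characterisation of sixth powers as the numbers that are simultaneously perfect squares and perfect cubes can be checked directly for small values; the sketch below is a brute-force illustration of that fact rather than an efficient algorithm.

```python
# Sketch: sixth powers are exactly the numbers that are both perfect squares
# and perfect cubes (checked here by brute force up to a bound).

from math import isqrt

def is_kth_power(n: int, k: int) -> bool:
    r = round(n ** (1.0 / k))
    return any((r + d) ** k == n for d in (-1, 0, 1))   # guard against float rounding

limit = 10**6
squares_and_cubes = {n for n in range(1, limit + 1)
                     if isqrt(n) ** 2 == n and is_kth_power(n, 3)}
sixth_powers = {n ** 6 for n in range(1, 12) if n ** 6 <= limit}

print(sorted(squares_and_cubes))          # 1, 64, 729, 4096, 15625, 46656, ...
print(squares_and_cubes == sixth_powers)  # True
```

The same brute-force idea extends to the Mordell-curve remark above: dividing k by any sixth-power factor leaves the set of rational solutions essentially unchanged.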
https://en.wikipedia.org/wiki/Sixth_power
Size in general is the magnitude or dimensions of a thing. More specifically, geometrical size (or spatial size ) can refer to three geometrical measures : length , area , or volume . Length can be generalized to other linear dimensions (width, height , diameter , perimeter ). Size can also be measured in terms of mass , especially when assuming a density range. In mathematical terms, "size is a concept abstracted from the process of measuring by comparing a longer to a shorter". [ 1 ] Size is determined by the process of comparing or measuring objects, which results in the determination of the magnitude of a quantity, such as length or mass, relative to a unit of measurement . Such a magnitude is usually expressed as a numerical value of units on a previously established spatial scale , such as meters or inches . The sizes with which humans tend to be most familiar are body dimensions (measures of anthropometry ), which include measures such as human height and human body weight . These measures can, in the aggregate, allow the generation of commercially useful distributions of products that accommodate expected body sizes, [ 2 ] as with the creation of clothing sizes and shoe sizes , and with the standardization of door frame dimensions, ceiling heights, and bed sizes . The human experience of size can lead to a psychological tendency towards size bias, [ 3 ] wherein the relative importance or perceived complexity of organisms and other objects is judged based on their size relative to humans , and particularly whether this size makes them easy to observe without aid. Humans most frequently perceive the size of objects through visual cues . [ 4 ] One common means of perceiving size is to compare the size of a newly observed object with the size of a familiar object whose size is already known. Binocular vision gives humans the capacity for depth perception , which can be used to judge which of several objects is closer, and by how much, which allows for some estimation of the size of the more distant object relative to the closer object. This also allows for the estimation of the size of large objects based on comparison of closer and farther parts of the same object. The perception of size can be distorted by manipulating these cues, for example through the creation of forced perspective . Some measures of size may also be determined by sound . Visually impaired humans often use echolocation to determine features of their surroundings, such as the size of spaces and objects. However, even humans who lack this ability can tell if a space that they are unable to see is large or small from hearing sounds echo in the space. Size can also be determined by touch , which is a process of haptic perception . The sizes of objects that can not readily be measured merely by sensory input may be evaluated with other kinds of measuring instruments . For example, objects too small to be seen with the naked eye may be measured when viewed through a microscope , while objects too large to fit within the field of vision may be measured using a telescope , or through extrapolation from known reference points. However, even very advanced measuring devices may still present a limited field of view . Objects being described by their relative size are often described as being comparatively big and little, or large and small, although "big and little tend to carry affective and evaluative connotations, whereas large and small tend to refer only to the size of a thing". 
[ 5 ] A wide range of other terms exist to describe things by their relative size, with small things being described for example as tiny, miniature, or minuscule, and large things being described as, for example, huge, gigantic, or enormous. Objects are also typically described as tall or short specifically relative to their vertical height, and as long or short specifically relative to their length along other directions. People who have experienced excessive growth and height significantly above average are described as having gigantism . Outside of humans, deep-sea gigantism (or abyssal gigantism) is the tendency for species of deep-sea dwelling animals to be larger than their shallower-water relatives across a large taxonomic range, and island gigantism (or insular gigantism) is a biological phenomenon in which the size of an animal species isolated on an island increases dramatically in comparison to its mainland relatives. Although the size of an object may be reflected in its mass or its weight , each of these is a different concept. In scientific contexts, mass refers loosely to the amount of " matter " in an object (though "matter" may be difficult to define), whereas weight refers to the force experienced by an object due to gravity . [ 6 ] An object with a mass of 1.0 kilogram will weigh approximately 9.81 newtons ( newton is the unit of force, while kilogram is the unit of mass) on the surface of the Earth (its mass multiplied by the gravitational field strength ). Its weight will be less on Mars (where gravity is weaker), more on Saturn , and negligible in space when far from any significant source of gravity, but it will always have the same mass. Two objects of equal size, however, may have very different mass and weight, depending on the composition and density of the objects. By contrast, if two objects are known to have roughly the same composition, then some information about the size of one can be determined by measuring the size of the other, and determining the difference in weight between the two. For example, if two blocks of wood are equally dense, and it is known that one weighs ten kilograms and the other weighs twenty kilograms, and that the ten kilogram block has a volume of one cubic foot, then it can be deduced that the twenty kilogram block has a volume of two cubic feet. The concept of size is often applied to ideas that have no physical reality. In mathematics , magnitude is the size of a mathematical object , which is an abstract object with no concrete existence. Magnitude is a property by which the object can be compared as larger or smaller than other objects of the same kind. More formally, an object's magnitude is an ordering (or ranking) of the class of objects to which it belongs. There are various other mathematical concepts of size for sets, such as: In statistics ( hypothesis testing ), the "size" of the test refers to the rate of false positives , denoted by α. In astronomy , the magnitude of brightness or intensity of a star is measured on a logarithmic scale . Such a scale is also used to measure the intensity of an earthquake , and this intensity is often referred to as the "size" of the event. [ 7 ] In computing, file size is a measure of the size of a computer file , typically measured in bytes . The actual amount of disk space consumed by the file depends on the file system . 
The maximum file size a file system supports depends on the number of bits reserved to store size information and the total size of the file system in terms of its capacity to store bits of information. In physics , the Planck length , denoted ℓ P , is a unit of length , equal to 1.616 199 (97) × 10 −35 metres . It is a unit in the system of Planck units , developed by physicist Max Planck . The Planck length is defined in terms of three fundamental physical constants : the speed of light , the Planck constant , and the Newtonian constant of gravitation . In contrast, the largest observable thing is the observable universe . The comoving distance – the distance as would be measured at a specific time, including the present – between Earth and the edge of the observable universe is 46 billion light-years (14 × 10 ^ 9 pc), making the diameter of the observable universe about 91 billion light-years (28 × 10 ^ 9 pc). In poetry , fiction , and other literature , size is occasionally assigned to characteristics that do not have measurable dimensions, such as the metaphorical reference to the size of a person's heart as a shorthand for describing their typical degree of kindness or generosity . With respect to physical size, the concept of resizing is occasionally presented in fairy tales , fantasy , and science fiction , placing humans in a different context within their natural environment by depicting them as having physically been made exceptionally large or exceptionally small through some fantastic means. A famous example is associated with the fictional character, the Grinch , who was said in the story to have been born with a heart that they say was "two sizes too small", such that when he is later redeemed, his heart grows "three sizes that day", leading cardiologist David Kass to humorously suggest that the rapid growth of the Grinch's heart at the end of the story indicates that the Grinch has the physiology of a Burmese python . [ 8 ]
https://en.wikipedia.org/wiki/Size
Size-asymmetric competition refers to situations in which larger individuals exploit disproportionately greater amounts of resources when competing with smaller individuals. [ 1 ] This type of competition is common among plants [ 2 ] but also exists among animals . [ 3 ] Size-asymmetric competition usually results from large individuals monopolizing the resource by "pre-emption"—i.e., exploiting the resource before smaller individuals are able to obtain it. [ 1 ] Size-asymmetric competition has major effects on population structure and diversity within ecological communities . [ 4 ] [ 5 ] [ 6 ] [ 7 ] Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition ) to perfectly size-symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size-asymmetric (the largest individuals exploit all the available resource). The degree of size asymmetry can be described by the parameter θ in the following equation, which describes the partition of the resource r among n individuals of sizes B j : [ 1 ] [ 8 ] {\displaystyle r_{i}=r\,(B_{i})^{\theta }{\Big /}\sum _{j=1}^{n}(B_{j})^{\theta },} where r i refers to the amount of resource acquired by individual i and the sum runs over the n individuals in its neighbourhood. When θ = 1, competition is perfectly size-symmetric—e.g., if a large individual is twice the size of its smaller competitor, the large individual will acquire twice the amount of that resource (i.e. both individuals will exploit the same amount of resource per biomass unit). When θ > 1, competition is size-asymmetric—e.g., if a large individual is twice the size of its smaller competitor and θ = 2, the large individual will acquire four times the amount of that resource (i.e., the large individual will exploit twice the amount of resource per biomass unit). As θ increases, competition becomes more size-asymmetric, and larger plants get larger amounts of resources per unit of biomass compared with smaller plants. Competition among plants for light is size-asymmetric because of the directionality of its supply. [ 2 ] Higher leaves shade lower leaves but not vice versa. Competition for nutrients appears to be relatively size-symmetric, [ 9 ] although it has been hypothesized that a patchy distribution of nutrients in the soil may lead to size asymmetry in competition among roots. [ 1 ] [ 10 ] Nothing is known about the size asymmetry of competition for water . [ 1 ] Various ecological processes and patterns have been shown to be affected by the degree of size asymmetry—e.g., succession , [ 11 ] biomass distribution, [ 2 ] [ 12 ] grazing response, [ 7 ] population growth , [ 8 ] ecosystem functioning, [ 13 ] coexistence [ 14 ] and species richness . [ 4 ] [ 5 ] [ 6 ] [ 7 ] A large body of evidence shows that species loss following nutrient enrichment ( eutrophication ) is related to light competition. [ 5 ] [ 15 ] [ 16 ] However, there is still a debate whether this phenomenon is related to the size asymmetry of light competition [ 5 ] [ 6 ] or to other factors. [ 17 ] Contrasting assumptions about size asymmetry characterise the two leading and competing theories in plant ecology, [ 6 ] the R* theory and the CSR theory . The R* theory assumes that competition is size-symmetric and therefore predicts that competitive ability in nature results from the ability to withstand low levels of resources (known as the R* rule ).
[ 18 ] In contrast the CSR theory assumes that competition is size-asymmetric and therefore predicts that competitive ability in nature results from the ability to grow fast and attain a large size. [ 19 ] Size-asymmetric competition affects also several evolutionary processes in relation to trait selection. Evolution of plant height is highly affected by asymmetric light competition. [ 20 ] [ 21 ] Theory predicts that only under asymmetric light competition, plants will grow upward and invest in wood production at the expense of investment in leaves, or in reproductive organs ( flowers and fruits ). [ 20 ] [ 21 ] Consistent with this, there is evidence that plant height increases as water availability increases, [ 22 ] presumably due to increase in the relative importance of size-asymmetric competition for light. Similarly, investment in the size of seeds at the expense of their number may be more effective under size-asymmetric resource competition, since larger seeds tend to produce larger seedlings that are better competitors. [ 23 ] Size-asymmetric competition can be exploited in managing plant communities, such as the suppression of weed in crop fields. [ 23 ] Weeds are a greater problem for farmer in dry than in moist environments, in large part because crops can suppress weeds much more effectively under size-asymmetric competition for light than under more size-symmetric competition below ground.
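The partitioning equation given earlier in this article can be explored numerically. In the sketch below the biomasses and the total resource are arbitrary illustrative values; the code computes each individual's share for a perfectly size-symmetric case (θ = 1) and a size-asymmetric case (θ = 2), reproducing the per-unit-biomass comparison described in the text.

```python
# Sketch: resource partitioning r_i = r * B_i^theta / sum_j B_j^theta
# Biomasses and the total resource r are arbitrary illustrative values.

def shares(biomasses, r, theta):
    denom = sum(b ** theta for b in biomasses)
    return [r * b ** theta / denom for b in biomasses]

biomasses = [1.0, 2.0]   # a small and a large competitor (large = 2x small)
r = 90.0                 # total resource available in the neighbourhood

for theta in (1, 2):
    s = shares(biomasses, r, theta)
    per_unit = [si / bi for si, bi in zip(s, biomasses)]
    print(f"theta={theta}: shares={s}, per unit biomass={per_unit}")

# theta=1: shares [30, 60] -> equal uptake per unit biomass (size-symmetric)
# theta=2: shares [18, 72] -> the larger individual takes twice as much per unit biomass
```

Raising θ further pushes the partition toward the absolutely size-asymmetric limit in which the largest individual pre-empts essentially all of the resource.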
https://en.wikipedia.org/wiki/Size-asymmetric_competition
Size-exclusion chromatography , also known as molecular sieve chromatography , [ 1 ] is a chromatographic method in which molecules in solution are separated by their shape , and in some cases size . [ 2 ] It is usually applied to large molecules or macromolecular complexes such as proteins and industrial polymers . [ 3 ] Typically, when an aqueous solution is used to transport the sample through the column, the technique is known as gel filtration chromatography , versus the name gel permeation chromatography , which is used when an organic solvent is used as a mobile phase. The chromatography column is packed with fine, porous beads which are commonly composed of dextran , agarose , or polyacrylamide polymers. The pore sizes of these beads are used to estimate the dimensions of macromolecules . [ 1 ] SEC is a widely used polymer characterization method because of its ability to provide good molar mass distribution (Mw) results for polymers. Size-exclusion chromatography (SEC) is fundamentally different from all other chromatographic techniques in that separation is based on a simple procedure of classifying molecule sizes rather than any type of interaction. [ 4 ] The main application of size-exclusion chromatography is the fractionation of proteins and other water-soluble polymers, while gel permeation chromatography is used to analyze the molecular weight distribution of organic-soluble polymers. Either technique should not be confused with gel electrophoresis , where an electric field is used to "pull" molecules through the gel depending on their electrical charges. The amount of time a solute remains within a pore is dependent on the size of the pore. Larger solutes will have access to a smaller volume and vice versa. Therefore, a smaller solute will remain within the pore for a longer period of time compared to a larger solute. [ 5 ] Even though size exclusion chromatography is widely utilized to study natural organic material, there are limitations. One of these limitations include that there is no standard molecular weight marker; [ 6 ] thus, there is nothing to compare the results back to. If precise molecular weight is required, other methods should be used. The advantages of this method include good separation of large molecules from the small molecules with a minimal volume of eluate, [ 7 ] and that various solutions can be applied without interfering with the filtration process, all while preserving the biological activity of the particles to separate. The technique is generally combined with others that further separate molecules by other characteristics, such as acidity, basicity, charge, and affinity for certain compounds. With size exclusion chromatography, there are short and well-defined separation times and narrow bands, which lead to good sensitivity. There is also no sample loss because solutes do not interact with the stationary phase. The other advantage to this experimental method is that in certain cases, it is feasible to determine the approximate molecular weight of a compound. The shape and size of the compound (eluent) determine how the compound interacts with the gel (stationary phase). To determine approximate molecular weight, the elution volumes of compounds with their corresponding molecular weights are obtained and then a plot of “K av ” vs “log(Mw)” is made, where K a v = ( V e − V o ) / ( V t − V o ) {\displaystyle K_{av}=(V_{e}-V_{o})/(V_{t}-V_{o})} and Mw is the molecular mass. 
This plot acts as a calibration curve, which is used to approximate the desired compound's molecular weight. The V e component represents the volume at which the intermediate molecules elute such as molecules that have partial access to the beads of the column. In addition, V t is the sum of the total volume between the beads and the volume within the beads. The V o component represents the volume at which the larger molecules elute, which elute in the beginning. [ 8 ] [ 9 ] Disadvantages are, for example, that only a limited number of bands can be accommodated because the time scale of the chromatogram is short, and, in general, there must be a 10% difference in molecular mass to have a good resolution. [ 7 ] The technique was invented in 1955 by Grant Henry Lathe and Colin R Ruthven, working at Queen Charlotte's Hospital, London. [ 10 ] [ 11 ] They later received the John Scott Award for this invention. [ 12 ] While Lathe and Ruthven used starch gels as the matrix, Jerker Porath and Per Flodin later introduced dextran gels; [ 13 ] other gels with size fractionation properties include agarose and polyacrylamide. A short review of these developments has appeared. [ 14 ] There were also attempts to fractionate synthetic high polymers; however, it was not until 1964, when J. C. Moore of the Dow Chemical Company published his work on the preparation of gel permeation chromatography (GPC) columns based on cross-linked polystyrene with controlled pore size, [ 15 ] that a rapid increase of research activity in this field began. It was recognized almost immediately that with proper calibration, GPC was capable to provide molar mass and molar mass distribution information for synthetic polymers. Because the latter information was difficult to obtain by other methods, GPC came rapidly into extensive use. [ 16 ] SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC works by trapping smaller molecules in the pores of the adsorbent ("stationary phase"). This process is usually performed within a column, which typically consists of a hollow tube tightly packed with micron-scale polymer beads containing pores of different sizes. These pores may be depressions on the surface or channels through the bead. As the solution travels down the column some particles enter into the pores. Larger particles cannot enter into as many pores. The larger the particles, the faster the elution. The larger molecules simply pass by the pores because those molecules are too large to enter the pores. Larger molecules therefore flow through the column more quickly than smaller molecules, that is, the smaller the molecule, the longer the retention time. One requirement for SEC is that the analyte does not interact with the surface of the stationary phases, with differences in elution time between analytes ideally being based solely on the solute volume the analytes can enter, rather than chemical or electrostatic interactions with the stationary phases. Thus, a small molecule that can penetrate every region of the stationary phase pore system can enter a total volume equal to the sum of the entire pore volume and the interparticle volume. This small molecule elutes late (after the molecule has penetrated all of the pore- and interparticle volume—approximately 80% of the column volume). 
At the other extreme, a very large molecule that cannot penetrate any the smaller pores can enter only the interparticle volume (~35% of the column volume) and elutes earlier when this volume of mobile phase has passed through the column. The underlying principle of SEC is that particles of different sizes elute (filter) through a stationary phase at different rates. This results in the separation of a solution of particles based on size. Provided that all the particles are loaded simultaneously or near-simultaneously, particles of the same size should elute together. However, as there are various measures of the size of a macromolecule (for instance, the radius of gyration and the hydrodynamic radius), a fundamental problem in the theory of SEC has been the choice of a proper molecular size parameter by which molecules of different kinds are separated. Experimentally, Benoit and co-workers found an excellent correlation between elution volume and a dynamically based molecular size, the hydrodynamic volume , for several different chain architecture and chemical compositions. [ 17 ] The observed correlation based on the hydrodynamic volume became accepted as the basis of universal SEC calibration. Still, the use of the hydrodynamic volume, a size based on dynamical properties, in the interpretation of SEC data is not fully understood. [ 18 ] This is because SEC is typically run under low flow rate conditions where hydrodynamic factor should have little effect on the separation. In fact, both theory and computer simulations assume a thermodynamic separation principle: the separation process is determined by the equilibrium distribution (partitioning) of solute macromolecules between two phases: a dilute bulk solution phase located at the interstitial space and confined solution phases within the pores of column packing material. Based on this theory, it has been shown that the relevant size parameter to the partitioning of polymers in pores is the mean span dimension (mean maximal projection onto a line). [ 19 ] Although this issue has not been fully resolved, it is likely that the mean span dimension and the hydrodynamic volume are strongly correlated. Each size exclusion column has a range of molecular weights that can be separated. The exclusion limit defines the molecular weight at the upper end of the column 'working' range and is where molecules are too large to get trapped in the stationary phase. The lower end of the range is defined by the permeation limit, which defines the molecular weight of a molecule that is small enough to penetrate all pores of the stationary phase. All molecules below this molecular mass are so small that they elute as a single band. [ 7 ] The filtered solution that is collected at the end is known as the eluate . The void volume includes any particles too large to enter the medium, and the solvent volume is known as the column volume . Following are the materials which are commonly used for porous gel beads in size exclusion chromatography [ 20 ] And Trade name (kDa) In real-life situations, particles in solution do not have a fixed size, resulting in the probability that a particle that would otherwise be hampered by a pore passing right by it. Also, the stationary-phase particles are not ideally defined; both particles and pores may vary in size. Elution curves, therefore, resemble Gaussian distributions . 
The stationary phase may also interact in undesirable ways with a particle and influence retention times, though great care is taken by column manufacturers to use stationary phases that are inert and minimize this issue. Like other forms of chromatography, increasing the column length enhances resolution, and increasing the column diameter increases column capacity. Proper column packing is important for maximum resolution: An over-packed column can collapse the pores in the beads, resulting in a loss of resolution. An under-packed column can reduce the relative surface area of the stationary phase accessible to smaller species, resulting in those species spending less time trapped in pores. Unlike affinity chromatography techniques, a solvent head at the top of the column can drastically diminish resolution as the sample diffuses prior to loading, broadening the downstream elution. In simple manual columns, the eluent is collected in constant volumes, known as fractions. The more similar the particles are in size the more likely they are in the same fraction and not detected separately. More advanced columns overcome this problem by constantly monitoring the eluent. The collected fractions are often examined by spectroscopic techniques to determine the concentration of the particles eluted. Common spectroscopy detection techniques are refractive index (RI) and ultraviolet (UV). When eluting spectroscopically similar species (such as during biological purification), other techniques may be necessary to identify the contents of each fraction. It is also possible to analyze the eluent flow continuously with RI, LALLS , Multi-Angle Laser Light Scattering MALS, UV, and/or viscosity measurements. The elution volume (Ve) decreases roughly linear with the logarithm of the molecular hydrodynamic volume . Columns are often calibrated using 4-5 standard samples (e.g., folded proteins of known molecular weight), and a sample containing a very large molecule such as thyroglobulin to determine the void volume . (Blue dextran is not recommended for Vo determination because it is heterogeneous and may give variable results) The elution volumes of the standards are divided by the elution volume of the thyroglobulin (Ve/Vo) and plotted against the log of the standards' molecular weights. In general, SEC is considered a low-resolution chromatography as it does not discern similar species very well, and is therefore often reserved for the final step of a purification. The technique can determine the quaternary structure of purified proteins that have slow exchange times, since it can be carried out under native solution conditions, preserving macromolecular interactions. SEC can also assay protein tertiary structure , as it measures the hydrodynamic volume (not molecular weight), allowing folded and unfolded versions of the same protein to be distinguished. For example, the apparent hydrodynamic radius of a typical protein domain might be 14 Å and 36 Å for the folded and unfolded forms, respectively. SEC allows the separation of these two forms, as the folded form elutes much later due to its smaller size. SEC can be used as a measure of both the size and the polydispersity of a synthesized polymer , that is, the ability to find the distribution of the sizes of polymer molecules. If standards of a known size are run previously, then a calibration curve can be created to determine the sizes of polymer molecules of interest in the solvent chosen for analysis (often THF ). 
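The calibration procedure just described can be sketched in a few lines: fit log(Mw) of the standards against K av (computed from the elution volumes) and interpolate for an unknown. The standards, volumes, and the unknown's elution volume below are invented illustrative numbers, not data from any real column.

```python
# Sketch: SEC calibration curve using K_av = (Ve - Vo) / (Vt - Vo) and log10(Mw).
# All volumes and molecular weights are invented for illustration.

import numpy as np

Vo, Vt = 8.0, 24.0                                                    # void and total volume (mL), assumed
standards_Mw = np.array([670_000, 158_000, 44_000, 17_000, 1_350])    # Da, assumed standards
standards_Ve = np.array([9.5, 12.0, 14.8, 17.2, 21.5])                # elution volumes (mL), assumed

def k_av(Ve):
    return (Ve - Vo) / (Vt - Vo)

# Linear fit of log10(Mw) against K_av over the column's working range.
slope, intercept = np.polyfit(k_av(standards_Ve), np.log10(standards_Mw), 1)

unknown_Ve = 13.5                                    # elution volume of the unknown (mL)
estimated_Mw = 10 ** (slope * k_av(unknown_Ve) + intercept)
print(f"estimated Mw ~ {estimated_Mw:,.0f} Da")
```

As the surrounding text notes, such a curve gives only an approximate, conformation-dependent molecular weight; absolute methods such as SEC-MALS avoid the calibration step entirely.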
In alternative fashion, techniques such as light scattering and/or viscometry can be used online with SEC to yield absolute molecular weights that do not rely on calibration with standards of known molecular weight. Due to the difference in size of two polymers with identical molecular weights, the absolute determination methods are, in general, more desirable. A typical SEC system can quickly (in about half an hour) give polymer chemists information on the size and polydispersity of the sample. The preparative SEC can be used for polymer fractionation on an analytical scale. In SEC, mass is not measured so much as the hydrodynamic volume of the polymer molecules, that is, how much space a particular polymer molecule takes up when it is in solution. However, the approximate molecular weight can be calculated from SEC data because the exact relationship between molecular weight and hydrodynamic volume for polystyrene can be found. For this, polystyrene is used as a standard. But the relationship between hydrodynamic volume and molecular weight is not the same for all polymers, so only an approximate measurement can be obtained. [ 21 ] Another drawback is the possibility of interaction between the stationary phase and the analyte. Any interaction leads to a later elution time and thus mimics a smaller analyte size. When performing this method, the bands of the eluting molecules may be broadened. This can occur by turbulence caused by the flow of the mobile phase molecules passing through the molecules of the stationary phase. In addition, molecular thermal diffusion and friction between the molecules of the glass walls and the molecules of the eluent contribute to the broadening of the bands. Besides broadening, the bands also overlap with each other. As a result, the eluent usually gets considerably diluted. A few precautions can be taken to prevent the likelihood of the bands broadening. For instance, one can apply the sample in a narrow, highly concentrated band on the top of the column. The more concentrated the eluent is, the more efficient the procedure would be. However, it is not always possible to concentrate the eluent, which can be considered as one more disadvantage. [ 9 ] Absolute size-exclusion chromatography (ASEC) is a technique that couples a light scattering instrument, most commonly multi-angle light scattering (MALS) or another form of static light scattering (SLS), but possibly a dynamic light scattering (DLS) instrument, to a size-exclusion chromatography system for absolute molar mass and/or size measurements of proteins and macromolecules as they elute from the chromatography system. [ 22 ] The definition of “absolute” in this case is that calibration of retention time on the column with a set of reference standards is not required to obtain molar mass or the hydrodynamic size, often referred to as hydrodynamic diameter (D H in units of nm). Non-ideal column interactions, such as electrostatic or hydrophobic surface interactions that modulate retention time relative to standards, do not impact the final result. Likewise, differences between conformation of the analyte and the standard have no effect on an absolute measurement; for example, with MALS analysis, the molar mass of inherently disordered proteins are characterized accurately even though they elute at much earlier times than globular proteins with the same molar mass, and the same is true of branched polymers which elute late compared to linear reference standards with the same molar mass. 
[ 22 ] [ 23 ] [ 24 ] Another benefit of ASEC is that the molar mass and/or size is determined at each point in an eluting peak, and therefore indicates homogeneity or polydispersity within the peak. For example, SEC-MALS analysis of a monodisperse protein will show that the entire peak consists of molecules with the same molar mass, something that is not possible with standard SEC analysis. Determination of molar mass with SLS requires combining the light scattering measurements with concentration measurements. Therefore SEC-MALS typically includes the light scattering detector and either a differential refractometer or UV/Vis absorbance detector. In addition, MALS determines the rms radius R g of molecules above a certain size limit, typically 10 nm. SEC-MALS can therefore analyze the conformation of polymers via the relationship of molar mass to R g . For smaller molecules, either DLS or, more commonly, a differential viscometer is added to determine hydrodynamic radius and evaluate molecular conformation in the same manner. In SEC-DLS, the sizes of the macromolecules are measured as they elute into the flow cell of the DLS instrument from the size exclusion column set. The hydrodynamic size of the molecules or particles are measured and not their molecular weights. For proteins a Mark-Houwink type of calculation can be used to estimate the molecular weight from the hydrodynamic size. A major advantage of DLS coupled with SEC is the ability to obtain enhanced DLS resolution. [ 25 ] Batch DLS is quick and simple and provides a direct measure of the average size, but the baseline resolution of DLS is a ratio of 3:1 in diameter. Using SEC, the proteins and protein oligomers are separated, allowing oligomeric resolution. Aggregation studies can also be done using ASEC. Though the aggregate concentration may not be calculated with light scattering (an online concentration detector such as that used in SEC-MALS for molar mass measurement also determines aggregate concentration), the size of the aggregate can be measured, only limited by the maximum size eluting from the SEC columns. Limitations of ASEC with DLS detection include flow-rate, concentration, and precision. Because a correlation function requires anywhere from 3–7 seconds to properly build, a limited number of data points can be collected across the peak. ASEC with SLS detection is not limited by flow rate and measurement time is essentially instantaneous, and the range of concentration is several orders of magnitude larger than for DLS. However, molar mass analysis with SEC-MALS does require accurate concentration measurements. MALS and DLS detectors are often combined in a single instrument for more comprehensive absolute analysis following separation by SEC.
https://en.wikipedia.org/wiki/Size-exclusion_chromatography
In quantum chemistry , size consistency and size extensivity are concepts relating to how the behaviour of quantum-chemistry calculations changes with the system size. Size consistency (or strict separability ) is a property that guarantees the consistency of the energy behaviour when interaction between the involved molecular subsystems is nullified (for example, by distance). Size extensivity , introduced by Bartlett, is a more mathematically formal characteristic which refers to the correct (linear) scaling of a method with the number of electrons. [ 1 ] Let A and B be two non-interacting systems. If a given theory for the evaluation of the energy is size-consistent, then the energy of the supersystem A + B, separated by a sufficiently large distance such that there is essentially no shared electron density, is equal to the sum of the energy of A plus the energy of B taken by themselves: E ( A + B ) = E ( A ) + E ( B ) . {\displaystyle E(A+B)=E(A)+E(B).} This property of size consistency is of particular importance to obtain correctly behaving dissociation curves . Others have more recently argued that the entire potential energy surface should be well-defined. [ 2 ] Size consistency and size extensivity are sometimes used interchangeably in the literature. However, there are very important distinctions to be made between them. [ 3 ] Hartree–Fock (HF), coupled cluster , many-body perturbation theory (to any order), and full configuration interaction (FCI) are size-extensive but not always size-consistent. For example, the restricted Hartree–Fock model is not able to correctly describe the dissociation curves of H 2 , and therefore all post-HF methods that employ HF as a starting point will fail in that matter (so-called single-reference methods). Sometimes numerical errors can cause a method that is formally size-consistent to behave in a non-size-consistent manner. [ 4 ] Core extensivity is yet another related property, which extends the requirement to the proper treatment of excited states. [ 5 ] This quantum chemistry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Size_consistency_and_size_extensivity
According to the classical theories of elastic or plastic structures made from a material with non- random strength ( f t ), the nominal strength ( σ N ) of a structure is independent of the structure size ( D ) when geometrically similar structures are considered. [ 1 ] Any deviation from this property is called the size effect . For example, conventional strength of materials predicts that a large beam and a tiny beam will fail at the same stress if they are made of the same material. In the real world, because of size effects, a larger beam will fail at a lower stress than a smaller beam. The structural size effect concerns structures made of the same material, with the same microstructure . It must be distinguished from the size effect of material inhomogeneities, particularly the Hall-Petch effect , which describes how the material strength increases with decreasing grain size in polycrystalline metals . The size effect can have two causes: The statistical size effect occurs for a broad class of brittle structures that follow the weakest-link model. This model means that macro-fracture initiation from one material element, or more precisely one representative volume element (RVE), causes the whole structure to fail, like the failure of one link in a chain (Fig. 1a). Since the material strength is random, the strength of the weakest material element in the structure (Fig. 1a) is likely to decrease with increasing structure size D {\displaystyle D} (as noted already by Mariotte in 1684). Denoting the failure probabilities of structure as P f {\displaystyle P_{f}} and of one RVE under stress σ k {\displaystyle \sigma _{k}} as P 1 ( σ k ) {\displaystyle P_{1}(\sigma _{k})} , and noting that the survival probability of a chain is the joint probability of survival of all its N {\displaystyle N} links, one readily concludes that The key is the left tail of the distribution of P 1 ( σ k ) {\displaystyle P_{1}(\sigma _{k})} . It was not successfully identified until Weibull in 1939 recognized that the tail is a power law. Denoting the tail exponent as m {\displaystyle m} , one can then show that, if the structure is sufficiently larger than one RVE (i.e., if N/l 0 → ∞ ), the failure probability of a structure as a function of σ N {\displaystyle \sigma _{N}} is Eq. 2 is the cumulative Weibull distribution with scale parameter S 0 {\displaystyle S_{0}} and shape parameter m {\displaystyle m} ; Ψ = ∫ V [ σ ^ ( ξ ) ] m d V ( ξ ) {\displaystyle \Psi =\int _{V}\left[{\hat {\sigma }}(\xi )\right]^{m}\,{\mbox{d}}V(\xi )} = constant factor depending on the structure geometry, V {\displaystyle V} = structure volume; ξ {\displaystyle \xi } = relative (size-independent) coordinate vectors, σ ^ ( ξ ) {\displaystyle {\hat {\sigma }}(\xi )} = dimensionless stress field (dependent on geometry), scaled so that the maximum stress be 1; n d {\displaystyle n_{d}} = number of spatial dimensions ( n d {\displaystyle n_{d}} = 1, 2 or 3); l 0 {\displaystyle l_{0}} = material characteristic length representing the effective size of the RVE (typically about 3 inhomogeneity sizes). The RVE is here defined as the smallest material volume whose failure suffices to make the whole structure fail. 
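The displayed equations referred to above as Eq. 1 and Eq. 2 did not survive in the extracted text. A commonly quoted form of the weakest-link relation and of the resulting Weibull distribution, given here as a reconstruction consistent with the symbols defined in the text rather than a verbatim restoration, is:

```latex
% Weakest-link (chain) model: the structure survives only if all N RVEs survive
1 - P_f(\sigma_N) \;=\; \prod_{k=1}^{N}\bigl[\,1 - P_1(\sigma_k)\,\bigr]
\qquad\text{(Eq. 1)}

% Large-structure limit: two-parameter cumulative Weibull distribution
P_f(\sigma_N) \;=\; 1 - \exp\!\Bigl[-\,N_{eq}\Bigl(\tfrac{\sigma_N}{S_0}\Bigr)^{m}\Bigr],
\qquad N_{eq} = \Bigl(\tfrac{D}{l_0}\Bigr)^{n_d}\,\Psi
\qquad\text{(Eq. 2)}
```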
From experience, the structure is sufficiently larger than one RVE if the equivalent number N e q {\displaystyle N_{eq}} of RVEs in the structure is larger than about 10 4 {\displaystyle 10^{4}} ; N e q = ( D / l 0 ) n d Ψ {\displaystyle N_{eq}=(D/l_{0})^{n_{d}}\Psi } = number of RVEs giving the same P f {\displaystyle P_{f}} if the stress field is homogeneous (always N e q < N {\displaystyle N_{eq}<N} , and usually N e q ≪ N {\displaystyle N_{eq}\ll N} ). For most normal-scale applications to metals and fine-grained ceramics, except for micrometer scale devices, the size is large enough for the Weibull theory to apply (but not for coarse-grained materials such as concrete). From Eq. 2 one can show that the mean strength and the coefficient of variation of strength are obtained as follows: (where Γ {\displaystyle \Gamma } is the gamma function) The first equation shows that the size effect on the mean nominal strength is a power function of size D {\displaystyle D} , regardless of structure geometry. Weibull parameter m {\displaystyle m} can be experimentally identified by two methods: 1) The values of σ N {\displaystyle \sigma _{N}} measured on many identical specimens are used to calculate the coefficient of variation of strength, and the value of m {\displaystyle m} then follows by solving Eq. (4); or 2) the values of σ ¯ N {\displaystyle {\bar {\sigma }}_{N}} are measured on geometrically similar specimens of several different sizes D {\displaystyle D} and the slope of their linear regression in the plot of log ⁡ σ ¯ N {\displaystyle \log {\bar {\sigma }}_{N}} versus log ⁡ D {\displaystyle \log D} gives − 1 / m {\displaystyle -1/m} . Method 1 must give the same result for different sizes, and method 2 the same as method 1. If not, the size effect is partly or totally non-Weibullian. Omission of testing for different sizes has often led to incorrect conclusions. Another check is that the histogram of the strengths of many identical specimens must be a straight line when plotted in the Weibull scale. A deviation to the right at high strength range means that N e q {\displaystyle N_{eq}} is too small and the material quasibrittle. The fact that the Weibull size effect is a power law means that it is self-similar, i.e., no characteristic structure size D 0 {\displaystyle D_{0}} exists, and l 0 {\displaystyle l_{0}} and material inhomogeneities are negligible compared to D {\displaystyle D} . This is the case for fatigue-embrittled metals or fine-grained ceramics except on the micrometer scale. The existence of a finite D 0 {\displaystyle D_{0}} is a salient feature of the energetic size effect, discovered in 1984. This kind of size effect represents a transition between two power laws and is observed in brittle heterogenous materials, termed quasibrittle. These materials include concrete, fiber composites, rocks, coarse-grained and toughened ceramics, rigid foams, sea ice, dental ceramics, dentine, bone, biological shells, many bio- and bio-inspired materials, masonry, mortar, stiff cohesive soils, grouted soils, consolidated snow, wood, paper, carton, coal, cemented sands, etc. On the micro- or nano scale, all the brittle materials become quasibrittle, and thus must exhibit the energetic size effect. 
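Equations 3 and 4, giving the mean strength and the coefficient of variation of the Weibull statistical size effect discussed earlier in this passage, are likewise missing from the extracted text. For a strength following the distribution of Eq. 2, the standard expressions, offered here as a plausible reconstruction, are:

```latex
\bar{\sigma}_N \;=\; S_0\,\Gamma\!\bigl(1 + \tfrac{1}{m}\bigr)\,N_{eq}^{-1/m}
\qquad\text{(Eq. 3)}

\omega_N \;=\; \sqrt{\frac{\Gamma\!\bigl(1 + \tfrac{2}{m}\bigr)}{\Gamma^{2}\!\bigl(1 + \tfrac{1}{m}\bigr)} - 1}
\qquad\text{(Eq. 4)}
```

Note that the coefficient of variation in Eq. 4 depends only on m, which is why measuring it on many identical specimens (method 1) yields m, while the factor N_eq^{-1/m} in Eq. 3 carries the power-law size dependence exploited in method 2.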
A pronounced energetic size effect occurs in shear, torsional and punching failures of reinforced concrete, in pullout of anchors from concrete, in compression failure of slender reinforced concrete columns and prestressed concrete beams, in compression and tensile failures of fiber-polymer composites and sandwich structures, and in the failures of all the aforementioned quasibrittle materials. One may distinguish two basic types of this size effect. When the macro-crack initiates from one RVE whose size is not negligible compared to the structure size, the deterministic size effect dominates over the statistical size effect. What causes the size effect is a stress redistribution in the structure (Fig. 2c) due to damage in the initiating RVE, which is typically located at fracture surface. A simple intuitive justification of this size effect may be given by considering the flexural failure of an unnotched simply supported beam under a concentrated load P {\displaystyle P} at midspan (Fig. 2d). Due to material heterogeneity, what decides the maximum load P {\displaystyle P} is not the elastically calculated stress σ 1 = M c / I = 3 P L / 2 b D 2 {\displaystyle \sigma _{1}=Mc/I=3PL/2bD^{2}} at the tensile face, where M = P L / 4 {\displaystyle M=PL/4} = bending moment, D {\displaystyle D} = beam depth, c = D / 2 , I = b h 3 / 12 {\displaystyle c=D/2,\;I=bh^{3}/12} and b {\displaystyle b} = beam width. Rather, what decides is the stress value σ ¯ {\displaystyle {\bar {\sigma }}} roughly at distance l 0 / 2 {\displaystyle l_{0}/2} from the tensile face, which is at the middle of FPZ (2c). Noting that σ ¯ {\displaystyle {\bar {\sigma }}} = σ 1 − σ n ′ l 0 / 2 {\displaystyle \sigma _{1}-\sigma '_{n}l_{0}/2} , where σ n ′ {\displaystyle \sigma '_{n}} = stress gradient = 2 σ 1 / D {\displaystyle 2\sigma _{1}/D} and σ ¯ = f t ′ {\displaystyle {\bar {\sigma }}=f'_{t}} = intrinsic tensile strength of the material, and considering the failure condition σ ¯ {\displaystyle {\bar {\sigma }}} = f t ′ {\displaystyle f'_{t}} , one gets P / b D = σ N {\displaystyle P/bD=\sigma _{N}} = σ 0 / ( 1 − D b / D ) {\displaystyle \sigma _{0}/(1-D_{b}/D)} where σ 0 = ( 2 D / 3 L ) f t ′ {\displaystyle \sigma _{0}=(2D/3L)f'_{t}} , which is a constant because for geometrically similar beams L / D {\displaystyle L/D} = constant. This expression is valid only for small enough l 0 / D {\displaystyle l_{0}/D} , and so (according to the first two terms of the binomial expansion) one may approximate it as which is the law of Type 1 deterministic size effect (Fig. 2a). The purpose of the approximation made is: (a) to prevent σ N {\displaystyle \sigma _{N}} from becoming negative for very small D {\displaystyle D} , for which the foregoing argument does not apply; and (b) to satisfy the asymptotic condition that the deterministic size effect must vanish for D / l 0 → ∞ {\displaystyle D/l_{0}\to \infty } . Here r {\displaystyle r} = positive empirical constant; the values r = 1 {\displaystyle r=1} = or 2 have been used for concrete, while r ≈ 1.45 {\displaystyle r\approx 1.45} is optimum according to the existing test data from the literature (Fig. 2d). A fundamental derivation of Eq. 5 for a general structural geometry has been given by applying dimensional analysis and asymptotic matching to the limit case of energy release when the initial macro-crack length tends to zero. For general structures, the following effective size may be substituted in Eq. 
(5): where ϵ n ′ {\displaystyle \epsilon '_{n}} = strain gradient at the maximum strain point located at the surface, in the direction normal to the surface. Eq. 5 cannot apply for large sizes because it approaches for D → ∞ {\displaystyle D\to \infty } a horizontal asymptote. For large sizes, σ N {\displaystyle \sigma _{N}} must approach the Weibull statistical size effect, Eq. 3. This condition is satisfied by the generalized energetic-statistical size effect law: where r , m {\displaystyle r,m} are empirical constants ( r n d / m < 1 {\displaystyle rn_{d}/m<1} ). The deterministic formula (5) is recovered as the limit case for m → ∞ {\displaystyle m\rightarrow \infty } . (Fig. 2d) shows a comparison of the last formula with the test results for many different concretes, plotted as dimensionless strength σ N / f t ′ {\displaystyle \sigma _{N}/f'_{t}} versus dimensionless structure size D / D 0 {\displaystyle D/D_{0}} . The probabilistic theory of Type 1 size effect can be derived from fracture nano-mechanics. Kramer's transition rate theory shows that, on the nano-scale, the far-left tail of the probability distribution of nano-scale strength s {\displaystyle s} is a power law of the type s 2 {\displaystyle s^{2}} . Analysis of the multiscale transition to the material macro-scale then shows that the RVE strength distribution is Gaussian but with a Weibull (or power-law) left tail whose exponent m {\displaystyle m} is much larger than 2 and is grafted roughly at the probability of about 0.001. For structures with N e q < 10 4 {\displaystyle N_{eq}<10^{4}} , which are common for quasibrittle materials, the Weibull theory does not apply. But the underlying weakest-link model, expressed by Eq. (1) for P f {\displaystyle P_{f}} , does, albeit with a finite N {\displaystyle N} , which is a crucial point. The finiteness of the weakest-link chain model causes major deviations from the Weibull distribution. As the structure size, measured by N e q {\displaystyle N_{eq}} , increases, the grafting point of the Weibullian left part moves to the right until, at about N e q = 10 4 {\displaystyle N_{eq}=10^{4}} , the entire distribution becomes Weibullian. The mean strength can be computed from this distribution and, as it turns out, its plot is identical with the plot of Eq. 5 seen in Fig. 2g. The point of deviation from the Weibull asymptote is determined by the location of the grafting point on the strength distribution of one RVE (Fig. 2g). Note that the finiteness of the chain in the weakest-link model captures the deterministic part of size effect. This theory has also been extended to the size effect on the Evans and Paris' laws of crack growth in quasibrittle materials, and to the size effect on the static and fatigue lifetimes. It appeared that the size effect on the lifetime is much stronger than it is on the short-time strength (tail exponent m {\displaystyle m} is an order-of-magnitude smaller). The strongest possible size effect occurs for specimens with similar deep notches (Fig. 4b), or for structures in which a large crack, similar for different sizes, forms stably before the maximum load is reached. Because the location of fracture initiation is predetermined to occur at the crack tip and thus cannot sample the random strengths of different RVEs, the statistical contribution to the mean size effect is negligible. Such behavior is typical of reinforced concrete, damaged fiber-reinforced polymers and some compressed unreinforced structures. 
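The Type 1 law (Eq. 5) and its energetic–statistical generalization are also referenced above but not reproduced. The forms usually quoted for them, written here as a reconstruction that matches the asymptotic behaviour described in the text (the deterministic law is recovered for m → ∞, and the Weibull power law for D → ∞), are:

```latex
\sigma_N \;=\; \sigma_0\Bigl(1 + \frac{r\,D_b}{D}\Bigr)^{1/r}
\qquad\text{(Eq. 5, deterministic Type 1 law)}

\sigma_N \;=\; \sigma_0\Bigl[\Bigl(\frac{D_b}{D}\Bigr)^{r n_d/m} + \frac{r\,D_b}{D}\Bigr]^{1/r}
\qquad\text{(generalized energetic--statistical law)}
```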
The energetic size effect may be intuitively explained by considering the panel in Fig. 1c,d, initially under a uniform stress equal to σ N {\displaystyle \sigma _{N}} . Introduction of a crack of length a {\displaystyle a} , with a damage zone of width h {\displaystyle h} at the tip, relieves the stress, and thus also the strain energy, from the shaded undamaged triangles of slope k {\displaystyle k} on the flanks of the crack. Then, if k {\displaystyle k} and a / D {\displaystyle a/D} are approximately the same for different sizes, the energy released from the shaded triangles is proportional to U ¯ D 2 {\displaystyle {\bar {U}}D^{2}} , while the energy dissipated by the fracture process is proportional to G f D {\displaystyle G_{f}D} ; here G f {\displaystyle G_{f}} = fracture energy of the material, U ¯ = σ N 2 / 2 E {\displaystyle {\bar {U}}=\sigma _{N}^{2}/2E} = energy density before fracture, and E {\displaystyle E} = Young's elastic modulus. The discrepancy between D {\displaystyle D} and D 2 {\displaystyle D^{2}} shows that a balance of energy release and dissipation rate can exist for every size D {\displaystyle D} only if σ N {\displaystyle \sigma _{N}} decreases with increasing D {\displaystyle D} . If the energy dissipated within the damage zone of width h {\displaystyle h} is added, one obtains the Bažant (1984) size effect law (Type 2): (Fig. 4c,d) where B , f t ′ , D 0 {\displaystyle B,f'_{t},D_{0}} = constants, where f t ′ {\displaystyle f'_{t}} = tensile strength of material, and B {\displaystyle B} accounts for the structure geometry. For more complex geometries such an intuitive derivation is not possible. However, dimensional analysis coupled with asymptotic matching showed that Eq. 8 is applicable in general, and that the dependence of its parameters on the structure geometry has approximately the following form: where c f ≈ {\displaystyle c_{f}\approx } half of the FPZ length, α 0 = a / D {\displaystyle \alpha _{0}=a/D} = relative initial crack length (which is constant for geometrically similar scaling); g ( α 0 ) = k 2 ( α 0 ) {\displaystyle g(\alpha _{0})=k^{2}(\alpha _{0})} = dimensionless energy release function of linear elastic fracture mechanics (LEFM), which brings about the effect of structure geometry; k ( α 0 ) = K ( α 0 ) b D / P {\displaystyle k(\alpha _{0})=K(\alpha _{0})b{\sqrt {D}}/P} , and K {\displaystyle K} = stress intensity factor. Fitting Eq. 8 to σ N {\displaystyle \sigma _{N}} data from tests of geometrically similar notched specimens of very different sizes is a good way to identify the G f {\displaystyle G_{f}} and c f {\displaystyle c_{f}} of the material. Numerical simulations of failure by finite element codes can capture the energetic (or deterministic) size effect only if the material law relating the stress to deformation possesses a characteristic length. This was not the case for the classical finite element codes with a material characterized solely by stress-strain relations. One simple enough computational method is the cohesive (or fictitious) crack model, in which it is assumed that the stress σ {\displaystyle \sigma } transmitted across a partially opened crack is a decreasing function of the crack opening w {\displaystyle w} , i.e., σ = f ( w ) {\displaystyle \sigma =f(w)} . The area under this function is G f {\displaystyle G_{f}} , and is the material characteristic length giving rise to the deterministic size effect. 
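The Type 2 law itself (Eq. 8) is commonly written as σN = B f′t (1 + D/D0)^(−1/2). The paragraph above notes that fitting it to maximum-load data from geometrically similar notched specimens is a practical way to identify fracture parameters; the sketch below shows such a fit with SciPy. The specimen data are invented for illustration, and the conversion to G_f and c_f through the LEFM energy-release function g(α0) is only indicated in a comment, since g depends on the particular specimen geometry.

```python
import numpy as np
from scipy.optimize import curve_fit

def type2_sel(D, Bft, D0):
    """Bazant's Type 2 size effect law: sigma_N = B*f't / sqrt(1 + D/D0)."""
    return Bft / np.sqrt(1.0 + D / D0)

# Hypothetical nominal strengths (MPa) of geometrically similar notched beams of
# depth D (mm); real data would come from peak loads, sigma_N = c_n * P_max / (b*D).
D       = np.array([ 40.0,  80.0, 160.0, 320.0, 640.0])
sigma_N = np.array([ 4.10,  3.55,  2.95,  2.35,  1.80])

(Bft, D0), _ = curve_fit(type2_sel, D, sigma_N, p0=(5.0, 100.0))
print(f"B*f't = {Bft:.2f} MPa, D0 = {D0:.1f} mm")

# With the dimensionless energy release function g(alpha0) and its derivative g'(alpha0)
# for the given geometry, the commonly used relations (stated here as an assumption,
# not derived from this text) give approximately
#   G_f = (B*f't)**2 * D0 * g(alpha0) / E
#   c_f = D0 * g(alpha0) / g'(alpha0)
```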
An even simpler method is the crack-band model, in which the cohesive crack is replaced in simulations by a crack band of width h {\displaystyle h} equal to one finite element size and a stress-strain relation that is softening in the cross-band direction as σ = f ^ ( ϵ ) {\displaystyle \sigma ={\hat {f}}(\epsilon )} where ϵ = w / h {\displaystyle \epsilon =w/h} = average strain in that direction. When h {\displaystyle h} needs to be adjusted, the softening stress strain relation is adjusted so as to maintain the correct energy dissipation G f {\displaystyle G_{f}} . A more versatile method is the nonlocal damage model in which the stress at a continuum point is a function not of the strain at that point but of the average of the strain field within a certain neighborhood of size h {\displaystyle h} centered at that point. Still another method is the gradient damage model in which the stress depends not only on the strain at that point but also on the gradient of strain. All these computational methods can ensure objectivity and proper convergence with respect to the refinement of the finite element mesh. The fractal properties of material, including the fractal aspect of crack surface roughness and the lacunar fractal aspect of pore structure, may have a role in the size effect in concrete, and may affect the fracture energy of material. However, the fractal properties have yet not been experimentally documented for a broad enough scale and the problem has not yet been studied in depth comparable to the statistical and energetic size effects. The main obstacle to the practical consideration of a fractal influence on the size effect is that, if calibrated for one structure geometry, it is not clear how infer the size effect for another geometry. The pros and cons were discussed, e.g., by Carpinteri et al. (1994, 2001) and Bažant and Yavari (2005). Taking the size effect into account is essential for safe prediction of strength of large concrete bridges, nuclear containments, roof shells, tall buildings, tunnel linings, large load-bearing parts of aircraft, spacecraft and ships made of fiber-polymer composites, wind turbines, large geotechnical excavations, earth and rock slopes, floating sea ice carrying loads, oil platforms under ice forces, etc. Their design depends on the material properties measured on much smaller laboratory specimens. These properties must be extrapolated to sizes greater by one or two orders of magnitude. Even if an expensive full-scale failure test, for example a failure test of the rudder of a very large aircraft, can be carried out, it is financially prohibitive to repeat it thousand times to obtain the statistical distribution of load capacity. Such statistical information, underlying the safety factors, is obtainable only by proper extrapolation of laboratory tests. The size effect is gaining in importance as larger and larger structures, of more and more slender forms, are being built. The safety factors, of course, give large safety margins—so large that even for the largest civil engineering structures the classical deterministic analysis based on the mean material properties normally yields failure loads smaller than the maximum design loads. For this reasons, the size effect on the strength in brittle failures of concrete structures and structural laminates has long been ignored. 
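Returning to the crack band model described earlier in this section, the energy regularization it relies on can be sketched in a few lines: for a linear softening law the fracture energy is the area under σ(w), G_f = f′t w_f / 2, and when the band (element) width h changes, the softening branch of the stress–strain law is rescaled so that h times the dissipated energy density stays equal to G_f. The numerical values and the linear softening shape below are illustrative assumptions.

```python
# Crack band regularization for a linear softening law (illustrative values).
f_t = 3.0e6        # tensile strength, Pa
G_f = 100.0        # fracture energy, J/m^2
E   = 30.0e9       # Young's modulus, Pa

w_f = 2.0 * G_f / f_t          # crack opening at zero stress (area under sigma(w) equals G_f)

def softening_strains(h):
    """Strain at the onset of softening and additional softening strain for band width h.

    The band strain is eps = w / h, so the ultimate softening strain w_f / h shrinks as the
    element grows, keeping the dissipated energy per unit crack area, h * f_t * eps_u / 2,
    equal to G_f.
    """
    eps_peak = f_t / E          # elastic strain at the onset of softening
    eps_u = w_f / h             # softening strain increment at complete failure
    return eps_peak, eps_u

for h in (0.01, 0.05, 0.20):    # element sizes in metres
    eps_peak, eps_u = softening_strains(h)
    dissipated = h * 0.5 * f_t * eps_u     # energy per unit crack area, J/m^2
    print(f"h = {h:5.2f} m  eps_u = {eps_u:.2e}  dissipated = {dissipated:.1f} J/m^2")
```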
Then, however, the failure probability, which is required to be < 10 − 6 {\displaystyle <10^{-6}} , and actually does have such values for normal-size structures, may become for very large structures as low as 10 − 3 {\displaystyle 10^{-3}} per lifetime. Such high failure probability is intolerable as it adds significantly to the risks to which people are inevitably exposed. In fact, the historical experience shows that very large structures have been failing at a frequency several orders of magnitude higher than smaller ones. The reason it has not led to public outcry is that the large structures are few. But for the locals, who must use the structures daily, the risk is not acceptable. Another application is the testing of the fracture energy and characteristic material length. For quasibrittle materials, measuring the size effect on the peak loads (and on the specimen softening after the peak load) is the simplest approach. Knowing the size effect is also important in the reverse sense—for micrometer scale devices if they are designed partly or fully on the basis of material properties measured more conveniently on the scale of 0.01m to 0.1m.
https://en.wikipedia.org/wiki/Size_effect_on_structural_strength
Size functions are shape descriptors, in a geometrical/topological sense. They are functions from the half-plane x < y {\displaystyle x<y} to the natural numbers, counting certain connected components of a topological space . They are used in pattern recognition and topology . In size theory , the size function ℓ ( M , φ ) : Δ + = { ( x , y ) ∈ R 2 : x < y } → N {\displaystyle \ell _{(M,\varphi )}:\Delta ^{+}=\{(x,y)\in \mathbb {R} ^{2}:x<y\}\to \mathbb {N} } associated with the size pair ( M , φ : M → R ) {\displaystyle (M,\varphi :M\to \mathbb {R} )} is defined in the following way. For every ( x , y ) ∈ Δ + {\displaystyle (x,y)\in \Delta ^{+}} , ℓ ( M , φ ) ( x , y ) {\displaystyle \ell _{(M,\varphi )}(x,y)} is equal to the number of connected components of the set { p ∈ M : φ ( p ) ≤ y } {\displaystyle \{p\in M:\varphi (p)\leq y\}} that contain at least one point at which the measuring function (a continuous function from a topological space M {\displaystyle M} to R k {\displaystyle \mathbb {R} ^{k}} [ 1 ] [ 2 ] ) φ {\displaystyle \varphi } takes a value smaller than or equal to x {\displaystyle x} . [ 3 ] The concept of size function can be easily extended to the case of a measuring function φ : M → R k {\displaystyle \varphi :M\to \mathbb {R} ^{k}} , where R k {\displaystyle \mathbb {R} ^{k}} is endowed with the usual partial order . [ 4 ] A survey about size functions (and size theory ) can be found in. [ 5 ] Size functions were introduced in [ 6 ] for the particular case of M {\displaystyle M} equal to the topological space of all piecewise C 1 {\displaystyle C^{1}} closed paths in a C ∞ {\displaystyle C^{\infty }} closed manifold embedded in a Euclidean space . Here the topology on M {\displaystyle M} is induced by the C 0 {\displaystyle C^{0}} -norm, while the measuring function φ {\displaystyle \varphi } takes each path γ ∈ M {\displaystyle \gamma \in M} to its length. In [ 7 ] the case of M {\displaystyle M} equal to the topological space of all ordered k {\displaystyle k} -tuples of points in a submanifold of a Euclidean space is considered. Here the topology on M {\displaystyle M} is induced by the metric d ( ( P 1 , … , P k ) , ( Q 1 … , Q k ) ) = max 1 ≤ i ≤ k ‖ P i − Q i ‖ {\displaystyle d((P_{1},\ldots ,P_{k}),(Q_{1}\ldots ,Q_{k}))=\max _{1\leq i\leq k}\|P_{i}-Q_{i}\|} . An extension of the concept of size function to algebraic topology was made in [ 2 ] where the concept of size homotopy group was introduced. Here measuring functions taking values in R k {\displaystyle \mathbb {R} ^{k}} are allowed. An extension to homology theory (the size functor ) was introduced in . [ 8 ] The concepts of size homotopy group and size functor are strictly related to the concept of persistent homology group [ 9 ] studied in persistent homology . It is worth to point out that the size function is the rank of the 0 {\displaystyle 0} -th persistent homology group , while the relation between the persistent homology group and the size homotopy group is analogous to the one existing between homology groups and homotopy groups . Size functions have been initially introduced as a mathematical tool for shape comparison in computer vision and pattern recognition , and have constituted the seed of size theory . [ 3 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] The main point is that size functions are invariant for every transformation preserving the measuring function . 
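In computational practice the space M is replaced by a finite approximation, for example a graph whose vertices carry the values of the measuring function. The sketch below follows the definition above directly on such a graph: ℓ(x, y) counts the connected components of the sublevel set {φ ≤ y} that contain at least one vertex with φ ≤ x. The particular graph, the measuring function, and the use of the networkx library are illustrative choices, not part of the original formulation.

```python
import networkx as nx

def size_function(G, phi, x, y):
    """Discrete size function l_(G, phi)(x, y) for x < y.

    Counts connected components of the subgraph induced by {v : phi[v] <= y}
    that contain at least one vertex with phi[v] <= x.
    """
    sublevel = [v for v in G.nodes if phi[v] <= y]
    H = G.subgraph(sublevel)
    return sum(
        1 for comp in nx.connected_components(H)
        if any(phi[v] <= x for v in comp)
    )

# Toy example: a closed curve sampled at 8 points, with phi the "height" of each sample.
G = nx.cycle_graph(8)
phi = {0: 0.0, 1: 1.0, 2: 2.0, 3: 1.0, 4: 0.5, 5: 1.5, 6: 2.5, 7: 1.2}

print(size_function(G, phi, x=0.6, y=1.4))   # -> 2: both components of {phi <= 1.4} dip below 0.6
```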
Hence, they can be adapted to many different applications, by simply changing the measuring function in order to get the wanted invariance. Moreover, size functions show properties of relative resistance to noise, depending on the fact that they distribute the information all over the half-plane Δ + {\displaystyle \Delta ^{+}} . Assume that M {\displaystyle M} is a compact locally connected Hausdorff space . The following statements hold: If we also assume that M {\displaystyle M} is a smooth closed manifold and φ {\displaystyle \varphi } is a C 1 {\displaystyle C^{1}} -function, the following useful property holds: A strong link between the concept of size function and the concept of natural pseudodistance d ( ( M , φ ) , ( N , ψ ) ) {\displaystyle d((M,\varphi ),(N,\psi ))} between the size pairs ( M , φ ) , ( N , ψ ) {\displaystyle (M,\varphi ),\ (N,\psi )} exists. [ 1 ] [ 19 ] The previous result gives an easy way to get lower bounds for the natural pseudodistance and is one of the main motivation to introduce the concept of size function. An algebraic representation of size functions in terms of collections of points and lines in the real plane with multiplicities, i.e. as particular formal series, was furnished in [ 1 ] [ 20 ] . [ 21 ] The points (called cornerpoints ) and lines (called cornerlines ) of such formal series encode the information about discontinuities of the corresponding size functions, while their multiplicities contain the information about the values taken by the size function. Formally: This representation contains the same amount of information about the shape under study as the original size function does, but is much more concise. This algebraic approach to size functions leads to the definition of new similarity measures between shapes, by translating the problem of comparing size functions into the problem of comparing formal series. The most studied among these metrics between size function is the matching distance . [ 3 ]
https://en.wikipedia.org/wiki/Size_function
In mathematics , size theory studies the properties of topological spaces endowed with R k {\displaystyle \mathbb {R} ^{k}} -valued functions , with respect to the change of these functions. More formally, the subject of size theory is the study of the natural pseudodistance between size pairs . A survey of size theory can be found in . [ 1 ] The beginning of size theory is rooted in the concept of size function , introduced by Frosini. [ 2 ] Size functions have been initially used as a mathematical tool for shape comparison in computer vision and pattern recognition . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] An extension of the concept of size function to algebraic topology was made in the 1999 Frosini and Mulazzani paper [ 11 ] where size homotopy groups were introduced, together with the natural pseudodistance for R k {\displaystyle \mathbb {R} ^{k}} -valued functions. An extension to homology theory (the size functor ) was introduced in 2001. [ 12 ] The size homotopy group and the size functor are strictly related to the concept of persistent homology group [ 13 ] studied in persistent homology . It is worth to point out that the size function is the rank of the 0 {\displaystyle 0} -th persistent homology group, while the relation between the persistent homology group and the size homotopy group is analogous to the one existing between homology groups and homotopy groups . In size theory, size functions and size homotopy groups are seen as tools to compute lower bounds for the natural pseudodistance . Actually, the following link exists between the values taken by the size functions ℓ ( N , ψ ) ( x ¯ , y ¯ ) {\displaystyle \ell _{(N,\psi )}({\bar {x}},{\bar {y}})} , ℓ ( M , φ ) ( x ~ , y ~ ) {\displaystyle \ell _{(M,\varphi )}({\tilde {x}},{\tilde {y}})} and the natural pseudodistance d ( ( M , φ ) , ( N , ψ ) ) {\displaystyle d((M,\varphi ),(N,\psi ))} between the size pairs ( M , φ ) , ( N , ψ ) {\displaystyle (M,\varphi ),\ (N,\psi )} , [ 14 ] [ 15 ] An analogous result holds for size homotopy group . [ 11 ] The attempt to generalize size theory and the concept of natural pseudodistance to norms that are different from the supremum norm has led to the study of other reparametrization invariant norms. [ 16 ]
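The inequality linking size-function values to the natural pseudodistance is referenced above but not reproduced in the extracted text. A frequently cited consequence of that link, stated here as an assumption about the intended content rather than a quotation, is that the matching distance between two size functions bounds the natural pseudodistance from below:

```latex
d_{\mathrm{match}}\bigl(\ell_{(M,\varphi)},\,\ell_{(N,\psi)}\bigr)
\;\le\;
d\bigl((M,\varphi),\,(N,\psi)\bigr)
```

In this reading, any computable dissimilarity between the two size functions certifies a minimum amount of deformation needed to match the measuring functions.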
https://en.wikipedia.org/wiki/Size_theory
SkQ is a class of mitochondria -targeted antioxidants , developed by Professor Vladimir Skulachev and his team. In a broad sense, SkQ is a lipophilic cation linked via a saturated hydrocarbon chain to an antioxidant . Due to its lipophilic properties, SkQ can effectively penetrate through various cell membranes . The positive charge provides directed transport of the whole molecule, including the antioxidant moiety, into the negatively charged mitochondrial matrix. Substances of this type, various drugs based on them, and methods of their use are patented in Russia and other countries such as the United States, China and Japan, and in Europe. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Sometimes the term SkQ is used in a narrow sense to denote a cationic derivative of the plant antioxidant plastoquinone . In 1969, triphenylphosphonium (TPP, a charged derivative of triphenylphosphine) was proposed for such use for the first time. [ 5 ] This low-molecular-weight compound consists of a positively charged phosphorus atom surrounded by three hydrophobic phenyl groups, and it accumulates in mitochondria. In 1970, the use of TPP for targeted delivery of compounds to the mitochondrial matrix was proposed. In 1974, TPP, as well as its derivatives and other penetrating ions , were named "Skulachev's Ions" by the American biochemist David E. Green . [ 6 ] In 1999, the first work was published on the directed delivery to the mitochondria of the antioxidant alpha-tocopherol, linked by a hydrocarbon chain to TPP. The compound was named TPPB or MitoVitE. [ 7 ] Several years later, MitoQ, an improved mitochondria-targeted compound, was synthesized. Its antioxidant part is ubiquinone , linked to TPP by a 10-carbon aliphatic chain. [ 8 ] In the early 2000s, a group of researchers led by Prof. V. P. Skulachev at Moscow State University began the development of SkQ, a mitochondria-targeted antioxidant similar to MitoQ but with the ubiquinone replaced by plastoquinone (a more active analog of ubiquinone, derived from plant chloroplasts ). [ 9 ] Since 2005, several modified SkQ compounds have been synthesized and tested in vitro ; [ 10 ] [ 11 ] the efficiency and antioxidant effects of the tested compounds were hundreds of times higher than those of previous analogs. All of these compounds have abbreviated names derived from the name Skulachev (Sk) and a letter for quinone (Q), together with a symbol denoting the modification (a letter and/or number, for example, R1 for a derivative of rhodamine and plastoquinone). The largest amount of data was obtained for SkQ1 and SkQR1. [ 12 ] [ 13 ] Later, SkQ properties were tested in vitro on fibroblasts and in vivo in different organisms: mice , drosophilids , yeast , and many others. [ 14 ] It was found that SkQ is able to protect cells from death caused by oxidative stress and is effective as a treatment for age-related diseases in animals. [ 15 ] [ 16 ] Development of pharmaceuticals based on SkQ began in 2008. In 2012, the Ministry of Health of the Russian Federation approved the use of the eye drops "Visomitin", based on SkQ1, for the treatment of dry eye syndrome and early-stage cataract. [ 17 ] Testing of the efficacy of SkQ drugs against other diseases is currently in progress in both Russia and the United States. [ 18 ] [ 19 ] In 2016, phase 1 of a clinical trial of an oral drug containing SkQ1 was conducted in Russia.
[ 20 ] In 2017, it was found that SkQ has a strong antibacterial effect and is able to inhibit the activity of multidrug-resistance enzymes in bacteria. [ 21 ] [ 22 ] Since 2019, the Skulachev project has been developing mitochondrial antioxidants in several areas: synthesis and testing of new SkQ compounds, and testing of their effects on a variety of model systems and in different diseases. [ 23 ] An SkQ compound consists of three parts: an antioxidant, an aliphatic carbon linker and a lipophilic cation. Several SkQ compounds and structurally similar substances have been described. The lipophilic cation determines the efficiency of penetration through membranes into the mitochondrial matrix. The best properties are shown by SkQ compounds with the triphenylphosphonium ion (TPP): MitoQ, SkQ1, and others. Similar penetration efficiency was shown for compounds with rhodamine 19, such as SkQR1. Rhodamine is fluorescent, so its derivatives are used in the visualization of mitochondria. [ 24 ] The SkQ derivatives with acetylcarnitine (SkQ2M) or tributylammonium (SkQ4) as the lipophilic cation have weak penetrating properties. [ 25 ] Cations with well-known medicinal properties, berberine and palmatine, were also tested; the corresponding derivatives, SkQBerb and SkQPalm, do not differ much in properties from SkQ1 and SkQR1. [ 26 ] In SkQ compounds, a decamethylene linker (an aliphatic chain of 10 carbon atoms) is used. Shortening the chain worsens the penetrating ability of the ions; SkQ5, with a pentamethylene linker, is an example. [ 27 ] Molecular dynamics simulations of SkQ1 in the membrane have shown that a linker length of 10 carbon atoms is optimal for its antioxidant properties: the quinone residue is then located right next to the C9 or C13 atoms of the membrane fatty acids that have to be protected from oxidative damage. [ 28 ] Compounds without the antioxidant part are used as controls for the effects of SkQ compounds. For example, C 12 -TPP and C 12 R1 penetrate the mitochondria but do not inhibit oxidation . Interestingly, these compounds partially reproduce the positive effects of SkQ. This happens due to the phenomenon of soft depolarization (mild uncoupling) of the mitochondrial membrane. Compounds with tocopherol and ubiquinone are, for historical reasons, called MitoVitE and MitoQ, although formally they can be assigned to the class of SkQ compounds. MitoQ is traditionally used for comparison with the SkQ compounds. [ citation needed ] The highest antioxidant activity was shown for the compounds with thymoquinone (SkQT1 and SkQTK1). Thymoquinone is a derivative of plastoquinone, but with one methyl substituent in the aromatic ring. Next in the sequence of antioxidant activity is plastoquinone (SkQ1 and SkQR1), with two methyl substituents. SkQ3, with three methyl substituents, is a less active compound. SkQB, without methyl substituents, exhibits the weakest antioxidant properties. In general, SkQ-like compounds can be arranged by their antioxidant activity as follows: SkQB < MitoQ < DMMQ ≈ SkQ3 < SkQ1 < SkQT. [ 29 ] The positive effects of SkQ are associated with several of its properties. [ citation needed ] Due to its lipophilic properties, SkQ substances can penetrate the lipid bilayer . Transport is driven by the membrane electrical potential, owing to the positive charge of SkQ. Mitochondria are the only intracellular organelles with a negative internal charge; therefore, SkQ effectively penetrates them and accumulates there.
[ citation needed ] The accumulation coefficient can be estimated using the Nernst equation . To do this, we must take into account that the potential of the plasma membrane of the cell is about 60 mV (the cytoplasm has a negative charge), and the potential of the mitochondrial membrane is about 180 mV (the matrix has a negative charge). As a result, the electric gradient SkQ between the extracellular medium and the mitochondrial matrix is 10 4 . It should also be taken into account that SkQ has a high coefficient of distribution between lipid and water , about 10 4 . Taking this into account, the total concentration gradient of SkQ inside the inner layer of the inner mitochondria membrane can be up to 10 8 . [ 30 ] Oxidation of organic substances by ROS is a chain process. Several types of active free radicals — peroxide (RO 2 * ), alkoxyl (RO*), alkyl (R*), and ROS ( superoxide anion , singlet oxygen), participate in these chain reactions. One of the main targets of ROS — cardiolipin , polyunsaturated phospholipid of the inner membrane of mitochondria, which is especially sensitive to peroxidation. After a radical attack on the C 11 atom of linoleic acid , cardiolipin forms peroxyl radical, which is stabilized at positions C 9 and C 13 due to its neighboring double bonds. The location of the SkQ1 in the mitochondrial membrane is that the plastoquinone residue is exactly near of C 9 or C 13 of cardiolipin (depending on the SkQ conformation). Thus, it can quickly and effectively quench the peroxyl radical of cardiolipin. [ 31 ] Another important property of SkQ is its recyclability. After ROS neutralization the SkQ antioxidant moiety is converted to its oxidized form (plastoquinone or semi-quinone). Then it can be quickly restored by the complex III of the respiratory chain . Thus, due to the functioning of the respiratory chain, SkQ exists mainly in a restored, active form. In some cases (for example, in experiments on the lifespan of Drosophila or plant models) compound C 12 -TPP (without the plastoquinone residue) can successfully substitute for SkQ1. [ 32 ] This phenomenon is explained by the fact that any hydrophobic compound with a delocalized positive charge is able to transfer anions of fatty acids from one side of the membrane to another, thus lowering the transmembrane potential. [ 33 ] This phenomenon is called uncoupling of respiration and ATP synthesis on the mitochondrial membrane. In the cell, this function is normally performed by uncoupling proteins (or UCPs, including thermogenin from brown fat adipocytes) and ATP/ADP antiporter. Weak depolarization of the membrane leads to a multiple reductions in the amount of ROS produced by mitochondria. [ 34 ] At high concentrations (micromolar and more) SkQ-compounds exhibit pro-oxidant properties stimulating ROS production. The advantage of SkQ1 is that the difference in concentrations between pro- and antioxidant activity is about 1000 fold. Experiments on mitochondria have shown that SkQ1 begins to exhibit antioxidant properties already at concentrations of 1 nM, and pro-oxidant properties at concentrations of about 1 μM. For comparison, this "concentration window" of MitoQ is only about 2-5 fold. The manifestation of antioxidant activity of MitoQ begins only with concentrations of 0.3 μM while it begins to demonstrate pro-oxidant effect at 0.6-1.0 μM. [ 35 ] In several experimental models (including experiments on laboratory animals ) SkQ1 and SkQR1 showed a pronounced anti-inflammatory effect. 
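The accumulation estimate sketched above can be reproduced with the Nernst relation, [SkQ]in/[SkQ]out = exp(zFΔψ/RT), which gives roughly one order of magnitude of accumulation per ~60 mV for a singly charged cation at body temperature. The script below is a back-of-the-envelope check using the membrane potentials quoted in the text; the potentials and the 10^4 lipid/water partition coefficient are the article's figures, not independently measured values.

```python
import math

F = 96485.0       # Faraday constant, C/mol
R = 8.314         # gas constant, J/(mol*K)
T = 310.0         # body temperature, K
z = 1             # charge of the SkQ cation

def nernst_ratio(delta_psi_mV):
    """Equilibrium in/out concentration ratio of a cation across a membrane."""
    return math.exp(z * F * (delta_psi_mV / 1000.0) / (R * T))

plasma = nernst_ratio(60.0)        # cytoplasm negative versus the extracellular medium
mito   = nernst_ratio(180.0)       # mitochondrial matrix negative versus the cytoplasm
electrical = plasma * mito         # ~10^4, as stated in the text

partition = 1.0e4                  # lipid/water partition coefficient quoted in the text
total = electrical * partition     # ~10^8 overall accumulation in the inner membrane

print(f"plasma {plasma:.0f}x, mitochondria {mito:.0f}x, "
      f"electrical {electrical:.1e}, total {total:.1e}")
```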
[ 36 ] SkQ1 and C 12 -TPP are substrates of ABC-transporters. The main function of these enzymes is the protection of cells from xenobiotics . Lipophilic cations compete with other substrates of these carriers and thus weaken the protection of cells from external influences. [ 37 ] SkQ is able to delay the development of several traits of aging and increase the life span in a variety of animals. Depending on the type of SkQ molecule, the substance may reduce early mortality, increase life expectancy and extend the maximum age of experimental animals. [ 38 ] Also in various experiments, SkQ has slowed down the development of several age-dependent pathologies and signs of aging. [ 39 ] [ 40 ] It was shown that SkQ accelerates wound healing , [ 41 ] as well as treats age-related diseases such as osteoporosis , cataracts , retinopathy , and others. [ 42 ] At the end of 2008, preparations for the official approval of SkQ-based pharmaceuticals in Russia has started. The efficiency of eye drops against " dry eye syndrome " was also confirmed in the following double-blind placebo -controlled studies: (a) international multicenter study in Russia and Ukraine , [ 43 ] phase II study in the United States. [ 44 ] A clinical study on patients with age-related cataracts was also successfully conducted. In Russia in 2019 clinical studies are in progress for two improved versions of SkQ1-based eye drops – Visomitin Forte (phase II study on patients with age-related macular degeneration ) [ 45 ] and Visomitin Ultra (Phase I clinical study). [ 46 ] In 2018-2021, both attempts at Phase III clinical trials in the United States failed to show any statistically significant results among 452 (VISTA-1/NCT03764735) [ 47 ] and 610 (VSTA-2/NCT04206020) [ 48 ] participants respectively. SkQ1 is included in the composition of cosmetic products such as Mitovitan Active, Mitovitan, and Exomitin. [ 49 ] [ 50 ] The drug "Visomitin" on the basis of SkQ1 used in veterinary practice for the treatment of ophthalmologic diseases in pets. In particular, the effectiveness is shown for the treatment of retinopathy in dogs , cats , and horses . [ 51 ] Experiments have shown an unexpected effect of SkQ on plants. The substance stimulated differentiation (in the treatment of callus ) and seed germination (patent US 8,557,733), increased the yield of different crops (Ph.D. thesis of A.I. Uskov). [ 52 ]
https://en.wikipedia.org/wiki/SkQ
The Skattebøl rearrangement is an organic reaction for converting a geminal dihalocyclopropane to an allene using an organolithium base. [ 1 ] [ 2 ] This rearrangement reaction is named after its discoverer, Lars Skattebøl , Professor emeritus at the University of Oslo . It proceeds through a carbene reaction intermediate . When the cyclopropane ring is fitted with a 2-vinyl group , a cyclopentadiene is formed through a so-called foiled carbene intermediate. [ 3 ] [ 4 ] This process is more generally known as a vinylcyclopropane rearrangement . The reaction is closely related to the earlier Doering–LaFlamme procedure ( Doering–LaFlamme allene synthesis ), in which a gem -dibromocyclopropane is treated with an alkali metal to form the same cyclopropylidene intermediate. [ citation needed ]
https://en.wikipedia.org/wiki/Skattebøl_rearrangement
The skeletal formula , line-angle formula , bond-line formula or shorthand formula of an organic compound is a type of minimalist structural formula representing a molecule 's atoms , bonds and some details of its geometry . The lines in a skeletal formula represent bonds between carbon atoms, unless labelled with another element. [ 1 ] Labels are optional for carbon atoms, and the hydrogen atoms attached to them. An early form of this representation was first developed by organic chemist August Kekulé , while the modern form is closely related to and influenced by the Lewis structure of molecules and their valence electrons. Hence they are sometimes termed Kekulé structures [ a ] or Lewis–Kekulé structures . Skeletal formulas have become ubiquitous in organic chemistry , partly because they are relatively quick and simple to draw, and also because the curved arrow notation used for discussions of reaction mechanisms and electron delocalization can be readily superimposed. Several other ways of depicting chemical structures are also commonly used in organic chemistry (though less frequently than skeletal formulae). For example, conformational structures look similar to skeletal formulae and are used to depict the approximate positions of atoms in 3D space, as a perspective drawing. Other types of representation, such as Newman projection , Haworth projection or Fischer projection , also look somewhat similar to skeletal formulae. However, there are slight differences in the conventions used, and the reader needs to be aware of them in order to understand the structural details encoded in the depiction. While skeletal and conformational structures are also used in organometallic and inorganic chemistry , the conventions employed also differ somewhat. The skeletal structure of an organic compound is the series of atoms bonded together that form the essential structure of the compound. The skeleton can consist of chains, branches and/or rings of bonded atoms. Skeletal atoms other than carbon or hydrogen are called heteroatoms . [ 2 ] The skeleton has hydrogen and/or various substituents bonded to its atoms. Hydrogen is the most common non-carbon atom that is bonded to carbon and, for simplicity, is not explicitly drawn. In addition, carbon atoms are not generally labelled as such directly (i.e. with "C"), whereas heteroatoms are always explicitly noted as such ("N" for nitrogen , "O" for oxygen , etc.) Heteroatoms and other groups of atoms that give rise to relatively high rates of chemical reactivity, or introduce specific and interesting characteristics in the spectra of compounds are called functional groups , as they give the molecule a function. Heteroatoms and functional groups are collectively called "substituents", as they are considered to be a substitute for the hydrogen atom that would be present in the parent hydrocarbon of the organic compound. As in Lewis structures, covalent bonds are indicated by line segments, with a doubled or tripled line segment indicating double or triple bonding , respectively. Likewise, skeletal formulae indicate formal charges associated with each atom, with lone pairs usually being optional (see below ) . In fact, skeletal formulae can be thought of as abbreviated Lewis structures that observe the following simplifications: In the standard depiction of a molecule, the canonical form (resonance structure) with the greatest contribution is drawn. 
However, the skeletal formula is understood to represent the "real molecule" – that is, the weighted average of all contributing canonical forms. Thus, in cases where two or more canonical forms contribute with equal weight (e.g., in benzene, or a carboxylate anion) and one of the canonical forms is selected arbitrarily, the skeletal formula is understood to depict the true structure, containing equivalent bonds of fractional order, even though the delocalized bonds are depicted as nonequivalent single and double bonds. Since skeletal structures were introduced in the latter half of the 19th century, their appearance has undergone considerable evolution. The graphical conventions in use today date to the 1980s. Thanks to the adoption of the ChemDraw software package as a de facto industry standard (by American Chemical Society , Royal Society of Chemistry , and Gesellschaft Deutscher Chemiker publications, for instance), these conventions have been nearly universal in the chemical literature since the late 1990s. A few minor conventional variations, especially with respect to the use of stereobonds, continue to exist as a result of differing US, UK and European practice, or as a matter of personal preference. [ 3 ] As another minor variation between authors, formal charges can be shown with the plus or minus sign in a circle (⊕, ⊖) or without the circle. The set of conventions that are followed by most authors is given below, along with illustrative examples. For example, the skeletal formula of hexane (top) is shown below. The carbon atom labeled C 1 appears to have only one bond, so there must also be three hydrogens bonded to it, in order to make its total number of bonds four. The carbon atom labelled C 3 has two bonds to other carbons and is therefore bonded to two hydrogen atoms as well. A Lewis structure (middle) and ball-and-stick model (bottom) of the actual molecular structure of hexane, as determined by X-ray crystallography , are shown for comparison. It does not matter which end of the chain one starts numbering from, as long as consistency is maintained when drawing diagrams. The condensed formula or the IUPAC name will confirm the orientation. Some molecules will become familiar regardless of the orientation. All atoms that are not carbon or hydrogen are signified by their chemical symbol , for instance Cl for chlorine , O for oxygen , Na for sodium , and so forth. In the context of organic chemistry, these atoms are commonly known as heteroatoms (the prefix hetero- comes from Greek ἕτερος héteros, meaning "other"). Any hydrogen atoms bonded to heteroatoms are drawn explicitly. In ethanol , C 2 H 5 OH, for instance, the hydrogen atom bonded to oxygen is denoted by the symbol H, whereas the hydrogen atoms which are bonded to carbon atoms are not shown directly. Lines representing heteroatom-hydrogen bonds are usually omitted for clarity and compactness, so a functional group like the hydroxyl group is most often written −OH instead of −O−H. These bonds are sometimes drawn out in full in order to accentuate their presence when they participate in reaction mechanisms . Shown below for comparison are a skeletal formula (top), its Lewis structure (middle) and its ball-and-stick model (bottom) of the actual 3D structure of the ethanol molecule in the gas phase, as determined by microwave spectroscopy . There are also symbols that appear to be chemical element symbols , but represent certain very common substituents or indicate an unspecified member of a group of elements. 
These are called pseudoelement symbols or organic elements and are treated like univalent "elements" in skeletal formulae. [ 4 ] A list of common pseudoelement symbols: Sulfonate esters are often leaving groups in nucleophilic substitution reactions. See the articles on sulfonyl and sulfonate groups for further information. A protecting group or protective group is introduced into a molecule by chemical modification of a functional group to obtain chemoselectivity in a subsequent chemical reaction, facilitating multistep organic synthesis. Two atoms can be bonded by sharing more than one pair of electrons. The common bonds to carbon are single, double and triple bonds. Single bonds are most common and are represented by a single, solid line between two atoms in a skeletal formula. Double bonds are denoted by two parallel lines, and triple bonds are shown by three parallel lines. In more advanced theories of bonding, non- integer values of bond order exist. In these cases, a combination of solid and dashed lines indicate the integer and non-integer parts of the bond order, respectively. In recent years, benzene is generally depicted as a hexagon with alternating single and double bonds, much like the structure Kekulé originally proposed in 1872. As mentioned above, the alternating single and double bonds of "1,3,5-cyclohexatriene" are understood to be a drawing of one of the two equivalent canonical forms of benzene (the one explicitly shown and the one with the opposite pattern of formal single and double bonds), in which all carbon–carbon bonds are of equivalent length and have a bond order of exactly 1.5. For aryl rings in general, the two analogous canonical forms are almost always the primary contributors to the structure, but they are nonequivalent, so one structure may make a slightly greater contribution than the other, and bond orders may differ somewhat from 1.5. An alternate representation that emphasizes this delocalization uses a circle, drawn inside the hexagon of single bonds, to represent the delocalized pi orbital . This style, based on one proposed by Johannes Thiele , used to be very common in introductory organic chemistry textbooks and is still frequently used in informal settings. However, because this depiction does not keep track of electron pairs and is unable to show the precise movement of electrons, it has largely been superseded by the Kekuléan depiction in pedagogical and formal academic contexts. [ f ] Stereochemistry is conveniently denoted in skeletal formulae: [ 5 ] The relevant chemical bonds can be depicted in several ways: An early use of this notation can be traced back to Richard Kuhn who in 1932 used solid thick lines and dotted lines in a publication. The modern solid and hashed wedges were introduced in the 1940s by Giulio Natta to represent the structure of high polymers , and extensively popularised in the 1959 textbook Organic Chemistry by Donald J. Cram and George S. Hammond . [ 6 ] Skeletal formulae can depict cis and trans isomers of alkenes. Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes; it is no longer considered an acceptable style for general use but may still be required by computer software. [ 5 ] Hydrogen bonds are generally denoted by dotted or dashed lines. In other contexts, dashed lines may also represent partially formed or broken bonds in a transition state .
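The implicit-hydrogen convention illustrated with the hexane example above is mechanical enough to express in a few lines of code: for a neutral carbon with no formal charge or unpaired electrons, the number of hidden hydrogens is simply the standard valence of four minus the sum of the bond orders drawn at that atom. The function below is an illustrative sketch of that rule, not a general-purpose cheminformatics routine (real toolkits also handle charges, radicals and other elements).

```python
def implicit_hydrogens(bond_orders, valence=4):
    """Implicit hydrogen count at a skeletal-formula vertex.

    bond_orders -- bond orders of the explicitly drawn bonds at the atom
                   (1 = single, 2 = double, 3 = triple)
    valence     -- standard valence of the atom (4 for neutral carbon)
    """
    return max(0, valence - sum(bond_orders))

# Hexane, CH3-CH2-CH2-CH2-CH2-CH3 (see the worked example above):
print(implicit_hydrogens([1]))        # terminal carbon C1: one single bond  -> 3 H
print(implicit_hydrogens([1, 1]))     # interior carbon C3: two single bonds -> 2 H

# Carbonyl carbon of acetone, (CH3)2C=O: two single bonds and one double bond -> 0 H
print(implicit_hydrogens([1, 1, 2]))
```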
https://en.wikipedia.org/wiki/Skeletal_formula
Skeleton programming is a style of computer programming based on simple high-level program structures and so-called dummy code . Program skeletons resemble pseudocode , but allow parsing , compilation and testing of the code. Dummy code is inserted in a program skeleton to simulate processing and avoid compilation error messages. It may involve empty function declarations, or functions that return a correct result only for a simple test case where the expected response of the code is known. Skeleton programming facilitates a top-down design approach, where a partially functional system with complete high-level structures is designed and coded, and this system is then progressively expanded to fulfill the requirements of the project. Program skeletons are also sometimes used for high-level descriptions of algorithms . A program skeleton may also be utilized as a template that reflects syntax and structures commonly used in a wide class of problems. Skeleton programs are utilized in the template method design pattern used in object-oriented programming . In object-oriented programming , dummy code corresponds to an abstract method , a method stub or a mock object . In the Java remote method invocation (Java RMI) nomenclature, a stub communicates on the client-side with a skeleton on the server-side . [ 1 ] A class skeleton is an outline of a class that is used in software engineering . It contains a description of the class's roles, and describes the purposes of the variables and methods , but does not implement them. The class is later implemented from the skeleton. In languages that follow a polymorphic paradigm, the skeleton can also be known as an interface or an abstract class . Modern software [ 2 ] is often complicated for a host of reasons: it may be too large for a single programmer to develop alone, or it may depend on modules or parts that have to be imported separately. Programs can also be complex in their own right, for instance with multiple methods accessing a single variable at the same time or with code that generates pixels for displays. Skeleton code helps programmers develop their code with as few errors as possible at compilation time. Skeleton code is most commonly found in parallel programming , but is also applied in other situations, such as documentation for programming languages . This helps to simplify the presentation of the core functionality of a potentially confusing method. It can also be used to let a small function within a larger program operate temporarily without full functionality. This method of programming is easier than writing a complete function, as these skeleton functions do not have to include the main functionality and can instead be hardcoded for use during development. They usually involve syntactically correct code to introduce the method, as well as comments to indicate the operation of the program, although such comments are not strictly required for a piece of code to be called skeleton code. Pseudocode is most commonly found when developing the structure of a new piece of software . It is a plain-English portrayal of a particular function within a larger system, or can even be a representation of a whole program. Pseudocode is similar to skeleton programming, but differs in that pseudocode is primarily an informal method of programming. [ 3 ] Dummy code is also very similar to this: code that is used simply as a placeholder, or to signify the intended existence of a method in a class or interface.
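As a concrete illustration of the dummy code idea described above, the snippet below sketches a program skeleton in which one routine is fully written and another is a stub that returns the correct answer only for a single known test case, so the surrounding program can already be compiled, run and tested. The function names and the example domain are invented for illustration.

```python
def load_measurements(path):
    """Stub: pretends to read a data file.

    Returns a hard-coded result that is correct only for the simple test case
    'example.csv', so callers can be exercised before file parsing is written.
    """
    if path == "example.csv":
        return [1.0, 2.0, 3.0]
    raise NotImplementedError("real file parsing not implemented yet")

def mean(values):
    """Fully implemented part of the skeleton."""
    return sum(values) / len(values)

def report(path):
    # The high-level structure is complete and testable even though one piece is a stub.
    data = load_measurements(path)
    print(f"{path}: mean = {mean(data):.2f}")

if __name__ == "__main__":
    report("example.csv")   # works now; other inputs work once the stub is filled in
```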
Computer programmers are extremely dependent on pseudocode, so much so that it has a measurable impact on how they think. [ 3 ] A typical programmer is so conditioned to the idea of writing simplified code in some manner, be it by writing pseudocode or skeleton code, or even just by drawing a diagram, that this has a measurable impact on how well they can write their final implementation. This has been found across a number of applications, with different programmers working in different languages and varied programming paradigms. This method of program design is also most often done with pen and paper, further distancing the text from what will actually be implemented. Skeleton programming mimics this, but differs in that it is commonly written in an integrated development environment or text editor. This assists the further development of the program after the initial design stage. Skeleton programs also allow simple functions to operate if they are run. Skeleton programming can be implemented in a range of different programming applications. Most, if not all, programming languages have skeleton code used to assist in the definition of all built-in functions and methods. This provides a simple means for newer programmers to understand the syntax and intended implementation of the written methods. Java, an object-oriented language, focuses heavily on a structured documentation page with completely separated entries for each method of each class in Java's packages. [ 4 ] Object-oriented languages focus on a hierarchy-based structure for their implementations, rather than the simple top-down approach found in other languages. Objects store data and variables, typically allowing a more efficient program to be written. These objects have individual functions that can access internal variables, known as methods. Each method is defined in the same format, with the name of the method as well as the syntax to be used in an integrated development environment clearly visible at the top of a block. With Java's focus on scope, data types and inheritance, this syntax is extremely useful for new, if not all, programmers. This is followed by an in-depth explanation of the operation of the method, with the errors it may raise listed below. Python has a similar approach to documenting its built-in methods; however, the documentation mirrors the language's looser treatment of scope and data types. [ 5 ] This documentation gives the syntax of each method, along with a short description and an example of the typical use of the method or function. The skeleton code provided in the example gives programmers a good understanding of the function at a quick glance. Classes written by third-party developers, primarily as a part of libraries, also showcase their programming in the form of skeleton code. This helps to inform anyone who is new to the library as to how the functions and methods operate. p5.js uses this format on its documentation page to explain the intended use of certain included functions. [ 6 ] This differs from programming-language documentation, however, in using skeleton code to display parameters rather than all possible uses of the method. Natural Language Interfaces (NLIs) are most typically found in situations where programmers attempt to take an input, usually phrased colloquially (without programming-language-specific jargon), and use it to create a program or a method. An implementation of this uses a small set of skeleton code to imply the function running in the background.
[ 7 ] Other forms of NLIs use different forms of input, ranging from users speaking different languages to gesture-based input, to produce a very similar result. With programming languages being developed and written primarily in English, people who speak other languages find it hard to develop new software. NLIs have been used in some studies [ 8 ] to assist people in these situations. One such study showed classes written in Java through the use of NLIs. This removed the need to learn syntactic rules; however, it meant that the class was written using a basic set of skeleton code. Polymorphism is a concept of the object-oriented programming paradigm, whereby methods can be overridden (a method with the same name in a child class takes priority over a method written in a parent class) or overloaded (methods share a name but differ in their parameters). The definition of methods is based on a skeleton framework defined by the syntax of the language. [ 9 ] Very similarly to class implementation, skeleton code can be used to define the methods that are part of an interface. An interface is essentially a blueprint of a class, which allows strict object-oriented languages (such as Java) to use classes from different packages without the need to fully understand their internal functions. Interfaces simply define the methods that have to be present within the class, allowing anyone else to use the methods or implement the class for their personal needs. An abstract class is almost the same as a class implementation; however, depending on the language, at least one method is defined as abstract. This implies that any children of this class (any classes that extend or implement it) need to provide an implementation of that method. Abstract classes have a very similar definition style to interfaces; however, the keyword 'abstract' is typically used to identify the fact that they need to be implemented in child classes. These examples use the Java syntax; a brief sketch in this style is given below. Parallel programming is the simultaneous operation of multiple functions, most commonly used to increase efficiency. Parallel programs are typically the hardest to develop, due to their complexity and their interconnectedness with the underlying hardware. Many developers have attempted to write programs with this core functionality, [ 10 ] however this has met with varied results. Algorithmic skeleton frameworks are used in parallel programming to describe the methods in question abstractly for later development. The frameworks are not limited to a single type, and each type has a different purpose in increasing the efficiency of the developer's program. They can be categorised into three main types: data-parallel, task-parallel and resolution. [ 10 ] Data-parallel skeletons are used to develop programs that work on large data-based software, usually identifying the connections between data for later use. Data-parallel algorithms include 'maps', 'forks' and 'reduces' or 'scans'. Task-parallel operations, as their name suggests, work on tasks. Each algorithm under this heading is different due to a change in the behaviour between tasks; task-parallel algorithms include 'sequentials', 'farms', 'pipes', 'if', 'for' and 'while'. Resolution skeletons are very different from the typical skeletons found above: they use a combination of methods to solve a specified problem. The algorithm's given problem can be a "family of problems". [ 10 ] There are two main types of these skeletons, 'divide and conquer' and 'branch and bound'.
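As a brief illustration of the interface and abstract class skeletons mentioned above (the type names are hypothetical and the code is a sketch written for this text, not an excerpt from the cited sources), the following Java outline declares methods without implementing them and leaves the implementation to a child class.

// Hypothetical interface skeleton: declares what a class must provide,
// without implementing any behaviour.
interface Shape {
    double area();
    double perimeter();
}

// Hypothetical abstract class skeleton: partially implemented, with abstract
// methods that child classes are required to define.
abstract class AbstractShape implements Shape {
    protected String name;

    protected AbstractShape(String name) {
        this.name = name;
    }

    // Concrete helper available to all children.
    public String describe() {
        return name + " with area " + area();
    }

    // Abstract methods: the skeleton defers these to child classes.
    public abstract double area();
    public abstract double perimeter();
}

// A child class fills in the skeleton.
class Circle extends AbstractShape {
    private final double radius;

    Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    @Override
    public double area() {
        return Math.PI * radius * radius;
    }

    @Override
    public double perimeter() {
        return 2 * Math.PI * radius;
    }
}

public class SkeletonDemo {
    public static void main(String[] args) {
        AbstractShape shape = new Circle(2.0);
        System.out.println(shape.describe());
    }
}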
https://en.wikipedia.org/wiki/Skeleton_(computer_programming)
Skeptics in the Pub (abbreviated SITP ) is an informal social event designed to promote fellowship and social networking among skeptics, critical thinkers, freethinkers, rationalists and other like-minded individuals. It provides an opportunity for skeptics to talk, share ideas and have fun in a casual atmosphere, and discuss whatever topical issues come to mind, while promoting skepticism, science, and rationality. [ 1 ] "Skeptics in the Pub" is not a protected term; anyone can set one up. There is also no formal procedure for organising an event; organisers can fill in activities as they see fit. There are, however, some common approaches across the world in hosting such events that make them more successful. [ 2 ] The usual format of meetings includes an invited speaker who gives a talk on a specific topic, followed by a question-and-answer session. [ 3 ] Other meet-ups are informal socials, with no fixed agenda. The groups usually meet once a month at a public venue, most often a local pub. By 2012 there were more than 100 different "SitP" groups running around the world. [ 4 ] [ 5 ] The earliest and longest-running event [ 1 ] is the award-winning [ 6 ] London meeting, established by Australian philosophy professor Scott Campbell [ 7 ] [ 8 ] in 1999. [ 1 ] [ 9 ] Campbell based the idea around Philosophy in the Pub and Science in the Pub, two groups which had been running in Australia for some time. [ 10 ] The inaugural speaker was Wendy M. Grossman, the editor and founder of The Skeptic magazine, in February [ citation needed ] 1999; this first talk attracted 30 attendees. [ 1 ] The London group claims to be the "World's largest regular pub meeting," with 200 to 400 people in attendance at each meeting. [ 1 ] [ 8 ] [ 11 ] Campbell ran the London group for three years while there on a teaching sabbatical, [ 1 ] and was succeeded after his return to Australia by two sci-fi fans and skeptics, Robert Newman and Marc LaChappelle. Nick Pullar, who made a television appearance as "Convener of Skeptics in the Pub" on the BBC spoof show Shirley Ghostman, [ 12 ] [ 13 ] [ 14 ] then led the group from 2003 to 2008. As of 2011, the London group was co-convened by Sid Rodrigues, [ 1 ] who has co-organised events in several other cities around the world. [ citation needed ] This group has conducted experiments on the paranormal as part of James Randi's One Million Dollar Paranormal Challenge [ 15 ] and co-organised An Evening with James Randi & Friends. [ 16 ] [ 17 ] [ 18 ] Some of the speakers at London Skeptics in the Pub have been Simon Singh, Victor Stenger, Jon Ronson, Phil Plait, David Colquhoun, Richard J. Evans, S. Fred Singer, Ben Goldacre, David Nutt, and Mark Stevenson. [ citation needed ] The ease of use of the internet, via social networking sites and content management systems, [ 19 ] has led to more than 100 active chapters around the world, including more than 30 in the US and more than 40 in the UK. [ 4 ] [ 5 ] In 2009, D. J. Grothe described the rise of Skeptics in the Pub across cities in North America and elsewhere as a prominent example of "Skepticism 2.0". SITPs were often founded outside the realm of existing skeptical organisations (mostly centred around magazines), with some successful meetings growing to become fully-fledged membership organisations.
[ 19 ] "Skeptics in the Pub" would later serve as the template for other skeptical, rationalist, and atheist meet-ups around the globe, including The James Randi Educational Foundation 's "The Amazing Meeting", Drinking Skeptically, The Brights , and the British Humanist Association social gatherings. [ citation needed ] Since 2010 Edinburgh Skeptics in the Pub has extended the Skeptics in the Pub concept over the whole Edinburgh International Festival Fringe , under the banner Skeptics on the Fringe and from 2012 done the same at the Edinburgh International Science Festival with the title At The Fringe of Reason . The Merseyside Skeptics Society and Greater Manchester Skeptics (forming North West Skeptical Events Ltd) hosted three two-day conferences, QED, in February 2011, March 2012 and April 2013. Glasgow Skeptics has also hosted two one-day conferences, as of July 2011. [ citation needed ]
https://en.wikipedia.org/wiki/Skeptics_in_the_Pub
A skeuomorph (also spelled skiamorph, /ˈskjuːəˌmɔːrf, ˈskjuːoʊ-/ ) [ 1 ] [ 2 ] is a derivative object that retains ornamental design cues (attributes) from structures that were necessary in the original. [ 3 ] Skeuomorphs are typically used to make something new feel familiar in an effort to speed understanding and acclimation. They employ elements that, while essential to the original object, serve no pragmatic purpose in the new system, except for identification. Examples include pottery embellished with imitation rivets reminiscent of similar pots made of metal [ 4 ] and a software calendar that imitates the appearance of binding on a paper desk calendar. [ 5 ] The term skeuomorph is compounded from the Greek skeuos (σκεῦος), meaning "container or tool", and morphḗ (μορφή), meaning "shape". It has been applied to material objects since 1890. [ 6 ] With the advent of graphical computer systems in the 1980s, the term skeuomorph has been used to characterize the many "old-fashioned" icons utilized in graphical user interfaces. [ 7 ] A similar alternative definition of skeuomorph is "a physical ornament or design on an object made to resemble another material or technique". [ citation needed ] This definition is broader in scope, as it can be applied to design elements that still serve the same function as they did in a previous design. Skeuomorphs may be deliberately employed to make a new design more familiar and comfortable or may be the result of cultural influences and norms on the designer. They may also be an artistic expression on the part of the designer. [ 7 ] The usability researcher and academic Don Norman describes skeuomorphism in terms of cultural constraints: interactions with a system that are learned only through culture. Norman also popularized perceived affordances, where the user can tell what an object provides or does based on its appearance, which skeuomorphism can make easy. [ 8 ] The concept of skeuomorphism overlaps with other design concepts. Mimesis is imitation, the term coming directly from the Greek word. [ 9 ] Archetype is the original idea or model that is emulated, where the emulations can be skeuomorphic. [ 10 ] Skeuomorphism is parallel to, but different from, path dependence in technology, where an element's functional behavior is maintained even when the original reasons for its design no longer exist. Many features of wooden buildings were repeated in stone by the Ancient Greeks when they transitioned from wood to masonry construction. Decorative stone features in the Doric order of classical architecture in Greek temples such as triglyphs, mutules, guttae, and modillions are supposed to be derived from true structural and functional features of the early wooden temples. The triglyph and guttae are seen as recreating, respectively, the carved beam-ends and six wooden pegs driven in to secure the beam in place. [ 11 ] [ 12 ] [ 13 ] Historically, high-status items such as the Minoans ' elaborate and expensive silver cups were recreated for a wider market using pottery, a cheaper material. The exchange of shapes between metalwork and ceramics, often from the former to the latter, is near-constant in the history of the decorative arts. Sometimes pellets of clay are used to evoke the rivets of the metal originals. [ 14 ] There is also evidence of skeuomorphism in material transitions. Leather and pottery often carry over features from their wooden counterparts of previous generations.
Clay pottery has also been found bearing rope-shaped protrusions, pointing to craftsmen seeking familiar shapes and processes while working with new materials. [ 12 ] Another example is the tiny, non-functional handle on glass maple syrup bottles, which evokes stoneware jug handles. [ 15 ] In this context, skeuomorphs exist as traits sought in other objects, either for their social desirability or psychological comforts. [ 7 ] In the modern era, cheaper plastic items often attempt to mimic more expensive wooden and metal products, [ 16 ] though they are only skeuomorphic if new ornamentation references the original functionality, [ 17 ] such as molded screw heads in molded plastic items. Another well-known skeuomorph is the plastic Adirondack chair. [ 18 ] The lever on a mechanical slot machine, or " one-armed bandit ", is a skeuomorphic throwback feature when it appears on a modern video slot machine, since it is no longer required to set physical mechanisms and gears into motion. Articles of clothing are also given skeuomorphic treatment; for example, the faux buckles on certain strap shoes, such as Mary Janes for small children, retain the original aesthetic while velcro fastening makes the shoes easier for children to put on. Automotive design has historically been full of physical skeuomorphisms, such as the progression from early wooden-framed and wooden-bodied vehicles produced by coachworks, to those which incorporated both functional wood and steel (referred to as " woodies "), to, ultimately, simulated vinyl woodgrain cladding applied entirely for style by the 1960s. Other examples include thinly faux chrome-plated plastic components and imitation leather, gold, interior wood, pearl, or crystal jeweled elements. In The Design of Everyday Things, Don Norman notes that early automobiles were designed after horse-drawn carriages. [ 16 ] Indeed, the early automobile design Horsey Horseless even included a wooden horse head on the front to try to avoid scaring real animals. [ 19 ] In the 1970s, opera windows and vinyl roofs on many luxury sedan cars similarly imitated carriage work from the horse and buggy era. As of 2019, most electric cars feature prominent front grilles, even though there is little need for intake of air to cool an absent internal combustion engine. [ 20 ] Many computer programs have a skeuomorphic graphical user interface that emulates the aesthetics of physical objects. Examples include a digital contact list resembling a Rolodex [ 21 ] and IBM's 1998 RealThings package. [ 22 ] A more extreme example is found in some music synthesis and audio processing software packages, which closely emulate physical musical instruments and audio equipment complete with buttons and dials. [ 23 ] On a smaller scale, the icons of GUIs may remain skeuomorphic representations of physical objects, such as an image of a physical paper folder to represent computer files [ 16 ] in the desktop metaphor. This is even the case for items that are no longer directly applicable to the task they represent (such as a drawing of a floppy disk to represent "save"). Apple Inc., while under the direction of Steve Jobs, was known for its wide usage of skeuomorphic designs in various applications. This changed after Jobs's death when Scott Forstall, described as "the most vocal and high-ranking proponent of the visual design style favored by Mr. Jobs", resigned.
Apple designer Jonathan Ive took over some of Forstall's responsibilities and had "made his distaste for the visual ornamentation in Apple's mobile software known within the company". [ 24 ] With the announcement of iOS 7 at WWDC in 2013, Apple officially shifted from skeuomorphism to a more simplified design , thus beginning the so-called "death of skeuomorphism" at Apple. [ 25 ] Skeuomorphism is a key component of Frutiger Aero , an Internet aesthetic derived from mid-2000s user interface designs. [ 26 ] Other virtual skeuomorphs do not employ literal images of some physical object; but rather allude to ritual human heuristics or heuristic motifs , such as slider bars that emulate linear potentiometers [ 23 ] and visual tabs that behave like physical tabbed file folders. Another example is the swiping hand gesture for turning the "pages" or screens of a tablet display. [ 27 ] [ 28 ] Virtual skeuomorphs can also be auditory. The shutter-click sound emitted by most camera phones when taking a picture is an auditory skeuomorph. [ 29 ] Other familiar examples are the paper-crumpling sound when a document is trashed [ 30 ] and sound engines in an electric car mimicking the sound of an internal combustion engine. [ 31 ] Retrofuturism incorporates visual motifs from old predictions of the future, especially visions of electro-industrialism. [ clarification needed ] [ 32 ] [ failed verification ] Skeuomorphic design is frequently incorporated in retrowave or synthwave illustrations. Skeuomorphic design is closely linked with metamodernism . Skeuomorphic design seems to be preferred by older recipient groups, often referred to as " digital immigrants ", while " digital natives " tend to favor flat design over skeuomorphisms. However, younger people are still able to understand the signifiers that skeuomorphic design employs. A better user experience could be measured for each respective design philosophy among digital natives and immigrants. [ 33 ] An argument in favor of skeuomorphic design in digital devices is that signifiers to affordances help those familiar with the original item learn to use the digital version. Interaction paradigms for computer devices are culturally entrenched; proposals for change often spawn debate. Don Norman describes this process as a form of cultural heritage , [ 8 ] and credits skeuomorphism with easing transitions to newer technology, stating that it "gives comfort and makes learning easier" until the newer devices no longer need to resemble their predecessors. [ 16 ] Compared to flat design, skeuomorphic design seems to facilitate a fast navigation through graphic user interfaces, because icons are more easily recognized and less abstract than their minimalistic counterparts found in flat design. 
[ 33 ] The arguments against virtual skeuomorphic design are that skeuomorphic interface elements are harder to operate and take up more screen space than standard interface elements, that this breaks operating system interface design standards , that it causes an inconsistent look and feel between applications, [ 34 ] that skeuomorphic interface elements rarely incorporate numeric input or feedback for accurately setting a value, that many users may have no experience with the original device being emulated, that skeuomorphic design can increase cognitive load with visual noise that after a few uses gives little or no value to the user, that skeuomorphic design limits creativity by grounding the user experience to physical counterparts, [ 35 ] and that skeuomorphic designs often do not accurately represent underlying system state or data types due to inappropriate mimesis . For example, an analog gauge interface may be read less precisely than a digital one.
https://en.wikipedia.org/wiki/Skeuomorph
In linear algebra , a skew-Hamiltonian matrix is a specific type of matrix that corresponds to a skew-symmetric bilinear form on a symplectic vector space . Let V {\displaystyle V} be a vector space equipped with a symplectic form, denoted by Ω. A symplectic vector space must necessarily be of even dimension . A linear map A : V ↦ V {\displaystyle A:\;V\mapsto V} is defined as a skew-Hamiltonian operator with respect to the symplectic form Ω if the bilinear form defined by ( x , y ) ↦ Ω ( A ( x ) , y ) {\displaystyle (x,y)\mapsto \Omega (A(x),y)} is skew-symmetric. Given a basis e 1 , … , e 2 n {\displaystyle e_{1},\ldots ,e_{2n}} in V {\displaystyle V} , the symplectic form Ω can be expressed as ∑ i e i ∧ e n + i {\textstyle \sum _{i}e_{i}\wedge e_{n+i}} . In this context, a linear operator A {\displaystyle A} is skew-Hamiltonian with respect to Ω if and only if its corresponding matrix satisfies the condition A T J = J A {\displaystyle A^{T}J=JA} , where J {\displaystyle J} is the skew-symmetric matrix J = [ 0 I n ; − I n 0 ] {\displaystyle J={\begin{bmatrix}0&I_{n}\\-I_{n}&0\end{bmatrix}}} and I n {\displaystyle I_{n}} is the n × n {\displaystyle n\times n} identity matrix. Matrices that meet this criterion are classified as skew-Hamiltonian matrices. Notably, the square of any Hamiltonian matrix is skew-Hamiltonian. Conversely, any skew-Hamiltonian matrix can be expressed as the square of a Hamiltonian matrix. [ 1 ] [ 2 ] This article about matrices is a stub . You can help Wikipedia by expanding it .
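For illustration, the defining condition also admits a block description; the following is a standard reformulation added here as a sketch (it is not quoted from the cited references). Writing a 2n × 2n matrix W in n × n blocks, the condition W^T J = J W says that J W is skew-symmetric, which holds exactly when W has the structure below.

% Block form of a skew-Hamiltonian matrix W (sketch; standard convention).
% With the J given above: W^T J = J W  <=>  (J W)^T = -J W  <=>
W \;=\;
\begin{bmatrix}
  A & G \\
  Q & A^{\mathsf T}
\end{bmatrix},
\qquad
G^{\mathsf T} = -G,
\qquad
Q^{\mathsf T} = -Q,

where A, G and Q are arbitrary n × n blocks subject only to the stated skew-symmetry of G and Q.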
https://en.wikipedia.org/wiki/Skew-Hamiltonian_matrix
In linear algebra , a square matrix with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is the negative of the original matrix. [ 1 ] That is, the matrix A {\displaystyle A} is skew-Hermitian if it satisfies the relation A skew-Hermitian ⟺ A H = − A {\displaystyle A{\text{ skew-Hermitian}}\quad \iff \quad A^{\mathsf {H}}=-A} where A H {\displaystyle A^{\textsf {H}}} denotes the conjugate transpose of the matrix A {\displaystyle A} . In component form, this means that A skew-Hermitian ⟺ a i j = − a j i ¯ {\displaystyle A{\text{ skew-Hermitian}}\quad \iff \quad a_{ij}=-{\overline {a_{ji}}}} for all indices i {\displaystyle i} and j {\displaystyle j} , where a i j {\displaystyle a_{ij}} is the element in the i {\displaystyle i} -th row and j {\displaystyle j} -th column of A {\displaystyle A} , and the overline denotes complex conjugation . Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices , or as the matrix analogue of the purely imaginary numbers. [ 2 ] The set of all skew-Hermitian n × n {\displaystyle n\times n} matrices forms the u ( n ) {\displaystyle u(n)} Lie algebra , which corresponds to the Lie group U( n ) . The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm . Note that the adjoint of an operator depends on the scalar product considered on the n {\displaystyle n} dimensional complex or real space K n {\displaystyle K^{n}} . If ( ⋅ ∣ ⋅ ) {\displaystyle (\cdot \mid \cdot )} denotes the scalar product on K n {\displaystyle K^{n}} , then saying A {\displaystyle A} is skew-adjoint means that for all u , v ∈ K n {\displaystyle \mathbf {u} ,\mathbf {v} \in K^{n}} one has ( A u ∣ v ) = − ( u ∣ A v ) {\displaystyle (A\mathbf {u} \mid \mathbf {v} )=-(\mathbf {u} \mid A\mathbf {v} )} . Imaginary numbers can be thought of as skew-adjoint (since they are like 1 × 1 {\displaystyle 1\times 1} matrices), whereas real numbers correspond to self-adjoint operators. For example, the following matrix is skew-Hermitian A = [ − i + 2 + i − 2 + i 0 ] {\displaystyle A={\begin{bmatrix}-i&+2+i\\-2+i&0\end{bmatrix}}} because − A = [ i − 2 − i 2 − i 0 ] = [ − i ¯ − 2 + i ¯ 2 + i ¯ 0 ¯ ] = [ − i ¯ 2 + i ¯ − 2 + i ¯ 0 ¯ ] T = A H {\displaystyle -A={\begin{bmatrix}i&-2-i\\2-i&0\end{bmatrix}}={\begin{bmatrix}{\overline {-i}}&{\overline {-2+i}}\\{\overline {2+i}}&{\overline {0}}\end{bmatrix}}={\begin{bmatrix}{\overline {-i}}&{\overline {2+i}}\\{\overline {-2+i}}&{\overline {0}}\end{bmatrix}}^{\mathsf {T}}=A^{\mathsf {H}}}
https://en.wikipedia.org/wiki/Skew-Hermitian_matrix
A skew arch (also known as an oblique arch ) is a method of construction that enables an arch bridge to span an obstacle at some angle other than a right angle . This results in the faces of the arch not being perpendicular to its abutments and its plan view being a parallelogram , rather than the rectangle that is the plan view of a regular, or "square" arch . In the case of a masonry skew arch, the construction requires precise stonecutting , as the cuts do not form right angles, but once the principles were fully understood in the early 19th century, it became considerably easier and cheaper to build a skew arch of brick . The problem of building skew arch masonry bridges was addressed by a number of early civil engineers and mathematicians , including Giovanni Barbara (1726), William Chapman (1787), Benjamin Outram (1798), Peter Nicholson (1828), George Stephenson (1830), Edward Sang (1835), Charles Fox (1836), George W. Buck (1839) and William Froude (c. 1844). Skew bridges are not a recent invention, having been built on exceptional occasions since Roman times, but they were little understood and rarely used before the advent of the railway . [ 1 ] [ 2 ] An early example of the skew arch is the Arco Barbara in the Floriana Lines fortifications in Malta , which was designed by the Maltese architect and military engineer Giovanni Barbara in 1726. [ 3 ] [ 4 ] Another notable exception is an aqueduct , designed by British engineer Benjamin Outram , constructed in masonry and completed in 1798, which still carries the Ashton Canal at an angle of 45° over Store Street in Manchester . [ 5 ] Outram's design is believed to be based on work done on the Kildare Canal in Ireland in 1787, [ 5 ] [ 6 ] in which William Chapman introduced the segmental oblique arch to the design of Finlay Bridge at Naas , [ 7 ] employing an arch barrel based on a circular segment that is smaller than a semicircle and which was repeated by Thomas Storey [ 8 ] in 1830 in the bridge carrying the Haggerleases branch of the Stockton and Darlington Railway over the River Gaunless near Cockfield, County Durham with a skew angle [A] of 63° and a skew span [B] of 42 feet (13 m), resulting in a clear span [C] of 18 feet (5.5 m) and a rise [D] of 7 feet (2.1 m). [ 9 ] [ 10 ] [ 11 ] The common method they all used was to clad the timber centring (also known as falsework ) with planks, known as "laggings", laid parallel with the abutments and carefully planed and levelled to approximate closely the required curve of the intrados of the arch. The positions of the courses in the vicinity of the crown were first marked out at right angles to the faces using long wooden straight-edges, then the remaining courses were marked out in parallel. The masons then laid the stones, cutting them to shape as required. [ 5 ] Contemporary designs by rival engineers were less successful and for a time skew bridges were considered weak in comparison with the regular, or "square" arch bridge and so were avoided if at all possible, [ 12 ] the alternatives being to construct the road or canal with a double bend , so as to allow it to cross the obstacle at right angles, or to build a regular arch bridge with the extra width or span necessary to clear the obstacle "on the square". [ 13 ] An example of the latter type of construction is Denbigh Hall Bridge, built in 1837 to carry the London and Birmingham Railway across Watling Street at an acute angle of only 25°. 
[ 6 ] Now a Grade II listed structure, the bridge is still in use today, carrying the busy West Coast Main Line . It was constructed in the form of a long gallery, some 200 feet (61 m) long and 34 feet (10 m) wide, consisting of iron girders resting on walls built parallel with the road, with the girders, and consequently the faces of the bridge, perpendicular to the roadway and the railway line laid out obliquely across the top; the need to build a highly skewed bridge of 80 feet (24 m) span was therefore avoided. [ 6 ] The eminent canal engineer James Brindley never succeeded in working out a solution to the problem of constructing a strong skew arch and, as a consequence, all his overbridges were built at right angles to the waterway, with double bends in the roadway where necessary, and to this day many of them cause inconvenience to their users. [ 5 ] However, it was the coming of the railway, with its need to cross existing obstacles, such as rivers, roads, canals and other railways, in as straight a line as possible, that rekindled the civil engineer's interest in the skew arch bridge. [ 1 ] [ 2 ] The strength of a regular arch (also known as a "square" or "right" arch) comes from the fact that the mass of the structure and its superincumbent load cause lines of force that are carried by the stones into the ground and the abutments without producing any tendency for the stones to slide with respect to one another. This is due to the fact that the courses of stone are laid parallel to the abutments, which in a regular arch causes them also to lie perpendicular to its faces. For only slightly oblique bridges, where the skew angle is less than approximately 15°, it is possible to use the same construction method, laying the stones in courses parallel to the abutments. [ 5 ] [ 12 ] The result is known as a "false" skew arch, and analysis of the forces within it shows that in each corner where the face forms an acute angle with an abutment there are resultant forces that are not perpendicular to the planes of the stone courses and whose tendency is to push the stones out of the face, the only resistance to this being provided by friction and the adhesion of the mortar between the stones. [ 5 ] [ 14 ] [ 15 ] An example of such a false skew arch is the Colorado Street Bridge in Saint Paul, Minnesota. [ 16 ] [ 17 ] Before starting work on Store Street Aqueduct, Outram built a number of false skew arches, one of them with a skew angle as great as 19°, as accommodation bridges across the Huddersfield Narrow Canal . The fact that these inherently weak structures are still standing today is attributed to their light loading. [ 18 ] When considering the balance of forces within a regular arch, in which all courses of masonry that make up the barrel are parallel with its abutments and perpendicular to its faces, it is convenient to consider it as a two-dimensional object by taking a vertical section through the body of the arch and parallel with its faces, thereby ignoring any variation in loading along the length of its barrel. [ 13 ] In an oblique or skew arch the axis of the barrel is deliberately not perpendicular to the faces, the deviation from perpendicularity being known as the skew angle or the "obliquity" of the arch. [ 19 ] For this reason a skew arch needs to be thought of as a three-dimensional object, and by considering the direction of the lines of force within the barrel the optimum orientation for the courses of stonework that make up the barrel can be decided.
[ 2 ] A characteristic of the regular arch is that the courses of stones run parallel to the abutments and perpendicular to the faces. [ 20 ] In an oblique arch these two conditions cannot both be met because the faces and the abutments are deliberately not perpendicular. Since skew angles greater than about 15° are required for many applications, mathematicians and engineers such as Chapman abandoned the idea of laying the courses of stones parallel to the abutments and considered the alternative of laying the courses perpendicular to the faces of the arch, and accepting the fact that they would then no longer run parallel to the abutments. [ 5 ] Though Outram's Store Street Aqueduct was constructed with this principle in mind, it was done so empirically , with the masons cutting each voussoir stone as it was required, and it was not until 1828 that details of the technique were published in a form that was useful to other engineers and stonemasons. [ 21 ] In his book A Popular and Practical Treatise on Masonry and Stone-cutting (1828), Scottish architect, mathematician, cabinet-maker and engineer Peter Nicholson first set out in clear and understandable terms a workable method for determining the shape and position of the stones required for the construction of a strong skew arch that enabled them to be prepared in advance of the actual construction process. [ 5 ] [ 22 ] [ 23 ] Nicholson approached the problem by constructing a development of the intrados [E] of the arch from the plan and elevation drawings, effectively unrolling and flattening the surface, then drawing the courses perpendicular to the faces, [F] adding the header joints perpendicular to the courses, then finally rolling up the development diagram by projecting the detail of the intrados back onto the plan and elevation drawings, a technique also used by others who would later offer alternative solutions to the problem. [ 22 ] This method resulted in the courses of stone voussoirs making up the barrel of the skew arch following parallel helical [G] paths between the abutments, giving the view along the barrel an attractive rifled appearance. Although these courses meet the arch faces at right angles at the crown of the arch, the nearer they are to the springing line the greater their deviation from perpendicularity. [ 19 ] Thus Nicholson's method is not the perfect solution, but it is a workable one that has one great advantage over more purist alternatives, namely that since the helical courses run parallel to each other, all the voussoir stones can be cut to the same pattern, the only exceptions being the ring stones, or quoins , where the barrel meets the faces of the arch, each of which is unique but has an identical copy in the other face. [ 24 ] Nicholson never pretended to have invented the skew arch but in his later work The Guide to Railway Masonry, containing a Complete Treatise on the Oblique Arch (1839), he does claim to have invented the method for producing the templates that enabled the accurate cutting of the voussoir stones used in all skew bridges built between the years 1828 and 1836, citing testimonials from the builders of major works, such as the Croft Viaduct [ 25 ] at Croft-on-Tees near Darlington . [ 21 ] However, by 1836 a young engineer called Charles Fox had improved on Nicholson's helicoidal method and other writers were proposing alternative approaches to the problem. 
[ 26 ] In performing his calculations Nicholson considered the arch barrel to be made from one ring of stones and of negligible thickness and therefore he developed only the intrados. [ 27 ] The idea was expanded in Charles Fox's 1836 publication On the Construction of Skew Arches , in which he considered the intrados of the barrel and the extrados as separate surfaces mapped onto concentric cylinders by drawing a separate development for each. [ 2 ] This approach had two advantages. Firstly, he was able to develop a theoretical third, intermediate surface midway between the intrados and the extrados, which allowed him to align the centre of each voussoir, rather than its inner surface, along the desired line, thereby better approximating the ideal placement than Nicholson was able to achieve. [ 2 ] [ 28 ] Secondly, it enabled him to develop an arbitrary number of concentric intermediate surfaces so as to plan the courses in multi-ring skew arch barrels, allowing them for the first time to be constructed in brick, and therefore much more economically than was previously possible. [ 29 ] In order to explain how he visualised the courses of voussoirs in a stone skew arch, Fox wrote, "The principle which I have adopted is, to work the stones in the form of a spiral quadrilateral solid, wrapped round a cylinder, or, in plainer language, the principle of a square threaded screw: hence it becomes quite evident, that the transverse sections of all these spiral stones are the same throughout the whole arch. It will be obvious, that the beds of the stones should be worked into true spiral [helicoidal] [G] planes." [ 2 ] So, a stone skew arch built to Fox's plan would have its voussoirs cut with a slight twist, in order to follow the shape of a square threaded screw . While claiming a superior method, Fox openly acknowledged Nicholson's contribution [ 27 ] but in 1837 he felt the need to reply to a published letter written in support of Nicholson by fellow engineer Henry Welch, the County Bridge Surveyor for Northumberland . [ 23 ] Unfortunately the three men became involved in a paper war that, following a number of earlier altercations in which the originality of his writings was questioned, left the 71-year-old Nicholson feeling bitter and unappreciated. [ 30 ] The following year Fox, still aged only 28 and employed by Robert Stephenson as an engineer on the London and Birmingham Railway , presented his paper encapsulating these principles to the Royal Institution and from this was born the English or helicoidal method of constructing brick skew arches. [ 2 ] Using this method many thousands of skew bridges were built either entirely of brick or of brick with stone quoins by railway companies in the United Kingdom, a substantial number of which survive and are still in use today. [ 13 ] In 1839, George Watson Buck , having also worked on the London and Birmingham Railway under Stephenson before moving to the Manchester and Birmingham Railway , published a work entitled A Practical and Theoretical Essay on Oblique Bridges in which he also acknowledged Nicholson's contribution but, finding it lacking in detail, [ 31 ] applied his own original trigonometrical approach and considerable practical experience to the problem. [ 26 ] [ 32 ] This book was acknowledged as the definitive work on the subject of the helicoidal skew arch and remained a standard text book for railway engineers until the end of the 19th century. 
[ 33 ] [ 34 ] Buck's trigonometrical approach allowed every dimension of a skew arch to be calculated without recourse to taking measurements from scale drawings and it allowed him to calculate the theoretical minimum angle of obliquity to which a practical semicircular helicoidal skew bridge could be designed and safely built. [ 35 ] The "Buck Limit", as it is known, has a value of 25°40′ or, when quoted in terms of the maximum angle of skew , a value of 64°20′. [ 35 ] Buck paid particular attention to the design of bridges of extreme obliquity, addressing two potential problems he had identified. Firstly, he noted that the acutely angled quoins at the obtuse corners of the plan view were very susceptible to damage during construction, settlement or by accidental blows in subsequent use so he devised a method of chamfering the edge, removing the single acute angle and replacing it with two obtuse angles and, in his own words, "the quantity thus cut off from the acute quoin, is gradually diminished to the opposite or obtuse quoin, where the cutting vanishes; by this contrivance no angle less than a right angle is any where presented on the exterior of the work [...] the effect produced is elegant and pleasing to the eye." [ 36 ] [ 37 ] Secondly, he recommended that the extrados of the barrel of an arch of great obliquity be formed into rusticated steps so as to provide a horizontal bed for the spandrel walls in order to overcome their tendency to slide off the arch barrel. [ 38 ] The bridge carrying the London and Birmingham Railway over the London Road at Boxmoor in Hertfordshire, adjacent to what is now Hemel Hempstead station on the West Coast Main Line, is an example of a segmental arch of extreme obliquity that was designed by Buck and incorporates both of these features. Constructed in masonry, with a brick barrel, stone quoins and a 58° angle of skew, it was completed in 1837. [ 37 ] Shortly before the railway opened the bridge was the subject of an ink and wash drawing dated 12 June 1837, one of a series of works by artist John Cooke Bourne illustrating the construction of the line. [ 39 ] Buck's Essay , containing its criticism of Nicholson's work, [ 31 ] was published in July 1839, just a few months before Nicholson's Guide to Railway Masonry , causing the ongoing paper war in The Civil Engineer and Architect's Journal to continue acrimoniously as Nicholson accused Buck of stealing his ideas [ 40 ] and Buck issued a counter-claim. [ 41 ] In 1840, Buck's assistant, the young engineer William Henry Barlow , entered the fray, initially signing himself cryptically W.H.B., [ 42 ] but eventually declaring publicly his strong support for Buck. [ 43 ] Nicholson, by this time aged 75 and his health failing, had been struggling financially since the bankruptcy of one of his publishers in 1827 and he was in desperate need of the revenue he hoped to receive from sales of his Guide . [ 44 ] While both Fox and Buck had been happy to acknowledge Nicholson's work and had fought a mostly intellectual battle, Barlow's attacks became less gentlemanly and more personal [ 45 ] causing Nicholson, who later received anonymous public support from the mysterious M.Q., [ 46 ] considerable distress. [ 30 ] The helicoidal method of laying down the stone or brick courses championed by Nicholson, Fox and Buck is only an approximation to the ideal. 
Since the courses are only square to the faces of the arch at the crown and deviate more from perpendicularity the closer they are to the springing line , thereby over-correcting the deficiencies of the false skew arch and weakening the obtuse angle, the mathematical purists recommend that helicoidal construction be restricted to segmental arches and not be used in full-centred (semicircular) designs. [ 47 ] Despite this there were many full-centred skew bridges built to the helicoidal pattern and many still stand, Kielder Viaduct and Neidpath Viaduct being just two examples. The search for a technically pure orthogonal method of constructing a skew arch led to the proposal of the logarithmic method by Edward Sang , a mathematician living in Edinburgh, in his presentation in three parts to the Society for the Encouragement of the Useful Arts between 18 November 1835 and 27 January 1836, during which time he was elected vice-president of the Society, though his work was not published until 1840. [ 48 ] [ 49 ] The logarithmic method is based on the principle of laying the voussoirs in "equilibrated" [ 50 ] [H] courses in which they follow lines that run truly perpendicular to the arch faces at all elevations, while the header joints between the stones within each course are truly parallel with the arch face. [ 51 ] [ 52 ] While a helix is produced by projecting a straight line onto the surface of a cylinder, Sang's method requires that a series of logarithmic curves be projected onto a cylindrical surface, hence its name. [ 53 ] In terms of strength and stability, a skew bridge built to the logarithmic pattern has advantages over one built to the helicoidal pattern, especially so in the case of full-centred designs. [ 29 ] However, the courses are not parallel, being thinner towards the most acutely angled quoin (located where the face of the arch makes an obtuse angle with the abutment in the plan view, at S and Q in the development to the left, and at the left hand side of the photograph of the intrados on the right) and thicker towards the most obtusely angled quoin (at O and G in the development and just off the right hand side of the photograph), requiring specially cut stones, no two of which in a given course being the same, which precludes the use of mass-produced bricks. [ 19 ] [ 29 ] Nevertheless, two courses beginning at opposite ends of the barrel at the same height above the springing line are exactly alike, halving the number of templates required. [ 54 ] In 1838, Alexander James Adie, [ 55 ] son of the famous optical instrument manufacturer of the same name , [ 56 ] as resident engineer on the Bolton and Preston Railway was the first to put the theory into practice, [ 57 ] building several skew bridges to the logarithmic pattern on that route, including the semi-elliptical Grade II listed [ 58 ] bridge number 74A that carries the line over the Leeds and Liverpool Canal , which was formerly known as the southern section of the Lancaster Canal with the intention of connecting it to the northern section, though this was never achieved as the necessary aqueduct over the River Ribble proved too expensive to build. 
[ 26 ] [ 59 ] [ 60 ] He presented a paper on the subject to the Institution of Civil Engineers the following year and in 1841, academic William Whewell of Trinity College, Cambridge published his book The Mechanics of Engineering in which he expounded the virtues of building skew bridges with equilibrated courses, but due to the poor complexity to benefit ratio, there have been few other adopters. [ 26 ] [ 50 ] The corne de vache or "cow's horn" method is another way of laying courses such that they meet the face of the arch orthogonally at all elevations. [ 61 ] Unlike the helicoidal and logarithmic methods, in which the intrados of the arch barrel is cylindrical, [I] the corne de vache method results in a warped hyperbolic paraboloid surface that dips in the middle, rather like a saddle. [ 62 ] Despite being known as the French method of skew arch building, it was actually introduced by English engineer William Froude whilst working under Isambard Kingdom Brunel on the Bristol and Exeter Railway , which opened in 1844. [ 63 ] Although no details of Froude's work in this area survive and despite being better remembered for his work on hydrodynamics , he is known to have built at least two overbridges in red brick with stone quoins using this principle on the line just north of Exeter , at Cowley Bridge Junction where the A377 Exeter–Barnstaple road crosses at an oblique angle and, about 4 miles (6.4 km) to the northeast, at Rewe , on the A396 , both of which survive and are in daily use. [ 64 ] The brickwork is considerably more complex than in a helicoidal design and, in order to ensure that the courses of bricks meet the faces of the arch at right angles, many had to be cut to produce tapers. [ 65 ] The corne de vache approach tends to result in a structure that is almost as strong as one built to the logarithmic pattern and considerably stronger than one built to the helicoidal pattern but, again, the extra complexity has meant that the method has not seen widespread adoption, especially since the simpler helicoidal structure can be built much stronger if a segmental design is chosen, rather than a full-centred one. [ 29 ] The ribbed skew arch is a form of the false skew arch in which several narrow regular arches or ribs, offset laterally with respect to one another, are used to approximate a true skew arch. [ 66 ] Motivated by the lack of skilled stonemasons in the 18th century United States, the design was first proposed in 1802 for a crossing of the Schuylkill River in Philadelphia by British-born American architect Benjamin Henry Latrobe [ 67 ] and later championed by French civil engineer A. Boucher. [ 68 ] Because the series of arch ribs are all regular arches this method of construction has the advantage of being less demanding of unskilled artisans but it has received considerable criticism as being weak, susceptible to frost damage, ugly and wasteful of materials. [ 69 ] Although Latrobe's bridge was never built as proposed, his method of construction was later to be used extensively by the Philadelphia and Reading Railroad throughout the Philadelphia area, including an ambitious viaduct designed by Gustavus A. Nicolls with six skewed spans of 70 feet (21 m) across the river and six more land-based skew arches, which was built close to the site of Latrobe's proposed bridge and completed in 1856. [ 70 ] Thanks to the reinforcing of the spandrel walls in 1935, the bridge continues to carry rail traffic to this day. 
[ 67 ] The Midland Railway in the United Kingdom suffered from no such shortage of skilled workers but as part of its southern extension towards its London terminus at St Pancras , it was faced with the need to cross Southdown Road in Harpenden at an extremely acute angle of approximately 25°, [ 71 ] a figure more acute than the theoretical limit of 25°40′ proposed by Buck, [ 35 ] and requiring a bridge with a skew angle of 65°, a situation not unlike that faced by the London and Birmingham railway 30 years earlier at Denbigh Hall. This time the chosen solution was to build Southdown Road bridge as a ribbed skew arch, which opened for traffic in 1868 and was successfully widened in 1893 when the line was converted to quadruple track. [ 72 ] Despite the aforementioned criticisms of the design, the bridge is still standing and in daily use by express and commuter trains. A smaller and less extremely skewed example is Hereford Road bridge in Ledbury , Herefordshire, which was built in 1881 to carry the Ledbury and Gloucester Railway at an angle of approximately 45° across the Hereford Road, now a section of the A438 . [ 73 ] The railway having closed in 1959, [ 74 ] it is now used as part of a footpath. [ 75 ] Notice that the two bridges in the photographs skew in opposite directions. Southdown Road bridge is said to have a left-hand skew due to the near face being offset to the left of the far face, while Hereford Road bridge has a right-hand skew. [ 76 ] Early skew arch bridges were painstakingly built from masonry blocks, each individually and expensively cut to its own unique shape, with no two edges either parallel or perpendicular. [ 77 ] A fine example of such construction is the famous Rainhill Skew Bridge , which was designed with a skew span of 54 feet (16 m), in order to give a clear span across the railway of 30 feet (9.1 m) at a skew angle of 56° by George Stephenson and built as a full-sized wooden model in an adjacent field before being completed in 1830. [ 6 ] [ 77 ] [ 78 ] A contemporary skew bridge built to carry the Haggerleazes branch of the Stockton and Darlington Railway over the River Gaunless in County Durham proved too difficult for the original contractors, Thomas Worth and John Batie, who, after piling the foundations for the abutments and laying the lower courses of masonry, abandoned the work. The contract was re-let to James Wilson of Pontefract on 28 May 1830 for £420, an increase of £93 over the original tender. As the principles were not completely understood, the work continued to prove difficult and its imminent collapse was solemnly predicted right up until the time, a few days before the opening of the branch, the centring was removed and the crown of the arch settled by less than half an inch (13 mm). [ 11 ]
https://en.wikipedia.org/wiki/Skew_arch
The skew binary number system is a non-standard positional numeral system in which the n th digit contributes a value of 2 n + 1 − 1 {\displaystyle 2^{n+1}-1} times the digit (digits are indexed from 0) instead of 2 n {\displaystyle 2^{n}} times as they do in binary . Each digit has a value of 0, 1, or 2. A number can have many skew binary representations. For example, the decimal number 15 can be written as 1000, 201 and 122. Each number can be written uniquely in skew binary canonical form, where there is at most one instance of the digit 2, which must be the least significant nonzero digit. In this case 15 is written canonically as 1000. The canonical skew binary representations of the numbers from 0 to 15 are (decimal → skew binary): [ 1 ] 0 → 0, 1 → 1, 2 → 2, 3 → 10, 4 → 11, 5 → 12, 6 → 20, 7 → 100, 8 → 101, 9 → 102, 10 → 110, 11 → 111, 12 → 112, 13 → 120, 14 → 200, 15 → 1000. The advantage of skew binary is that each increment operation can be done with at most one carry operation. This exploits the fact that 2 ( 2 n + 1 − 1 ) + 1 = 2 n + 2 − 1 {\displaystyle 2(2^{n+1}-1)+1=2^{n+2}-1} . Incrementing a skew binary number is done by setting the only two to a zero and incrementing the next digit from zero to one or one to two. When numbers are represented using a form of run-length encoding as linked lists of the non-zero digits, incrementation and decrementation can be performed in constant time. Other arithmetic operations may be performed by switching between the skew binary representation and the binary representation. [ 2 ] To convert from decimal to a skew binary number, one can use the following formula: [ 3 ] Base case : a ( 0 ) = 0 {\displaystyle a(0)=0} . Induction case : a ( 2 n − 1 + i ) = a ( i ) + 10 n − 1 {\displaystyle a(2^{n}-1+i)=a(i)+10^{n-1}} . Boundaries : 0 ≤ i ≤ 2 n − 1 , n ≥ 1 {\displaystyle 0\leq i\leq 2^{n}-1,n\geq 1} . To convert from a skew binary number to decimal, one can use the definition of a skew binary number: S = ∑ i = 0 N b i ( 2 i + 1 − 1 ) {\displaystyle S=\sum _{i=0}^{N}b_{i}(2^{i+1}-1)} , where b i ∈ { 0 , 1 , 2 } {\displaystyle b_{i}\in \{0,1,2\}} and, in canonical form, only the least significant nonzero digit b l s b {\displaystyle b_{lsb}} may be 2. Given a skew binary number, its value can be computed by a loop, computing the successive values of 2 n + 1 − 1 {\displaystyle 2^{n+1}-1} and adding it once or twice for each n {\displaystyle n} such that the n {\displaystyle n} th digit is 1 or 2 respectively. A more efficient method is now given, with only bit representation and one subtraction. The skew binary number of the form b 0 … b n {\displaystyle b_{0}\dots b_{n}} without 2 and with m {\displaystyle m} 1s is equal to the binary number 0 b 0 … b n {\displaystyle 0b_{0}\dots b_{n}} minus m {\displaystyle m} . Let d c {\displaystyle d^{c}} represent the digit d {\displaystyle d} repeated c {\displaystyle c} times. The skew binary number of the form 0 c 0 21 c 1 0 b 0 … b n {\displaystyle 0^{c_{0}}21^{c_{1}}0b_{0}\dots b_{n}} with m {\displaystyle m} 1s is equal to the binary number 0 c 0 + c 1 + 2 1 b 0 … b n {\displaystyle 0^{c_{0}+c_{1}+2}1b_{0}\dots b_{n}} minus m {\displaystyle m} . Similarly to the preceding section, the binary number b {\displaystyle b} of the form b 0 … b n {\displaystyle b_{0}\dots b_{n}} with m {\displaystyle m} 1s equals the skew binary number b 1 … b n {\displaystyle b_{1}\dots b_{n}} plus m {\displaystyle m} . Note that since addition is not defined, adding m {\displaystyle m} corresponds to incrementing the number m {\displaystyle m} times. However, m {\displaystyle m} is bounded by the logarithm of b {\displaystyle b} and incrementation takes constant time. 
Hence transforming a binary number into a skew binary number runs in time linear in the length of the number. The skew binary numbers were developed by Eugene Myers in 1983 for a purely functional data structure that allows the operations of the stack abstract data type and also allows efficient indexing into the sequence of stack elements. [ 4 ] They were later applied to skew binomial heaps , a variant of binomial heaps that support constant-time worst-case insertion operations. [ 5 ]
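As an illustration of the constant-carry increment described above, here is a small sketch in Java (written for this text, not drawn from the cited references); it stores the canonical digits least significant first and either clears the single 2 and bumps the next digit, or, when no 2 is present, bumps the least significant digit.

import java.util.ArrayList;
import java.util.List;

// Canonical skew binary counter with an increment using at most one carry.
public class SkewBinaryCounter {
    private final List<Integer> digits = new ArrayList<>(); // each 0, 1 or 2; index 0 is least significant

    // If some digit is 2 (necessarily the lowest nonzero one), clear it and
    // increment the next digit; otherwise increment the least significant digit.
    public void increment() {
        int two = digits.indexOf(2);
        if (two >= 0) {
            digits.set(two, 0);
            if (two + 1 == digits.size()) {
                digits.add(1);
            } else {
                digits.set(two + 1, digits.get(two + 1) + 1);
            }
        } else if (digits.isEmpty()) {
            digits.add(1);
        } else {
            digits.set(0, digits.get(0) + 1);
        }
    }

    // Value from the definition: sum of digit * (2^(i+1) - 1).
    public long toDecimal() {
        long value = 0;
        for (int i = 0; i < digits.size(); i++) {
            value += digits.get(i) * ((1L << (i + 1)) - 1);
        }
        return value;
    }

    // Most significant digit first, e.g. "120" for 13.
    @Override
    public String toString() {
        if (digits.isEmpty()) return "0";
        StringBuilder sb = new StringBuilder();
        for (int i = digits.size() - 1; i >= 0; i--) sb.append(digits.get(i));
        return sb.toString();
    }

    public static void main(String[] args) {
        SkewBinaryCounter c = new SkewBinaryCounter();
        for (int n = 1; n <= 15; n++) {
            c.increment();
            System.out.println(n + " -> " + c + " (value " + c.toDecimal() + ")");
        }
    }
}

Running the sketch reproduces the canonical representations listed earlier, ending with 15 → 1000.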
https://en.wikipedia.org/wiki/Skew_binary_number_system
A system of skew coordinates , sometimes called oblique coordinates , is a curvilinear coordinate system where the coordinate surfaces are not orthogonal , [ 1 ] as in orthogonal coordinates . Skew coordinates tend to be more complicated to work with compared to orthogonal coordinates since the metric tensor will have nonzero off-diagonal components, preventing many simplifications in formulas for tensor algebra and tensor calculus . The nonzero off-diagonal components of the metric tensor are a direct result of the non-orthogonality of the basis vectors of the coordinates, since by definition: [ 2 ] where g i j {\displaystyle g_{ij}} is the metric tensor and e i {\displaystyle \mathbf {e} _{i}} the (covariant) basis vectors . These coordinate systems can be useful if the geometry of a problem fits well into a skewed system. For example, solving Laplace's equation in a parallelogram will be easiest when done in appropriately skewed coordinates. The simplest 3D case of a skew coordinate system is a Cartesian one where one of the axes (say the x axis) has been bent by some angle ϕ {\displaystyle \phi } , staying orthogonal to one of the remaining two axes. For this example, the x axis of a Cartesian coordinate has been bent toward the z axis by ϕ {\displaystyle \phi } , remaining orthogonal to the y axis. Let e 1 {\displaystyle \mathbf {e} _{1}} , e 2 {\displaystyle \mathbf {e} _{2}} , and e 3 {\displaystyle \mathbf {e} _{3}} respectively be unit vectors along the x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} axes. These represent the covariant basis; computing their dot products gives the metric tensor : where and which are quantities that will be useful later on. The contravariant basis is given by [ 2 ] The contravariant basis isn't a very convenient one to use, however it shows up in definitions so must be considered. We'll favor writing quantities with respect to the covariant basis. Since the basis vectors are all constant, vector addition and subtraction will simply be familiar component-wise adding and subtraction. Now, let where the sums indicate summation over all values of the index (in this case, i = 1, 2, 3). The contravariant and covariant components of these vectors may be related by so that, explicitly, The dot product in terms of contravariant components is then and in terms of covariant components By definition, [ 3 ] the gradient of a scalar function f is where q i {\displaystyle q_{i}} are the coordinates x , y , z indexed. Recognizing this as a vector written in terms of the contravariant basis, it may be rewritten: The divergence of a vector a {\displaystyle \mathbf {a} } is and of a tensor A {\displaystyle \mathbf {A} } The Laplacian of f is and, since the covariant basis is normal and constant, the vector Laplacian is the same as the componentwise Laplacian of a vector written in terms of the covariant basis. While both the dot product and gradient are somewhat messy in that they have extra terms (compared to a Cartesian system) the advection operator which combines a dot product with a gradient turns out very simple: which may be applied to both scalar functions and vector functions, componentwise when expressed in the covariant basis. Finally, the curl of a vector is
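As a concrete numeric illustration of the bent-axis example described above, the following NumPy sketch constructs the covariant basis, the metric tensor g_ij = e_i · e_j, and the contravariant basis, and verifies the metric form of the dot product. The explicit component values, and the use of an auxiliary orthonormal background frame to build them, are assumptions made for the illustration rather than formulas taken from the text:

```python
# Numeric sketch of the bent-axis example (NumPy).  The explicit basis
# components below are an assumption consistent with "x axis bent toward the
# z axis by phi, staying orthogonal to y"; they are not given in the text.
import numpy as np

phi = np.deg2rad(30.0)

# Covariant basis vectors e_1, e_2, e_3, written in an orthonormal background frame.
e = np.array([
    [np.cos(phi), 0.0, np.sin(phi)],   # e_1: unit vector along the bent x axis
    [0.0,         1.0, 0.0        ],   # e_2: the y axis, unchanged
    [0.0,         0.0, 1.0        ],   # e_3: the z axis, unchanged
])

# Metric tensor g_ij = e_i . e_j; the off-diagonal sin(phi) entries reflect
# the non-orthogonality of e_1 and e_3.
g = e @ e.T
g_inv = np.linalg.inv(g)               # contravariant metric g^{ij}

# Contravariant basis e^i = g^{ij} e_j, so that e^i . e_j = delta^i_j.
e_contra = g_inv @ e

# Dot product from contravariant components: a . b = g_ij a^i b^j.
a_contra = np.array([1.0, 2.0, 0.5])
b_contra = np.array([0.3, -1.0, 2.0])
dot_via_metric = a_contra @ g @ b_contra

# Cross-checks against plain Cartesian arithmetic in the background frame.
a_cart = a_contra @ e                  # a = a^i e_i
b_cart = b_contra @ e
assert np.isclose(dot_via_metric, a_cart @ b_cart)
assert np.allclose(e_contra @ e.T, np.eye(3))
print(np.round(g, 3))                  # off-diagonal entries equal sin(phi) = 0.5
```

With phi = 30° the metric has off-diagonal entries sin(phi) = 0.5 between the first and third coordinates, which is exactly the non-orthogonality that prevents the usual simplifications in the tensor formulas discussed above.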
https://en.wikipedia.org/wiki/Skew_coordinates
In three-dimensional geometry , skew lines are two lines that do not intersect and are not parallel . A simple example of a pair of skew lines is the pair of lines through opposite edges of a regular tetrahedron . Two lines that both lie in the same plane must either cross each other or be parallel, so skew lines can exist only in three or more dimensions . Two lines are skew if and only if they are not coplanar . If four points are chosen at random uniformly within a unit cube , they will almost surely define a pair of skew lines. After the first three points have been chosen, the fourth point will define a non-skew line if, and only if, it is coplanar with the first three points. However, the plane through the first three points forms a subset of measure zero of the cube, and the probability that the fourth point lies on this plane is zero. If it does not, the lines defined by the points will be skew. Similarly, in three-dimensional space a very small perturbation of any two parallel or intersecting lines will almost certainly turn them into skew lines. Therefore, any four points in general position always form skew lines. In this sense, skew lines are the "usual" case, and parallel or intersecting lines are special cases. Expressing the two lines as vectors: The cross product of d 1 {\displaystyle \mathbf {d_{1}} } and d 2 {\displaystyle \mathbf {d_{2}} } is perpendicular to the lines. The plane formed by the translations of Line 2 along n {\displaystyle \mathbf {n} } contains the point p 2 {\displaystyle \mathbf {p_{2}} } and is perpendicular to n 2 = d 2 × n {\displaystyle \mathbf {n_{2}} =\mathbf {d_{2}} \times \mathbf {n} } . Therefore, the intersecting point of Line 1 with the above-mentioned plane, which is also the point on Line 1 that is nearest to Line 2 is given by Similarly, the point on Line 2 nearest to Line 1 is given by (where n 1 = d 1 × n {\displaystyle \mathbf {n_{1}} =\mathbf {d_{1}} \times \mathbf {n} } ) The nearest points c 1 {\displaystyle \mathbf {c_{1}} } and c 2 {\displaystyle \mathbf {c_{2}} } form the shortest line segment joining Line 1 and Line 2: The distance between nearest points in two skew lines may also be expressed using other vectors: Here the 1×3 vector x represents an arbitrary point on the line through particular point a with b representing the direction of the line and with the value of the real number λ {\displaystyle \lambda } determining where the point is on the line, and similarly for arbitrary point y on the line through particular point c in direction d . The cross product of b and d is perpendicular to the lines, as is the unit vector The perpendicular distance between the lines is then [ 1 ] (if | b × d | is zero the lines are parallel and this method cannot be used). A configuration of skew lines is a set of lines in which all pairs are skew. Two configurations are said to be isotopic if it is possible to continuously transform one configuration into the other, maintaining throughout the transformation the invariant that all pairs of lines remain skew. Any two configurations of two lines are easily seen to be isotopic, and configurations of the same number of lines in dimensions higher than three are always isotopic, but there exist multiple non-isotopic configurations of three or more lines in three dimensions. 
[ 2 ] The number of nonisotopic configurations of n lines in R 3 , starting at n = 1, is An affine transformation of this ruled surface produces a surface which in general has an elliptical cross-section rather than the circular cross-section produced by rotating L around L'; such surfaces are also called hyperboloids of one sheet, and again are ruled by two families of mutually skew lines. A third type of ruled surface is the hyperbolic paraboloid . Like the hyperboloid of one sheet, the hyperbolic paraboloid has two families of skew lines; in each of the two families the lines are parallel to a common plane although not to each other. Any three skew lines in R 3 lie on exactly one ruled surface of one of these types. [ 3 ] If three skew lines all meet three other skew lines, any transversal of the first set of three meets any transversal of the second set. [ 4 ] [ 5 ] In higher-dimensional space, a flat of dimension k is referred to as a k -flat. Thus, a line may also be called a 1-flat. Generalizing the concept of skew lines to d -dimensional space, an i -flat and a j -flat may be skew if i + j < d . [ 6 ] As with lines in 3-space, skew flats are those that are neither parallel nor intersect. In affine d -space , two flats of any dimension may be parallel. However, in projective space , parallelism does not exist; two flats must either intersect or be skew. Let I be the set of points on an i -flat, and let J be the set of points on a j -flat. In projective d -space, if i + j ≥ d then the intersection of I and J must contain a ( i + j − d )-flat. (A 0 -flat is a point.) In either geometry, if I and J intersect at a k -flat, for k ≥ 0 , then the points of I ∪ J determine a ( i + j − k )-flat.
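The nearest-point construction described earlier, taking n = d1 × d2 along the common perpendicular and intersecting each line with the plane that contains the other line and n, can be checked numerically. The following Python (NumPy) sketch is a direct implementation of that construction; the function name and the worked example (two opposite edges of a unit cube) are illustrative choices, not part of the source:

```python
# NumPy sketch of the nearest-point construction for two skew lines.
# The function name and the worked example are illustrative choices.
import numpy as np

def nearest_points_on_skew_lines(p1, d1, p2, d2):
    """Closest points c1 on the line p1 + t*d1 and c2 on the line p2 + s*d2."""
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    n = np.cross(d1, d2)               # direction of the common perpendicular
    if np.allclose(n, 0.0):
        raise ValueError("lines are parallel; this construction does not apply")
    n1 = np.cross(d1, n)               # normal of the plane containing line 1 and n
    n2 = np.cross(d2, n)               # normal of the plane containing line 2 and n
    c1 = p1 + d1 * (np.dot(p2 - p1, n2) / np.dot(d1, n2))
    c2 = p2 + d2 * (np.dot(p1 - p2, n1) / np.dot(d2, n1))
    return c1, c2

if __name__ == "__main__":
    # Two lines along opposite edges of a unit cube form a standard skew pair.
    p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    p2, d2 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])
    c1, c2 = nearest_points_on_skew_lines(p1, d1, p2, d2)
    dist = np.linalg.norm(c1 - c2)

    # The joining segment is perpendicular to both lines ...
    assert np.isclose(np.dot(c1 - c2, d1), 0.0)
    assert np.isclose(np.dot(c1 - c2, d2), 0.0)
    # ... and its length matches the projection form of the distance.
    n = np.cross(d1, d2)
    assert np.isclose(dist, abs(np.dot(p2 - p1, n)) / np.linalg.norm(n))
    print(c1, c2, dist)                # distance 1 for these two cube edges
```

The assertions verify the defining properties of the nearest points: the segment joining them is perpendicular to both lines, and its length agrees with the projection form of the distance, |(p2 − p1) · n| / |n|, described above.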
https://en.wikipedia.org/wiki/Skew_lines
In number theory , Skewes's number is the smallest natural number x {\displaystyle x} for which the prime-counting function π ( x ) {\displaystyle \pi (x)} exceeds the logarithmic integral function li ⁡ ( x ) . {\displaystyle \operatorname {li} (x).} It is named for the South African mathematician Stanley Skewes who first computed an upper bound on its value. The exact value of Skewes's number is still not known, but it is known that there is a crossing between π ( x ) < li ⁡ ( x ) {\displaystyle \pi (x)<\operatorname {li} (x)} and π ( x ) > li ⁡ ( x ) {\displaystyle \pi (x)>\operatorname {li} (x)} near e 727.95133 < 1.397 × 10 316 . {\displaystyle e^{727.95133}<1.397\times 10^{316}.} It is not known whether this is the smallest crossing. The name is sometimes also applied to either of the large number bounds which Skewes found. Although nobody has ever found a value of x {\displaystyle x} for which π ( x ) > li ⁡ ( x ) , {\displaystyle \pi (x)>\operatorname {li} (x),} Skewes's research supervisor J.E. Littlewood had proved in Littlewood (1914) that there is such a number (and so, a first such number); and indeed found that the sign of the difference π ( x ) − li ⁡ ( x ) {\displaystyle \pi (x)-\operatorname {li} (x)} changes infinitely many times. Littlewood's proof did not, however, exhibit a concrete such number x {\displaystyle x} , nor did it even give any bounds on the value. Skewes's task was to make Littlewood's existence proof effective : exhibit some concrete upper bound for the first sign change. According to Georg Kreisel , this was not considered obvious even in principle at the time. [ 1 ] Skewes (1933) proved that, assuming that the Riemann hypothesis is true, there exists a number x {\displaystyle x} violating π ( x ) < li ⁡ ( x ) , {\displaystyle \pi (x)<\operatorname {li} (x),} below Without assuming the Riemann hypothesis, Skewes (1955) later proved that there exists a value of x {\displaystyle x} below These upper bounds have since been reduced considerably by using large-scale computer calculations of zeros of the Riemann zeta function . The first estimate for the actual value of a crossover point was given by Lehman (1966) , who showed that somewhere between 1.53 × 10 1165 {\displaystyle 1.53\times 10^{1165}} and 1.65 × 10 1165 {\displaystyle 1.65\times 10^{1165}} there are more than 10 500 {\displaystyle 10^{500}} consecutive integers x {\displaystyle x} with π ( x ) > li ⁡ ( x ) {\displaystyle \pi (x)>\operatorname {li} (x)} . Without assuming the Riemann hypothesis, H. J. J. te Riele ( 1987 ) proved an upper bound of 7 × 10 370 {\displaystyle 7\times 10^{370}} . A better estimate was 1.39822 × 10 316 {\displaystyle 1.39822\times 10^{316}} discovered by Bays & Hudson (2000) , who showed there are at least 10 153 {\displaystyle 10^{153}} consecutive integers somewhere near this value where π ( x ) > li ⁡ ( x ) {\displaystyle \pi (x)>\operatorname {li} (x)} . Bays and Hudson found a few much smaller values of x {\displaystyle x} where π ( x ) {\displaystyle \pi (x)} gets close to li ⁡ ( x ) {\displaystyle \operatorname {li} (x)} ; the possibility that there are crossover points near these values does not seem to have been definitely ruled out yet, though computer calculations suggest they are unlikely to exist. Chao & Plymen (2010) gave a small improvement and correction to the result of Bays and Hudson. Saouter & Demichel (2010) found a smaller interval for a crossing, which was slightly improved by Zegowitz (2010) . 
The same source shows that there exists a number x {\displaystyle x} violating π ( x ) < li ⁡ ( x ) , {\displaystyle \pi (x)<\operatorname {li} (x),} below e 727.9513468 < 1.39718 × 10 316 {\displaystyle e^{727.9513468}<1.39718\times 10^{316}} . This can be reduced to e 727.9513386 < 1.39717 × 10 316 {\displaystyle e^{727.9513386}<1.39717\times 10^{316}} assuming the Riemann hypothesis. Stoll & Demichel (2011) gave 1.39716 × 10 316 {\displaystyle 1.39716\times 10^{316}} . Rigorously, Rosser & Schoenfeld (1962) proved that there are no crossover points below x = 10 8 {\displaystyle x=10^{8}} , improved by Brent (1975) to 8 × 10 10 {\displaystyle 8\times 10^{10}} , by Kotnik (2008) to 10 14 {\displaystyle 10^{14}} , by Platt & Trudgian (2014) to 1.39 × 10 17 {\displaystyle 1.39\times 10^{17}} , and by Büthe (2015) to 10 19 {\displaystyle 10^{19}} . There is no explicit value x {\displaystyle x} known for certain to have the property π ( x ) > li ⁡ ( x ) , {\displaystyle \pi (x)>\operatorname {li} (x),} though computer calculations suggest some explicit numbers that are quite likely to satisfy this. Even though the natural density of the positive integers for which π ( x ) > li ⁡ ( x ) {\displaystyle \pi (x)>\operatorname {li} (x)} does not exist, Wintner (1941) showed that the logarithmic density of these positive integers does exist and is positive. Rubinstein & Sarnak (1994) showed that this proportion is about 2.6 × 10 −7 , which is surprisingly large given how far one has to go to find the first example. Riemann gave an explicit formula for π ( x ) {\displaystyle \pi (x)} , whose leading terms are (ignoring some subtle convergence questions) where the sum is over all ρ {\displaystyle \rho } in the set of non-trivial zeros of the Riemann zeta function . The largest error term in the approximation π ( x ) ≈ li ⁡ ( x ) {\displaystyle \pi (x)\approx \operatorname {li} (x)} (if the Riemann hypothesis is true) is negative 1 2 li ⁡ ( x ) {\displaystyle {\tfrac {1}{2}}\operatorname {li} ({\sqrt {x\,}})} , showing that li ⁡ ( x ) {\displaystyle \operatorname {li} (x)} is usually larger than π ( x ) {\displaystyle \pi (x)} . The other terms above are somewhat smaller, and moreover tend to have different, seemingly random complex arguments , so mostly cancel out. Occasionally however, several of the larger ones might happen to have roughly the same complex argument, in which case they will reinforce each other instead of cancelling and will overwhelm the term 1 2 li ⁡ ( x ) {\displaystyle {\tfrac {1}{2}}\operatorname {li} ({\sqrt {x\,}})} . The reason why the Skewes number is so large is that these smaller terms are quite a lot smaller than the leading error term, mainly because the first complex zero of the zeta function has quite a large imaginary part , so a large number (several hundred) of them need to have roughly the same argument in order to overwhelm the dominant term. The chance of N {\displaystyle N} random complex numbers having roughly the same argument is about 1 in 2 N {\displaystyle 2^{N}} . This explains why π ( x ) {\displaystyle \pi (x)} is sometimes larger than li ⁡ ( x ) , {\displaystyle \operatorname {li} (x),} and also why it is rare for this to happen. It also shows why finding places where this happens depends on large scale calculations of millions of high precision zeros of the Riemann zeta function. The argument above is not a proof, as it assumes the zeros of the Riemann zeta function are random, which is not true. 
Roughly speaking, Littlewood's proof uses Dirichlet's approximation theorem to show that sometimes many terms have about the same argument. In the event that the Riemann hypothesis is false, the argument is much simpler, essentially because the terms $\operatorname{li}(x^{\rho})$ for zeros violating the Riemann hypothesis (with real part greater than 1/2) are eventually larger than $\operatorname{li}(x^{1/2})$. The reason for the term $\tfrac{1}{2}\operatorname{li}(x^{1/2})$ is that, roughly speaking, $\operatorname{li}(x)$ actually counts powers of primes , rather than the primes themselves, with $p^{n}$ weighted by $\tfrac{1}{n}$. The term $\tfrac{1}{2}\operatorname{li}(x^{1/2})$ is roughly analogous to a second-order correction accounting for squares of primes. An analogous definition of Skewes's number exists for prime k -tuples ( Tóth (2019) ). Let $P=(p,p+i_{1},p+i_{2},\dots ,p+i_{k})$ denote a prime ( k + 1)-tuple, $\pi_{P}(x)$ the number of primes $p$ below $x$ such that $p,p+i_{1},p+i_{2},\dots ,p+i_{k}$ are all prime, let $\operatorname{li}_{P}(x)=\int_{2}^{x}\frac{dt}{(\ln t)^{k+1}}$ and let $C_{P}$ denote its Hardy–Littlewood constant (see First Hardy–Littlewood conjecture ). Then the first prime $p$ that violates the Hardy–Littlewood inequality for the ( k + 1)-tuple $P$, i.e., the first prime $p$ such that $\pi_{P}(p)>C_{P}\operatorname{li}_{P}(p)$ (if such a prime exists), is the Skewes number for $P$. The table below shows the currently known Skewes numbers for prime k -tuples. The Skewes number (if it exists) for sexy primes $(p,p+6)$ is still unknown. It is also unknown whether all admissible k -tuples have a corresponding Skewes number.
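The sign behaviour discussed above, with li(x) exceeding π(x) everywhere in the directly computable range, is easy to observe numerically. The following Python sketch uses SymPy's prime-counting function and mpmath's logarithmic integral; it is purely illustrative and cannot, of course, come anywhere near the first crossover:

```python
# Illustrative comparison of pi(x) and li(x) (SymPy + mpmath).  This cannot,
# of course, approach the first crossover, known only to lie below ~1.4e316.
from sympy import primepi
from mpmath import li

for k in range(1, 8):
    x = 10 ** k
    pi_x = int(primepi(x))             # prime-counting function pi(x)
    li_x = float(li(x))                # logarithmic integral li(x)
    print(f"x = 10^{k}:  pi(x) = {pi_x:>9,d}   li(x) - pi(x) = {li_x - pi_x:8.1f}")
```

At x = 10^7 the gap li(x) − π(x) is still several hundred, and π(x) < li(x) is known rigorously for all x up to 10^19 (Büthe (2015), as cited above); the first sign change is only known to occur somewhere below about 1.4 × 10^316.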
https://en.wikipedia.org/wiki/Skewes's_number
The Ski complex is a multi-protein complex involved in the 3' end degradation of messenger RNAs in yeast. [ 1 ] The complex consists of three main proteins : the RNA helicase Ski2 and the proteins Ski3 and Ski8 , which assemble into a tetramer with a core of about 370 kDa, framed by the N-terminal and C-terminal arms of Ski3 . The helicase core of Ski2 is positioned by both the C-terminal arm of Ski3 and the two Ski8 subunits, and helicase activity is initiated by the N-terminal arm and the Ski2 insertion domain. [ 2 ] In yeast, the complex guides RNA molecules to the exosome complex for degradation via a fourth protein, Ski7 , which contains a GTPase -like domain. [ 3 ] Ski7 participates in 3' to 5' degradation of RNA through two different pathways: shortening of the 3' poly(A) tail, and binding of the Ski2 , Ski3 , and Ski8 tetramer to the exosome. [ 1 ] Degradation of the 3' mRNA overhang occurs by association with the 80S ribosome: the 3' end of the mRNA is threaded through the ribosome to Ski2 , preparing it for degradation. [ 4 ] Biochemical studies also show that the Ski complex can thread RNA through the exosome complex, thereby coupling the Ski2 helicase function with the exoribonuclease activity of the exosome and leading to degradation of the RNA strand. [ 2 ]
https://en.wikipedia.org/wiki/Ski_complex
A skid loader , skid-steer loader ( SSL ), or skidsteer is any of a class of compact heavy equipment with lift arms that can attach to a wide variety of buckets and other labor-saving tools or attachments. The wheels typically have no separate steering mechanism and hold a fixed straight alignment on the body of the machine. Turning is accomplished by differential steering , in which the left and right wheel pairs are operated at different speeds, and the machine turns by skidding or dragging its fixed-orientation wheels across the ground. Skid-steer loaders are capable of zero-radius turning, by driving one set of wheels forward while simultaneously driving the opposite set of wheels in reverse. This "zero-turn" capability (the machine can turn around within its own length) makes them extremely maneuverable and valuable for applications that require a compact, powerful and agile loader or tool carrier in confined-space work areas. Like other front loaders , they can push material from one location to another, carry material in the bucket, load material into a truck or trailer and perform a variety of digging and grading operations. The first three-wheeled, front-end loader was invented by brothers Cyril and Louis Keller in Rothsay, Minnesota , in 1957. [ 1 ] The Kellers built the loader to help a farmer, Eddie Velo, mechanize the process of cleaning turkey manure from his barn. The light and compact machine, with its rear caster wheel, was able to turn around within its own length while performing the same tasks as a conventional front-end loader, hence its name. [ 1 ] The Melroe brothers, of Melroe Manufacturing Company in Gwinner, North Dakota , purchased the rights to the Keller loader in 1958 and hired the Kellers to continue refining their invention. As a result of this partnership, the M-200 Melroe self-propelled loader was introduced at the end of 1958. It featured two independent front-drive wheels and a rear caster wheel, a 12.9 hp (9.6 kW) engine and a 750-pound (340 kg) lift capacity. Two years later they replaced the caster wheel with a rear axle and introduced the M-400, the first four-wheel, true skid-steer loader. [ 1 ] The M-440 was powered by a 15.5 hp (11.6 kW) engine and had an 1,100-pound (500 kg) rated operating capacity. Skid-steer development continued into the mid-1960s with the M600 loader. Melroe adopted the well-known Bobcat trademark in 1962. By the late 1960s, competing heavy equipment manufacturers were selling machines of this form factor . Throughout the 1970s and 80s, skid steers began to evolve with more powerful engines, enclosed cabs, and hydraulic systems that supported a broader range of attachments. Manufacturers like John Deere, Case, and New Holland began producing their own models, each adding unique features such as vertical lift paths or enhanced stability. By the 1990s, the addition of joystick controls, improved operator visibility, and quick-attach systems made these machines easier and safer to use. As urban job sites grew tighter and more regulated, the demand for nimble, multi-use equipment like skid steers continued to rise. In the 2000s, innovation accelerated with the introduction of electronic engine controls, advanced telematics , and load-sensing hydraulics. Operators benefited from better fuel efficiency, diagnostics, and fine-tuned control, while rental fleets appreciated the added durability and service tracking. 
Manufacturers also began focusing on emissions compliance, introducing Tier 3 and Tier 4 engine updates to meet evolving environmental regulations. More recently, manufacturers have pushed into autonomous and semi-autonomous capabilities, integrating robotic control systems for grading and pathing, as well as remote operation. Simultaneously, electric skid steers have entered the market, offering zero-emissions alternatives for indoor, urban, and noise-sensitive environments. Skid-steer loaders are typically four-wheeled or tracked vehicles with the front and back wheels on each side mechanically linked together to turn at the same speed, and where the left-side drive wheels can be driven independently of the right-side drive wheels. This is accomplished by having two separate and independent transmissions; one for the left side wheels and one for the right side wheels. Earliest versions of skid steer loaders used forward and reverse clutch drives. Virtually all modern skid steers designed and built since the mid-1970s use two separate hydrostatic transmissions (one for the left side and one for the right side). The differential steering, zero-turn capabilities and lack of visibility often exacerbated by carrying loads with these machines means that their safe operation requires the operator have a good field of vision, good hand eye coordination, manual dexterity and the ability to remember and perform multiple actions at once. [ 2 ] [ 3 ] Before allowing anyone, including adults, to operate a skid steer, they should be assessed on their ability to safely operate the machine and trained in its safe operation. In the US, it is illegal for youth under age 18 employed in non-agricultural jobs to operate a skid steer. [ 4 ] For youth hired to work in agriculture, it is recommended they be at least 16 years old and have an adult assess their abilities using the Agricultural Youth Work Guidelines [ 5 ] before being allowed to operate a skid steer. Another thing to consider are beacon lights and reverse signal alarms that offer a warning to co-workers about the skid steer’s movements. These alarms are not always standard equipment on all farm or landscape skid steer machines, depending on factors like the age of the machine. Use and continued maintenance of these alarms greatly reduce the risk of incidents involving running over and/or pinning co-workers between the machine and an obstacle.  Construction sites and their business contract requirements often call for landscapers to have operational skid steer reverse signal alarms and beacon lights. [ 6 ] The extremely rigid frame and strong wheel bearings prevent the torsional forces caused by this dragging motion from damaging the machine. As with tracked treads , the high ground friction produced by skid steers can rip up soft or fragile road surfaces. They can be converted to low ground friction by using specially designed wheels such as the Mecanum wheel . Skid-steer loaders are sometimes equipped with tracks instead of the wheels, and such a vehicle is known as a compact track loader. [ 7 ] Skid steer loaders, both wheel and track models, operate most efficiently when they are imbalanced – either the front wheels or the back wheels are more heavily loaded. When equipped with an empty bucket, skid steer loaders are all heavier in the rear and the rear wheels pivot in place while the front wheels slide around. When a bucket is fully loaded, the weight distribution reverses and the front wheels become significantly heavier than the rear wheels. 
When making a zero-turn while loaded, the front wheels pivot and the rear wheels slide. Imbalanced operation reduces the amount of power required to turn the machine and minimizes tire wear. Skilled operators always try to keep the machine more heavily loaded on either the front or the rear of the machine. When the weight distribution is 50/50 (or close to it) neither the front set of wheels nor the rear set of wheels wants to pivot or slide and the machine starts to "buck" due to high friction, evenly divided between front and rear axles. Tire wear increases significantly in this condition. Unlike in a conventional front loader , the lift arms in these machines are alongside the driver with the pivot points behind the driver's shoulders. Because of the operator's proximity to moving booms, early skid loaders were not as safe as conventional front loaders, particularly due to the lack of a rollover protection structure . Modern skid loaders have cabs, open or fully enclosed which can serve as rollover protective structures (ROPS) and falling object protective structures (FOPS). The ROPS, FOPS, side screens and operator restraints make up the “zone of protection” in a skid steer, and are designed to reduce the possibility of operator injury or death. The FOPS shields the operator's cab from falling debris, and the ROPS shields the operator in the case of an overturn. The side screens prevent the operator from becoming wedged between the lift arms and the skid steer frame as well as from being struck by protrusions (such as limbs). The operator is secured in the operator seat when the seat belt or seat-bar restraint is utilized, keeping them within the zone of protection. Safety features and safe operation are important because [ 8 ] skid steer loaders are hazardous when safety practices are not observed. Rollover incidents and being crushed by moving parts are the most common causes of serious injuries and death associated with skid steer loaders. [ 9 ] The conventional bucket of many skid loaders can be replaced with a variety of specialized buckets or attachments, many powered by the loader's hydraulic system. The list of attachments available is virtually endless. Some examples include Dura Graders, backhoe , hydraulic breaker, pallet forks, angle broom, sweeper, auger , mower, snow blower , stump grinder, tree spade, trencher , dumping hopper, pavement miller, ripper, tillers, grapple, tilt, roller, snow blade, wheel saw, cement mixer, and wood chipper machine. Some models of skid steer now also have an automatic attachment changer mechanism. This allows a driver to change between a variety of terrain handling, shaping, and leveling tools without having to leave the machine, by using a hydraulic control mechanism to latch onto the attachments. Traditionally hydraulic supply lines to powered attachments may be routed so that the couplings are located near the cab, and the driver does not need to leave the machine to connect or disconnect those supply lines. Recently, manufacturers have also created automatic hydraulic connection systems that allow changing attachments without having to manually disconnect/connect hydraulic lines The original skid-steer loader arms were designed using a hinge near the top of the loader frame towers at the rear of the machine. When the loader arms were raised the mechanism would pivot the loader arm up into the air in an arc that would swing up over the top of the operator. This is known as a radial lift loader. 
[ 11 ] This design is simple to manufacture and lower cost. Radial lift loaders start with the bucket close to the machine when the arms are fully down and start moving up and forward away from the machine as the arms are raised. This provides greater forward reach at mid-point in the lift for dumping at around four to five feet, but less stability at the middle of their lift arc (because the bucket is so much further forward). As the loader arms continue to raise past mid-height the bucket begins to move back closer to the machine and becomes more stable at full lift height, but also has far less forward reach at full height. Radial lift machines are lower cost and tend to be preferred for users who do a lot of work at lower height of lift arms, such as digging and spreading materials at low heights. Radial lift designs have very good lift capacity/stability when the loader arms are all the way down and become less stable (lower lift capacity) as the arms reach mid-point and the bucket is furthest forward. Static stability increases as the arms continue to rise, but raised loads are inherently less stable and safe for all machine types. One downside of radial lift design is that when fully-raised the bucket is back closer to the machine, so it has relatively poor reach when trying to load trucks or hoppers or spreaders. In addition, the bucket is almost over the operator's head and spillage over the back of the bucket can end up on top of the machine or in the operator's lap. Another downside of radial lift machines is that the large frame towers to which the loader arms are attached tend to restrict an operator's visibility to the rear and back corners of the machine. The radial arm is still the most common design and preferred by many users, but almost all manufacturers that started with radial lift designs began also producing "vertical lift" designs as well. "Vertical lift" designs use additional links and hinges on the loader arm, with the main pivot points towards the center or front of the machine. This allows the loader arm to have greater operating height and reach while retaining a compact design. There are no truly "vertical lift" designs in production. All loaders use multiple links (that all move in radial arcs) which aim to straighten the lift path of the bucket as it is raised. This allows close to vertical movement at points of the lift range, to keep the bucket forward of the operator's cab, allowing safe dumping into tall containers or vehicles. Some designs have more arc in the lowest part of the lift arc while other designs have more arc near the top of the lift arc. One downside of vertical lift designs is somewhat higher cost and complexity of manufacturing. Some vertical lift designs may also have reduced rear or side visibility when the arms are down low, but superior visibility as the arms are raised (especially if the design does not require a large rear frame tower). Most Vertical lift machines provide more constant stability as the arms are raised from fully-lowered to fully-raised position since the bucket (load) has a similar distance from the machine from bottom to top of the lift path. As a side benefit to constant stability, most vertical lift machines have larger bucket capacities and longer, flatter low-profile buckets that can carry more material per cycle and tend to provide smoother excavating and grading than short-lip buckets. 
Vertical lift designs have grown rapidly in popularity in the past thirty years and now make up a significant proportion of new skid loader sales. When controls are activated, the loader or lift arm attachments can move and crush individuals who are within the range of the machinery. To prevent injuries, it is strongly advisable for operators to not start or operate controls from outside of the cab. When in the operator’s seat, the operator should always fasten the seatbelt and lower the safety bar to stay securely in the cab and avoid being crushed. [ 12 ] Operators should also ensure that any helpers or bystanders are clear of the machine before starting it. A skid-steer loader can sometimes be used in place of a large excavator by digging a hole from the inside. This is especially true for digging swimming pools in a back yard where a large excavator cannot fit. The skid loader first digs a ramp leading to the edge of the desired excavation. It then uses the ramp to carry material out of the hole. The skid loader reshapes the ramp making it steeper and longer as the excavation deepens. This method is also useful for digging under a structure where overhead clearance does not allow for the boom of a large excavator, such as digging a basement under an existing house. Several companies make backhoe attachments for skid-steers. These are more effective for digging in a small area than the method above and can work in the same environments. Other applications may consist of transporting raw material around a job site, either in buckets or using pallet forks. Rough terrain forklifts have very poor maneuverability; and smaller "material handling" forklifts have good maneuverability but poor traction. Skid steer loaders have very good maneuverability and traction but typically lower lift capacity than forklifts. Skid steer loaders excel at snow removal , especially in smaller parking lots where maneuverability around existing cars, light poles, and curbs is an issue with larger snow plows. Skid steers also have the ability to actually remove the snow rather than just plowing it and pushing snow into a pile.
https://en.wikipedia.org/wiki/Skid-steer_loader
A skidder is any type of heavy vehicle used in a logging operation for pulling cut trees out of a forest in a process called "skidding", in which the logs are transported from the cutting site to a landing. There they are loaded onto trucks (or in times past, railroad cars or flumes ), and sent to the mill. One exception is that in the early days of logging, when distances from the timberline to the mill were shorter, the landing stage was omitted altogether, and the "skidder" would have been used as the main road vehicle, in place of the trucks, railroad, or flume. Modern forms of skidders can pull trees with a cable and winch ( cable skidder ), just like the old steam donkeys , or with a hydraulic grapple either on boom ( grapple skidder ) or on the back of the frame (clambunk skidder) . [ clarification needed ] Early skidders were pulled by a team of oxen , horses or mules . The driver would straddle the cart over felled logs, where dangling tongs would be positioned to raise the end of the log off the ground. The team pulled the tongue forward, allowing the log to "skid" along between the rolling wheels. These were known as "slip-tongue wheels" Starting in the early 1920s, animals were gradually replaced by gasoline -powered crawlers, although some small operations continue to use horses. In other places, steel "arches" were used behind the crawlers. Similar in function to the slip-tongue wheels, arches were used to reduce friction by raising up one end of the load, which was dangled from a cable which in turn ran down the back of the arch, and was raised or lowered by the crawler's winch. Another piece similar to the arch was the "bummer", which was simply a small trailer to be towed behind a crawler, on top of which one end of the log load would rest. The early mechanical skidders were steam powered. They traveled on railroads, known as "dummylines" and the felled trees were dragged or "skidded" to the railroad where they were later loaded onto rail cars. Some, such as the steam donkeys , were relatively simple but other mechanical skidders were more complex. The largest of these mechanical skidders was the Lidgerwood skidder, which not only brought logs to the landing from the cutting site, but loaded them onto railroad cars as well, making it both a skidder and loader. One popular brand was the Clyde Skidder, built by Clyde Ironworks in Duluth, Minnesota . [ citation needed ] The Clyde was capable of retrieving logs from four different points at the same time. Each cable, or lead, was approximately 1,000 feet (300 m) in length. Once the logs were attached and a clearance signal was sent for retrieval, they could be skidded at a speed of 1,000 feet per minute (18 km/h). In New Zealand cables were run five miles. [ citation needed ] Working conditions around these machines were very dangerous. [ example needed ] Contemporary skidders are tracked or four wheel drive tractors with a diesel engine , winch and steel , funnel-shaped guards on the rear to protect their wheels. They have articulated steering and usually a small, adjustable, push-blade on the front. The operator/ logger is protected from falling or flying debris (or parted cables, or rolling over) by a steel enclosure. They are one of the few logging machines that is capable of thinning or selective logging in larger timber. [ citation needed ] Forwarders can haul small short pieces out, however a skidder is one of the few options for taking out some trees while leaving others when thinning mature lumber. 
The skidder can also be used for pulling tree stumps, pushing over small trees, and preliminary grading of a logging path known as a " skid road ". While wood is being yarded (pulled) by the skidder, tree particles and seeds are cultivated into the soil. [ citation needed ] Skidder logging can be disadvantageous in thinning operations due to the damage caused to remaining trees as branches and trunks are dragged against them, tearing away the protective bark of living trees. Another ecological concern is the deep furrows in the topsoil sometimes made by skidders, especially when using tires with chains, which alter surface runoff patterns and increase the costs of forest rehabilitation and reforestation . A device similar to the slip-tongue log skidder was also made entirely of steel , including the spoked wheels. On a cable skidder, the cable is reeled out and attached to a pull of cut timber, then the winch pulls the load toward the skidder. The winch or grapple holds the trees while the skidder drags them to a landing area. Cable skidders are more labor-intensive than grapple skidders because someone (the operator or a second person) must drag the winch line out to the logs and hook them up manually. Nowadays, cable skidders are less popular than in the past. [ citation needed ] These machines are most useful in areas where it is not possible to drive the machine close to the log (such as on steep hills). Grapple skidders use a loader crane boom with a hydraulic grapple bucket to grab and lift the timber. There are three types of 'fixed boom' grapple skidders. A single-function boom type has two hydraulic cylinders, only allowing the boom to lower in one position. Dual-function booms have four cylinders, which allows the boom to be adjusted in two different places. The third type permits the grapple boom to be swung from side to side, allowing spread-out trees to be grabbed at once. In some areas, loggers have combined a hydraulic claw on the side with the blade of their grapple skidders, making it possible to pile logs in some cases. More common on cable skidders, this also permits hauling back bark and tops when returning from a landing area to fallen timber. Clambunk skidders are more of a middle ground between a skidder and a forwarder . They have large, free-swiveling hydraulic jaws positioned on the back of the chassis that clamp the logs. They usually do not have self-loading capability and require a feller buncher or another machine with a loader arm.
https://en.wikipedia.org/wiki/Skidder
Skin allergy testing comprises a range of methods for medical diagnosis of allergies that attempts to provoke a small, controlled, allergic response. A microscopic amount of an allergen is introduced to a patient's skin by various means: [ 1 ] If an immuno-response is seen in the form of a rash , urticaria ( hives ), or anaphylaxis it can be concluded that the patient has a hypersensitivity (or allergy ) to that allergen . Further testing can be done to identify the particular allergen. [ citation needed ] The "skin scratch test" as it is called, is not very commonly used due to the increased likelihood of infection. On the other hand, the "skin scrape test" is painless, does not leave residual pigmentation, and does not have a risk of infection, since it is limited to the superficial layer of the skin. [ citation needed ] Some allergies are identified in a few minutes but others may take several days. In all cases where the test is positive, the skin will become raised, red, and appear itchy. The results are recorded - larger wheals indicating that the subject is more sensitive to that particular allergen. A negative test does not conclusively rule out an allergy; occasionally, the concentration needs to be adjusted, or the body fails to elicit a response. [ citation needed ] In the prick, scratch and scrape tests, a few drops of the purified allergen are gently pricked on to the skin surface, usually the forearm. This test is usually done in order to identify allergies to pet dander , dust, pollen , foods or dust mites . Intradermal injections are done by injecting a small amount of allergen just beneath the skin surface. The test is done to assess allergies to drugs like penicillin [ 5 ] or bee venom. To ensure that the skin is reacting in the way it is supposed to, all skin allergy tests are also performed with proven allergens like histamine , and non-allergens like glycerin . The majority of people do react to histamine and do not react to glycerin. If the skin does not react appropriately to these allergens then it most likely will not react to the other allergens. These results are interpreted as falsely negative. [ 6 ] The patch test uses rectangles of special hypoallergenic adhesive tape with different allergens on them. The patch is applied to the skin, usually on the back. The allergens on the patch include latex, medications, preservatives, hair dyes, fragrances, resins, and various metals. [ 7 ] Patch testing is used to detect allergic contact dermatitis but does not test for hives or food allergy. [ 8 ] Also called an intradermal test , this skin end point titration (SET) uses an intradermal injection of allergens at increasing concentrations to measure allergic response. [ 9 ] To prevent a severe allergic reaction, the test is started with a very dilute solution. After 10 minutes, the injection site is measured to look for growth of wheal, a small swelling of the skin. Two millimeters of growth in 10 minutes is considered positive. If 2 mm of growth is noted, then a second injection at a higher concentration is given to confirm the response. The end point is the concentration of antigen that causes an increase in the size of the wheal followed by confirmatory whealing. If the wheal grows larger than 13 mm, then no further injections are given since this is considered a major reaction. [ citation needed ] There are no major preparations required for skin testing . At the first consult, the subject's medical history is obtained and physical examination is performed. 
All patients should bring a list of their medications because some may interfere with the testing. Other medications may increase the chance of a severe allergic reaction. Medications that commonly interfere with skin testing include the following: Patients who undergo skin testing should know that anaphylaxis can occur anytime. So if any of the following symptoms are experienced, a physician consultation is recommended immediately: Even though skin testing may seem to be a benign procedure, it does have some risks, including swollen red bumps (hives) which may occur after the test. The hives usually disappear in a few hours after the test. In rare cases they can persist for a day or two. These hives may be itchy and are best treated by applying an over the counter hydrocortisone cream. [ 11 ] In very rare cases one may develop a full blown allergic reaction. Physicians who perform skin test always have equipment and medications available in case an anaphylaxis reaction occurs. This is the main reason why people should not get skin testing performed at corner stores or by people who have no medical training. [ citation needed ] Antihistamines , which are commonly used to treat allergy symptoms, interfere with skin tests, as they can prevent the skin from reacting to the allergens being tested. People who take an antihistamine need either to choose a different form of allergy test or to stop taking the antihistamine temporarily before the test. The period of time needed can range from a day or two to 10 days or longer, depending on the specific medication. Some medications not primarily used as antihistamines, including tricyclic antidepressants , phenothiazine -based antipsychotics, and several kinds of medications used for gastrointestinal disorders, can similarly interfere with skin tests. [ 12 ] People who have severe, generalized skin disease or an acute skin infection should not undergo skin testing, as one needs uninvolved skin for testing. Also, skin testing should be avoided for people at a heightened risk of anaphylactic shock, including people who are known to be highly sensitive to even the smallest amount of allergen. [ 13 ] Besides skin tests, there are blood tests which measure a specific antibody in the blood. The IgE antibody plays a vital role in allergies but its levels in blood do not always correlate with the allergic reaction. [ 14 ] There are many alternative health care practitioners who perform a variety of provocation neutralization tests, but the vast majority of these tests have no validity and have never been proven to work scientifically.
https://en.wikipedia.org/wiki/Skin_allergy_test
Skin flora , also called skin microbiota , refers to microbiota ( communities of microorganisms ) that reside on the skin , typically human skin . Many of them are bacteria of which there are around 1,000 species upon human skin from nineteen phyla . [ 1 ] [ 2 ] Most are found in the superficial layers of the epidermis and the upper parts of hair follicles . Skin flora is usually non-pathogenic, and either commensal (are not harmful to their host) or mutualistic (offer a benefit). The benefits bacteria can offer include preventing transient pathogenic organisms from colonizing the skin surface, either by competing for nutrients, secreting chemicals against them, or stimulating the skin's immune system . [ 3 ] However, resident microbes can cause skin diseases and enter the blood system , creating life-threatening diseases, particularly in immunosuppressed people. [ 3 ] A major non-human skin flora is Batrachochytrium dendrobatidis , a chytrid and non-hyphal zoosporic fungus that causes chytridiomycosis , an infectious disease thought to be responsible for the decline in amphibian populations . [ 4 ] The estimate of the number of bacteria species present on skin has been radically changed by the use of 16S ribosomal RNA to identify bacterial species present on skin samples direct from their genetic material. Previously such identification had depended upon microbiological culture upon which many varieties of bacteria did not grow and so were hidden to science. [ 1 ] Staphylococcus epidermidis and Staphylococcus aureus were thought from cultural based research to be dominant. However 16S ribosomal RNA research finds that while common, these species make up only 5% of skin bacteria. However, skin variety provides a rich and diverse habitat for bacteria . Most come from four phyla: Actinomycetota (51.8%), Bacillota (24.4%), Pseudomonadota (16.5%), and Bacteroidota (6.3%). [ 5 ] There are three main ecological areas: sebaceous, moist, and dry. Propionibacteria and Staphylococci species were the main species in sebaceous areas. In moist places on the body Corynebacteria together with Staphylococci dominate. In dry areas, there is a mixture of species but Betaproteobacteria and Flavobacteriales are dominant. Ecologically, sebaceous areas had greater species richness than moist and dry ones. The areas with least similarity between people in species were the spaces between fingers , the spaces between toes , axillae , and umbilical cord stump. Most similarly were beside the nostril , nares (inside the nostril), and on the back. [ 1 ] A study of the area between toes in 100 young adults found 14 different genera of fungi. These include yeasts such as Candida albicans , Rhodotorula rubra , Torulopsis and Trichosporon cutaneum , dermatophytes (skin living fungi) such as Microsporum gypseum , and Trichophyton rubrum and nondermatophyte fungi (opportunistic fungi that can live in skin) such as Rhizopus stolonifer , Trichosporon cutaneum , Fusarium , Scopulariopsis brevicaulis , Curvularia , Alternaria alternata , Paecilomyces , Aspergillus flavus and Penicillium species. [ 6 ] A study by the National Human Genome Research Institute in Bethesda, Maryland , researched the DNA of human skin fungi at 14 different locations on the body. These were the ear canal, between the eyebrows, the back of the head, behind the ear, the heel, toenails, between the toes, forearm, back, groin, nostrils, chest, palm, and the crook of the elbow. 
The study showed a large fungal diversity across the body, the richest habitat being the heel, which hosts about 80 species of fungi. By way of contrast, there are some 60 species in toenail clippings and 40 between the toes. Other rich areas are the palm, forearm and inside the elbow, with from 18 to 32 species. The head and the trunk hosted between 2 and 10 each. [ 7 ] The umbilicus, or navel , is an area of the body that is rarely exposed to UV light, soaps, or bodily secretions [ 8 ] (the navel does not produce any secretions or oils) [ 9 ] and because it is an almost undisturbed community of bacteria [ 10 ] it is an excellent part of the skin microbiome to study. [ 11 ] The navel, or umbilicus is a moist microbiome of the body [ 12 ] (with high humidity and temperatures), [ 13 ] that contains a large amount of bacteria, [ 14 ] especially bacteria that favors moist conditions such as Corynebacterium [ 15 ] and Staphylococcus . [ 13 ] The Belly Button Biodiversity Project began at North Carolina State University in early 2011 with two initial groups of 35 and 25 volunteers. [ 10 ] Volunteers were given sterile cotton swabs and were asked to insert the cotton swabs into their navels, to turn the cotton swab around three times and then return the cotton swab to the researchers in a vial [ 16 ] that contained a 0.5 ml 10% phosphate saline buffer. [ 10 ] Researchers at North Carolina State University, led by Jiri Hulcr, [ 17 ] then grew the samples in a culture until the bacterial colonies were large enough to be photographed and then these pictures were posted on the Belly Button Biodiversity Project's website (volunteers were given sample numbers so that they could view their own samples online). [ 16 ] These samples then were analyzed using 16S rDNA libraries so that strains that did not grow well in cultures could be identified. [ 10 ] The researchers at North Carolina State University discovered that while it was difficult to predict every strain of bacteria in the microbiome of the navel that they could predict which strains would be prevalent and which strains of bacteria would be quite rare in the microbiome. [ 10 ] It was found that the navel microbiomes only contained a few prevalent types of bacteria ( Staphylococcus , Corynebacterium , Actinobacteria, Clostridiales, and Bacilli) and many different types of rare bacteria. [ 10 ] Other types of rare organisms were discovered inside the navels of the volunteers including three types of Archaea, two of which were found in one volunteer who claimed not to have bathed or showered for many years. [ 10 ] Staphylococcus and Corynebacterium were among the most common types of bacteria found in the navels of this project's volunteers and these types of bacteria have been found to be the most common types of bacteria found on the human skin in larger studies of the skin microbiome [ 18 ] (of which the Belly Button Biodiversity Project is a part). [ 10 ] (In these larger studies it has been found that females generally have more Staphylococcus living in their skin microbiomes [ 18 ] (usually Staphylococcus epidermidis ) [ 16 ] and that men have more Corynebacterium living in their skin microbiomes.) [ 18 ] According to the Belly Button Biodiversity Project [ 10 ] at North Carolina State University, there are two types of microorganisms found in the navel and surrounding areas. 
Transient bacteria (bacteria that does not reproduce) [ 12 ] forms the majority of the organisms found in the navel, and an estimated 1400 various strains were found in 95% of participants of the study. [ 19 ] The Belly Button Biodiversity Project is ongoing and has now taken swabs from over 500 people. [ 10 ] The project was designed with the aim of countering that misconception that bacteria are always harmful to humans [ 20 ] and that humans are at war with bacteria. [ 21 ] In actuality, most strains of bacteria are harmless [ 13 ] if not beneficial for the human body. [ 22 ] Another of the project's goals is to foster public interest in microbiology. [ 17 ] Working in concert with the Human Microbiome Project, the Belly Button Biodiversity Project also studies the connections between human microbiomes and the factors of age, sex, ethnicity, location [ 17 ] and overall health. [ 23 ] Skin microflora can be commensals , mutualistic or pathogens . Often they can be all three depending upon the strength of the person's immune system . [ 3 ] Research upon the immune system in the gut and lungs has shown that microflora aids immunity development: however such research has only started upon whether this is the case with the skin. [ 3 ] Pseudomonas aeruginosa is an example of a mutualistic bacterium that can turn into a pathogen and cause disease: if it gains entry into the circulatory system it can result in infections in bone, joint, gastrointestinal, and respiratory systems. It can also cause dermatitis . However, P. aeruginosa produces antimicrobial substances such as pseudomonic acid (that are exploited commercially such as Mupirocin ). This works against staphylococcal and streptococcal infections. P. aeruginosa also produces substances that inhibit the growth of fungus species such as Candida krusei , Candida albicans , Torulopsis glabrata , Saccharomyces cerevisiae and Aspergillus fumigatus . [ 24 ] It can also inhibit the growth of Helicobacter pylori . [ 25 ] So important is its antimicrobial actions that it has been noted that "removing P. aeruginosa from the skin, through use of oral or topical antibiotics, may inversely allow for aberrant yeast colonization and infection." [ 3 ] Another aspect of bacteria is the generation of body odor . Sweat is odorless however several bacteria may consume it and create byproducts which may be considered putrid by humans (as in contrast to flies, for example, that may find them attractive/appealing). Several examples are: The skin creates antimicrobial peptides such as cathelicidins that control the proliferation of skin microbes. Cathelicidins not only reduce microbe numbers directly but also cause the secretion of cytokine release which induces inflammation , angiogenesis , and reepithelialization . Conditions such as atopic dermatitis have been linked to the suppression in cathelicidin production. [ 29 ] In rosacea abnormal processing of cathelicidin cause inflammation. Psoriasis has been linked to self-DNA created from cathelicidin peptides that causes autoinflammation . A major factor controlling cathelicidin is vitamin D 3 . [ 30 ] The superficial layers of the skin are naturally acidic ( pH 4–4.5) due to lactic acid in sweat and produced by skin bacteria. [ 31 ] At this pH mutualistic flora such as Staphylococci , Micrococci , Corynebacterium and Propionibacteria grow but not transient bacteria such as Gram-negative bacteria like Escherichia and Pseudomonas or Gram positive ones such as Staphylococcus aureus . 
[ 31 ] Another factor affecting the growth of pathological bacteria is that the antimicrobial substances secreted by the skin are enhanced in acidic conditions. [ 31 ] In alkaline conditions, bacteria cease to be attached to the skin and are more readily shed. It has been observed that the skin also swells under alkaline conditions and opens up, allowing bacterial movement to the surface. [ 31 ] If activated, the immune system in the skin produces cell-mediated immunity against microbes such as dermatophytes (skin fungi). [ 32 ] One reaction is to increase stratum corneum turnover and so shed the fungus from the skin surface. Skin fungi such as Trichophyton rubrum have evolved to create substances that limit the immune response to them. [ 32 ] The shedding of skin is a general means to control the buildup of flora upon the skin surface. [ 33 ] Microorganisms play a role in noninfectious skin diseases such as atopic dermatitis , [ 34 ] rosacea , psoriasis , [ 35 ] and acne . [ 36 ] Damaged skin can cause nonpathogenic bacteria to become pathogenic . [ 37 ] The diversity of species on the skin is related to later development of dermatitis. [ 38 ] Acne vulgaris is a common skin condition characterised by excessive sebum production by the pilosebaceous unit and inflammation of the skin. [ 39 ] Affected areas are typically colonised by Cutibacterium acnes , a member of the commensal microbiota even in those without acne. [ 40 ] High populations of C. acnes are linked to acne vulgaris, although only certain strains are strongly associated with acne while others are associated with healthy skin; the relative population of C. acnes is similar between those with acne and those without. [ 39 ] [ 40 ] Current treatment includes topical and systemic antibacterial drugs, which result in decreased C. acnes colonisation and/or activity. [ 41 ] Potential probiotic treatment includes the use of Staphylococcus epidermidis to inhibit C. acnes growth: S. epidermidis produces succinic acid , which has been shown to inhibit C. acnes growth. [ 42 ] Lactobacillus plantarum has also been shown to act as an anti-inflammatory and to improve the antimicrobial properties of the skin when applied topically. It was also shown to be effective in reducing acne lesion size. [ 43 ] Individuals with atopic dermatitis have shown an increase in populations of Staphylococcus aureus in both lesional and nonlesional skin. [ 40 ] Atopic dermatitis flares are associated with low bacterial diversity due to colonisation by S. aureus , and following standard treatment bacterial diversity has been seen to increase. [ citation needed ] Current treatments include combinations of topical or systemic antibiotics, corticosteroids , and diluted bleach baths. [ 44 ] Potential probiotic treatments include using the commensal skin bacterium S. epidermidis to inhibit S. aureus growth. During atopic dermatitis flares, population levels of S. epidermidis have been shown to increase, apparently as an attempt to control S. aureus populations. [ 40 ] [ 44 ] Low gut microbial diversity in babies has been associated with an increased risk of atopic dermatitis. [ 45 ] Infants with atopic eczema have low levels of Bacteroides and high levels of Bacillota ; Bacteroides have anti-inflammatory properties which are essential against dermatitis. [ 45 ] (See gut microbiota .) Psoriasis vulgaris typically affects drier skin sites such as the elbows and knees . Dry areas of the skin tend to have high microbial diversity but smaller populations than sebaceous sites. 
[ 41 ] A study using swab sampling techniques showed that areas rich in Bacillota (mainly Streptococcus and Staphylococcus ) and Actinomycetota (mainly Corynebacterium and Propionibacterium ) are associated with psoriasis, [ 46 ] while another study using biopsies associated increased levels of Bacillota and Actinomycetota with healthy skin. [ 47 ] However, most studies show that individuals affected by psoriasis have a lower microbial diversity in the affected areas. Treatments for psoriasis include topical agents, phototherapy, and systemic agents. [ 48 ] Current research on the skin microbiota's role in psoriasis is inconsistent, so there are as yet no potential probiotic treatments. Rosacea is typically connected to sebaceous sites of the skin. The skin mite Demodex folliculorum produces lipases that allow it to use sebum as a food source, so it has a high affinity for sebaceous skin sites. Although it is a part of the commensal skin microbiota, patients affected with rosacea show an increase in D. folliculorum compared to healthy individuals, suggesting pathogenicity . [ 49 ] Bacillus oleronius , a Demodex -associated microbe, is not typically found in the commensal skin microbiota but initiates inflammatory pathways whose starting mechanism is similar to that seen in rosacea patients. [ 40 ] Populations of S. epidermidis have also been isolated from pustules of rosacea patients. However, it is possible that they were moved by Demodex to areas that favour growth, as Demodex has been shown to transport bacteria around the face. [ 50 ] Current treatments include topical and oral antibiotics and laser therapy. [ 51 ] As current research has yet to show a clear mechanism for Demodex influence in rosacea, there are no potential probiotic treatments. Skin microbes are a potential source of infection of medical devices such as catheters . [ 52 ] The human skin is host to numerous bacterial and fungal species, some of which are known to be harmful, some known to be beneficial and the vast majority unresearched. The use of bactericidal and fungicidal soaps will inevitably lead to bacterial and fungal populations which are resistant to the chemicals employed (see drug resistance ). Skin flora do not readily pass between people: 30 seconds of moderate friction and dry hand contact results in a transfer of only 0.07% of natural hand flora from bare skin, with a greater percentage transferred from gloves. [ 53 ] The most effective (60–80% reduction) antimicrobial washing is with ethanol , isopropanol , and n-propanol . Viruses are most affected by high (95%) concentrations of ethanol, while bacteria are more affected by n-propanol. [ 54 ] Unmedicated soaps are not very effective: in one comparison, health care workers washed their hands once in nonmedicated liquid soap for 30 seconds, while students/technicians washed 20 times. [ 55 ] An important use of hand washing is to prevent the transmission of antibiotic-resistant skin flora that cause hospital-acquired infections such as methicillin-resistant Staphylococcus aureus . While such flora have become antibiotic resistant due to antibiotics, there is no evidence that recommended antiseptics or disinfectants select for antibiotic-resistant organisms when used in hand washing. [ 56 ] However, many strains of organisms are resistant to some of the substances used in antibacterial soaps, such as triclosan . 
[ 56 ] One study of bar soaps in dental clinics found they all had their own flora, with on average two to five different genera of microorganisms, and those used most often were more likely to have more species varieties. [ 57 ] Another study of bar soaps in public toilets found even more flora. [ 58 ] Another study found that very dry soaps are not colonized, while all those that rest in pools of water are. [ 59 ] However, one experiment using soaps inoculated with Pseudomonas aeruginosa and Escherichia coli found that washing with the inoculated bar soap did not transmit these bacteria to participants' hands. [ 60 ] Washing skin repeatedly can damage the protective external layer and cause transepidermal loss of water. This can be seen as roughness characterized by scaling and dryness, itchiness, redness, and dermatitis provoked by microorganisms and allergens penetrating the corneal layer. Wearing gloves can cause further problems, since gloves produce a humid environment favoring the growth of microbes and contain irritants such as latex and talcum powder . [ 61 ] Hand washing can damage skin because the stratum corneum , the top layer of skin, consists of 15 to 20 layers of keratin disks ( corneocytes ), each of which is surrounded by a thin film of skin lipids that can be removed by alcohols and detergents . [ 62 ] Damaged skin, defined by extensive cracking of the skin surface, widespread reddening or occasional bleeding, has also been found to be more frequently colonized by Staphylococcus hominis , and these strains were more likely to be methicillin resistant. [ 61 ] Though not related to greater antibiotic resistance, damaged skin was also more likely to be colonized by Staphylococcus aureus , gram-negative bacteria , Enterococci and Candida . [ 61 ] The skin flora is different from that of the gut , which is predominantly Bacillota and Bacteroidota . [ 63 ] There is also a low level of variation between people that is not found in gut studies. [ 5 ] Both gut and skin flora, however, lack the diversity found in soil flora . [ 1 ]
https://en.wikipedia.org/wiki/Skin_flora
In trigonometry , a skinny triangle [ citation needed ] is a triangle whose height is much greater than its base. The solution of such triangles can be greatly simplified by using the approximation that the sine of a small angle is equal to that angle in radians . The solution is particularly simple for skinny triangles that are also isosceles or right triangles : in these cases the need for trigonometric functions or tables can be entirely dispensed with. The skinny triangle finds uses in surveying, astronomy, and shooting. The approximated solution to the skinny isosceles triangle, referring to figure 1, is: This is based on the small-angle approximations : and when θ {\displaystyle \scriptstyle \theta } is in radians . The proof of the skinny triangle solution follows from the small-angle approximation by applying the law of sines . Again referring to figure 1: The term π − θ 2 {\displaystyle \scriptstyle {\frac {\pi -\theta }{2}}} represents the base angle of the triangle and is this value because the sum of the internal angles of any triangle (in this case the two base angles plus θ ) are equal to π. Applying the small angle approximations to the law of sines above results in which is the desired result. This result is equivalent to assuming that the length of the base of the triangle is equal to the length of the arc of circle of radius r subtended by angle θ . The error is 10% or less for angles less than about 43°, [ 2 ] and improves quadratically: when the angle decreases by a factor of k , the error decreases by k 2 . The side-angle-side formula for the area of the triangle is Applying the small angle approximations results in The approximated solution to the right skinny triangle, referring to figure 3, is: This is based on the small-angle approximation which when substituted into the exact solution yields the desired result. The error of this approximation is less than 10% for angles 31° or less. [ 3 ] Applications of the skinny triangle occur in any situation where the distance to a far object is to be determined. This can occur in surveying, astronomy, and also has military applications. The skinny triangle is frequently used in astronomy to measure the distance to Solar System objects. The base of the triangle is formed by the distance between two measuring stations and the angle θ is the parallax angle formed by the object as seen by the two stations. This baseline is usually very long for best accuracy; in principle the stations could be on opposite sides of the Earth . However, this distance is still short compared to the distance to the object being measured (the height of the triangle) and the skinny triangle solution can be applied and still achieve great accuracy. The alternative method of measuring the base angles is theoretically possible but not so accurate. The base angles are very nearly right angles and would need to be measured with much greater precision than the parallax angle in order to get the same accuracy. [ 4 ] The same method of measuring parallax angles and applying the skinny triangle can be used to measure the distances to stars, at least the nearer ones. In the case of stars, however, a longer baseline than the diameter of the Earth is usually required. Instead of using two stations on the baseline, two measurements are made from the same station at different times of year. During the intervening period, the orbit of the Earth around the Sun moves the measuring station a great distance, so providing a very long baseline. 
This baseline can be as long as the major axis of the Earth's orbit or, equivalently, two astronomical units (AU). The distance to a star with a parallax angle of only one arcsecond measured on a baseline of one AU is a unit known as the parsec (pc) in astronomy and is equal to about 3.26 light years . [ 5 ] There is an inverse relationship between the distance in parsecs and the angle in arcseconds. For instance, two arcseconds corresponds to a distance of 0.5 pc and 0.5 arcsecond corresponds to a distance of two parsecs. [ 6 ] The skinny triangle is useful in gunnery in that it allows a relationship to be calculated between the range and size of the target without the shooter needing to compute or look up any trigonometric functions . Military and hunting telescopic sights often have a reticle calibrated in milliradians , in this context usually called just mils or mil-dots. A target 1 metre in height and measuring 1 mil in the sight corresponds to a range of 1000 metres. There is an inverse relationship between the angle measured in a sniper's sight and the distance to target. For instance, if this same target measures 2 mils in the sight then the range is 500 metres. [ 7 ] Another unit which is sometimes used on gunsights is the minute of arc (MOA). The distances corresponding to minutes of arc are not exact numbers in the metric system as they are with milliradians; however, there is a convenient approximate whole number correspondence in imperial units . A target 1 inch in height and measuring 1 MOA in the sight corresponds to a range of 100 yards . [ 7 ] Or, perhaps more usefully, a target 6 feet in height and measuring 4 MOA corresponds to a range of 1800 yards (just over a mile). A simple form of aviation navigation, dead reckoning , relies on making estimates of wind speeds aloft over long distances to calculate a desired heading. Since predicted or reported wind speeds are rarely accurate, corrections to the aircraft's heading need to be made at regular intervals. Skinny triangles form the basis of the 1 in 60 rule , which is "After travelling 60 miles, your heading is one degree off for every mile you're off course". "60" is very close to 180 / π = 57.30.
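The display formulas for the skinny-triangle solutions are not reproduced above, so the following Python sketch relies only on the standard small-angle approximations sin θ ≈ θ and tan θ ≈ θ; the function names and chosen test angles are illustrative, not taken from the article. It checks the error figures quoted in the text (roughly 10% near 43° for the sine-based case and near 31° for the tangent-based case, improving quadratically as the angle shrinks) and reproduces the parallax, mil-dot and minute-of-arc ranging examples.

```python
import math

def rel_err(approx, exact):
    return abs(approx - exact) / abs(exact)

# sin θ ≈ θ reaches roughly 10% error near 43°; tan θ ≈ θ near 31°.
for deg in (10, 20, 31, 43):
    t = math.radians(deg)
    print(f"{deg:2d} deg  sin err {100 * rel_err(t, math.sin(t)):.1f}%   "
          f"tan err {100 * rel_err(t, math.tan(t)):.1f}%")

# Quadratic improvement: halving the angle roughly quarters the error.
print(rel_err(math.radians(20), math.sin(math.radians(20))) /
      rel_err(math.radians(10), math.sin(math.radians(10))))      # close to 4

# Parallax: with a 1 au baseline, distance in parsecs is the reciprocal of the
# parallax angle in arcseconds.
def parsecs(parallax_arcsec):
    return 1.0 / parallax_arcsec

print(parsecs(2.0), parsecs(0.5))                 # 0.5 pc and 2 pc

# Mil-dot ranging: range = target size / angle in radians (1 mil = 1/1000 rad).
def range_m(target_height_m, mils):
    return target_height_m / (mils / 1000.0)

print(range_m(1.0, 1.0), range_m(1.0, 2.0))       # 1000 m and 500 m

# Minute-of-arc version: 1 MOA subtends about 1 inch at 100 yards.
def range_yd(target_height_in, moa):
    return 100.0 * target_height_in / moa

print(range_yd(72.0, 4.0))                        # a 6 ft target at 4 MOA: 1800 yd
```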
https://en.wikipedia.org/wiki/Skinny_triangle
A skirret is an archaic form of chalk line . It is a wooden tool shaped like the letter "T", historically used to ensure the foundation of a building was straight by laying down string as a marker. Today it is obsolete and little known, save for its use in some Freemasonry ceremonies. [ 1 ] This tool article is a stub . You can help Wikipedia by expanding it . This Freemasonry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Skirret_(tool)
Skirt-bag (Croatian: Suknja-torba) is a hybrid fashion design item made by Ivana Barač, presented in 2008 at Kino EUROPA, Zagreb , Croatia . The design is not a common baggy skirt, but a dual-function item with an unusual construction. Ivana Barač is a Croatian fashion designer from Dubrovnik with a background in stone conservation. After studying at Istituto di Moda Burgo in Milan , Barač focused on design construction and started working on a collection of 14 variations of bags that were all dual-functional, [ 1 ] including the Skirt-bag presented at Zagreb Fashion Week in 2008. [ 2 ] [ 3 ] The use of zippers enables the item to be worn as a skirt or carried as a bag. [ 4 ] The multifunctional design attracted positive critical acclaim and audience attention at a time when the recession was hitting Europe and fashion hard. Grgo Zečić, now fashion director of Vmen , said that one needed to be an excellent constructor for the bag to convert to a skirt and back, with very well composed geometry and an avant-garde approach. [ 5 ]
https://en.wikipedia.org/wiki/Skirt-bag
In probability theory and statistics , a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe/model the dependence (inter-correlation) between random variables . [ 1 ] Their name, introduced by applied mathematician Abe Sklar in 1959, comes from the Latin for "link" or "tie", similar but unrelated to grammatical copulas in linguistics . Copulas have been used widely in quantitative finance to model and minimize tail risk [ 2 ] and portfolio-optimization applications. [ 3 ] Sklar's theorem states that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula which describes the dependence structure between the variables. Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating marginals and copulas separately. There are many parametric copula families available, which usually have parameters that control the strength of dependence. Some popular parametric copula models are outlined below. Two-dimensional copulas are known in some other areas of mathematics under the name permutons and doubly-stochastic measures . Consider a random vector ( X 1 , X 2 , … , X d ) {\displaystyle (X_{1},X_{2},\dots ,X_{d})} . Suppose its marginals are continuous, i.e. the marginal CDFs F i ( x ) = Pr [ X i ≤ x ] {\displaystyle F_{i}(x)=\Pr[X_{i}\leq x]} are continuous functions . By applying the probability integral transform to each component, the random vector has marginals that are uniformly distributed on the interval [0, 1]. The copula of ( X 1 , X 2 , … , X d ) {\displaystyle (X_{1},X_{2},\dots ,X_{d})} is defined as the joint cumulative distribution function of ( U 1 , U 2 , … , U d ) {\displaystyle (U_{1},U_{2},\dots ,U_{d})} : The copula C contains all information on the dependence structure between the components of ( X 1 , X 2 , … , X d ) {\displaystyle (X_{1},X_{2},\dots ,X_{d})} whereas the marginal cumulative distribution functions F i {\displaystyle F_{i}} contain all information on the marginal distributions of X i {\displaystyle X_{i}} . The reverse of these steps can be used to generate pseudo-random samples from general classes of multivariate probability distributions . That is, given a procedure to generate a sample ( U 1 , U 2 , … , U d ) {\displaystyle (U_{1},U_{2},\dots ,U_{d})} from the copula function, the required sample can be constructed as The generalized inverses F i − 1 {\displaystyle F_{i}^{-1}} are unproblematic almost surely , since the F i {\displaystyle F_{i}} were assumed to be continuous. Furthermore, the above formula for the copula function can be rewritten as: In probabilistic terms, C : [ 0 , 1 ] d → [ 0 , 1 ] {\displaystyle C:[0,1]^{d}\rightarrow [0,1]} is a d -dimensional copula if C is a joint cumulative distribution function of a d -dimensional random vector on the unit cube [ 0 , 1 ] d {\displaystyle [0,1]^{d}} with uniform marginals . 
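As a concrete illustration of the two-step sampling recipe just described (draw a sample from a copula, then apply the inverse marginal CDFs), here is a minimal Python sketch assuming NumPy and SciPy are available; the Gaussian copula, the correlation value, and the exponential and lognormal marginals are arbitrary choices made for the example, not part of the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]

# Step 1: a sample from the Gaussian copula, obtained by pushing a bivariate
# normal sample through the standard normal CDF (probability integral transform).
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = stats.norm.cdf(z)                      # each column is (approximately) Uniform(0, 1)

# Step 2: apply the inverse marginal CDFs (generalized inverses) to obtain the target marginals.
x1 = stats.expon(scale=2.0).ppf(u[:, 0])   # exponential marginal
x2 = stats.lognorm(s=0.5).ppf(u[:, 1])     # lognormal marginal

# The dependence structure (rank correlation) is unchanged by the marginal transforms.
rho_u, _ = stats.spearmanr(u[:, 0], u[:, 1])
rho_x, _ = stats.spearmanr(x1, x2)
print(rho_u, rho_x)                        # nearly identical values
```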
[ 4 ] In analytic terms, C : [ 0 , 1 ] d → [ 0 , 1 ] {\displaystyle C:[0,1]^{d}\rightarrow [0,1]} is a d -dimensional copula if For instance, in the bivariate case, C : [ 0 , 1 ] × [ 0 , 1 ] → [ 0 , 1 ] {\displaystyle C:[0,1]\times [0,1]\rightarrow [0,1]} is a bivariate copula if C ( 0 , u ) = C ( u , 0 ) = 0 {\displaystyle C(0,u)=C(u,0)=0} , C ( 1 , u ) = C ( u , 1 ) = u {\displaystyle C(1,u)=C(u,1)=u} and C ( u 2 , v 2 ) − C ( u 2 , v 1 ) − C ( u 1 , v 2 ) + C ( u 1 , v 1 ) ≥ 0 {\displaystyle C(u_{2},v_{2})-C(u_{2},v_{1})-C(u_{1},v_{2})+C(u_{1},v_{1})\geq 0} for all 0 ≤ u 1 ≤ u 2 ≤ 1 {\displaystyle 0\leq u_{1}\leq u_{2}\leq 1} and 0 ≤ v 1 ≤ v 2 ≤ 1 {\displaystyle 0\leq v_{1}\leq v_{2}\leq 1} . Sklar's theorem, named after Abe Sklar , provides the theoretical foundation for the application of copulas. [ 5 ] [ 6 ] Sklar's theorem states that every multivariate cumulative distribution function of a random vector ( X 1 , X 2 , … , X d ) {\displaystyle (X_{1},X_{2},\dots ,X_{d})} can be expressed in terms of its marginals F i ( x i ) = Pr [ X i ≤ x i ] {\displaystyle F_{i}(x_{i})=\Pr[X_{i}\leq x_{i}]} and a copula C {\displaystyle C} . Indeed: If the multivariate distribution has a density h {\displaystyle h} , and if this density is available, it also holds that where c {\displaystyle c} is the density of the copula. The theorem also states that, given H {\displaystyle H} , the copula is unique on Ran ⁡ ( F 1 ) × ⋯ × Ran ⁡ ( F d ) {\displaystyle \operatorname {Ran} (F_{1})\times \cdots \times \operatorname {Ran} (F_{d})} which is the cartesian product of the ranges of the marginal cdf's. This implies that the copula is unique if the marginals F i {\displaystyle F_{i}} are continuous. The converse is also true: given a copula C : [ 0 , 1 ] d → [ 0 , 1 ] {\displaystyle C:[0,1]^{d}\rightarrow [0,1]} and marginals F i ( x ) {\displaystyle F_{i}(x)} then C ( F 1 ( x 1 ) , … , F d ( x d ) ) {\displaystyle C\left(F_{1}(x_{1}),\dots ,F_{d}(x_{d})\right)} defines a d -dimensional cumulative distribution function with marginal distributions F i ( x ) {\displaystyle F_{i}(x)} . Copulas mainly work when time series are stationary [ 7 ] and continuous. [ 8 ] Thus, a very important pre-processing step is to check for the auto-correlation , trend and seasonality within time series. When time series are auto-correlated, they may generate a non existing dependence between sets of variables and result in incorrect copula dependence structure. [ 9 ] The Fréchet–Hoeffding theorem (after Maurice René Fréchet and Wassily Hoeffding [ 10 ] ) states that for any copula C : [ 0 , 1 ] d → [ 0 , 1 ] {\displaystyle C:[0,1]^{d}\rightarrow [0,1]} and any ( u 1 , … , u d ) ∈ [ 0 , 1 ] d {\displaystyle (u_{1},\dots ,u_{d})\in [0,1]^{d}} the following bounds hold: The function W is called lower Fréchet–Hoeffding bound and is defined as The function M is called upper Fréchet–Hoeffding bound and is defined as The upper bound is sharp: M is always a copula, it corresponds to comonotone random variables . The lower bound is point-wise sharp, in the sense that for fixed u , there is a copula C ~ {\displaystyle {\tilde {C}}} such that C ~ ( u ) = W ( u ) {\displaystyle {\tilde {C}}(u)=W(u)} . However, W is a copula only in two dimensions, in which case it corresponds to countermonotonic random variables. In two dimensions, i.e. the bivariate case, the Fréchet–Hoeffding theorem states Several families of copulas have been described. 
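A short numerical check (not part of the article) makes the bivariate Fréchet–Hoeffding bounds concrete; here the independence copula Π(u, v) = uv plays the role of C, and the grid resolution is an arbitrary choice.

```python
import numpy as np

def W(u, v):   # lower Fréchet–Hoeffding bound max(u + v - 1, 0); a copula only in two dimensions
    return np.maximum(u + v - 1.0, 0.0)

def M(u, v):   # upper Fréchet–Hoeffding bound min(u, v); the comonotonicity copula
    return np.minimum(u, v)

def Pi(u, v):  # independence copula, used here as the test copula C
    return u * v

grid = np.linspace(0.0, 1.0, 101)
U, V = np.meshgrid(grid, grid)
assert np.all(W(U, V) <= Pi(U, V) + 1e-12)
assert np.all(Pi(U, V) <= M(U, V) + 1e-12)
print("W <= Pi <= M holds at every grid point")
```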
The Gaussian copula is a distribution over the unit hypercube [ 0 , 1 ] d {\displaystyle [0,1]^{d}} . It is constructed from a multivariate normal distribution over R d {\displaystyle \mathbb {R} ^{d}} by using the probability integral transform . For a given correlation matrix R ∈ [ − 1 , 1 ] d × d {\displaystyle R\in [-1,1]^{d\times d}} , the Gaussian copula with parameter matrix R {\displaystyle R} can be written as where Φ − 1 {\displaystyle \Phi ^{-1}} is the inverse cumulative distribution function of a standard normal and Φ R {\displaystyle \Phi _{R}} is the joint cumulative distribution function of a multivariate normal distribution with mean vector zero and covariance matrix equal to the correlation matrix R {\displaystyle R} . While there is no simple analytical formula for the copula function, C R Gauss ( u ) {\displaystyle C_{R}^{\text{Gauss}}(u)} , it can be upper or lower bounded, and approximated using numerical integration. [ 11 ] [ 12 ] The density can be written as [ 13 ] where I {\displaystyle I} is the identity matrix. Archimedean copulas are an associative class of copulas. Most common Archimedean copulas admit an explicit formula, something not possible for instance for the Gaussian copula. In practice, Archimedean copulas are popular because they allow modeling dependence in arbitrarily high dimensions with only one parameter, governing the strength of dependence. A copula C is called Archimedean if it admits the representation [ 14 ] where ψ : [ 0 , 1 ] × Θ → [ 0 , ∞ ) {\displaystyle \psi \!:[0,1]\times \Theta \rightarrow [0,\infty )} is a continuous, strictly decreasing and convex function such that ψ ( 1 ; θ ) = 0 {\displaystyle \psi (1;\theta )=0} , θ {\displaystyle \theta } is a parameter within some parameter space Θ {\displaystyle \Theta } , and ψ {\displaystyle \psi } is the so-called generator function and ψ − 1 {\displaystyle \psi ^{-1}} is its pseudo-inverse defined by Moreover, the above formula for C yields a copula for ψ − 1 {\displaystyle \psi ^{-1}} if and only if ψ − 1 {\displaystyle \psi ^{-1}} is d-monotone on [ 0 , ∞ ) {\displaystyle [0,\infty )} . [ 15 ] That is, if it is d − 2 {\displaystyle d-2} times differentiable and the derivatives satisfy for all t ≥ 0 {\displaystyle t\geq 0} and k = 0 , 1 , … , d − 2 {\displaystyle k=0,1,\dots ,d-2} and ( − 1 ) d − 2 ψ − 1 , ( d − 2 ) ( t ; θ ) {\displaystyle (-1)^{d-2}\psi ^{-1,(d-2)}(t;\theta )} is nonincreasing and convex . The following tables highlight the most prominent bivariate Archimedean copulas, with their corresponding generator. Not all of them are completely monotone , i.e. d -monotone for all d ∈ N {\displaystyle d\in \mathbb {N} } or d -monotone for certain θ ∈ Θ {\displaystyle \theta \in \Theta } only. In statistical applications, many problems can be formulated in the following way. One is interested in the expectation of a response function g : R d → R {\displaystyle g:\mathbb {R} ^{d}\rightarrow \mathbb {R} } applied to some random vector ( X 1 , … , X d ) {\displaystyle (X_{1},\dots ,X_{d})} . [ 18 ] If we denote the CDF of this random vector with H {\displaystyle H} , the quantity of interest can thus be written as If H {\displaystyle H} is given by a copula model, i.e., this expectation can be rewritten as In case the copula C is absolutely continuous , i.e. 
C has a density c , this equation can be written as and if each marginal distribution has the density f i {\displaystyle f_{i}} it holds further that If copula and marginals are known (or if they have been estimated), this expectation can be approximated through the following Monte Carlo algorithm: When studying multivariate data, one might want to investigate the underlying copula. Suppose we have observations from a random vector ( X 1 , X 2 , … , X d ) {\displaystyle (X_{1},X_{2},\dots ,X_{d})} with continuous marginals. The corresponding “true” copula observations would be However, the marginal distribution functions F i {\displaystyle F_{i}} are usually not known. Therefore, one can construct pseudo copula observations by using the empirical distribution functions instead. Then, the pseudo copula observations are defined as The corresponding empirical copula is then defined as The components of the pseudo copula samples can also be written as U ~ k i = R k i / n {\displaystyle {\tilde {U}}_{k}^{i}=R_{k}^{i}/n} , where R k i {\displaystyle R_{k}^{i}} is the rank of the observation X k i {\displaystyle X_{k}^{i}} : Therefore, the empirical copula can be seen as the empirical distribution of the rank transformed data. The sample version of Spearman's rho: [ 19 ] In quantitative finance copulas are applied to risk management , to portfolio management and optimization , and to derivatives pricing . For the former, copulas are used to perform stress-tests and robustness checks that are especially important during "downside/crisis/panic regimes" where extreme downside events may occur (e.g., the 2008 financial crisis ). The formula was also adapted for financial markets and was used to estimate the probability distribution of losses on pools of loans or bonds . During a downside regime, a large number of investors who have held positions in riskier assets such as equities or real estate may seek refuge in 'safer' investments such as cash or bonds. This is also known as a flight-to-quality effect and investors tend to exit their positions in riskier assets in large numbers in a short period of time. As a result, during downside regimes, correlations across equities are greater on the downside as opposed to the upside and this may have disastrous effects on the economy. [ 22 ] [ 23 ] For example, anecdotally, we often read financial news headlines reporting the loss of hundreds of millions of dollars on the stock exchange in a single day; however, we rarely read reports of positive stock market gains of the same magnitude and in the same short time frame. Copulas aid in analyzing the effects of downside regimes by allowing the modelling of the marginals and dependence structure of a multivariate probability model separately. For example, consider the stock exchange as a market consisting of a large number of traders each operating with his/her own strategies to maximize profits. The individualistic behaviour of each trader can be described by modelling the marginals. However, as all traders operate on the same exchange, each trader's actions have an interaction effect with other traders'. This interaction effect can be described by modelling the dependence structure. Therefore, copulas allow us to analyse the interaction effects which are of particular interest during downside regimes as investors tend to herd their trading behaviour and decisions . 
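Returning to the empirical copula and the rank-based pseudo-observations defined earlier in this section, the construction translates directly into code. The following Python sketch (variable names and the simulated data are ad hoc choices) builds the pseudo-observations, evaluates the empirical copula at one point, and computes the sample Spearman's rho as the Pearson correlation of the rank vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)

# Ranks R_k^i of each observation within its own margin (1..n, assuming no ties),
# then pseudo copula observations U~_k^i = R_k^i / n.
ranks = x.argsort(axis=0).argsort(axis=0) + 1
pseudo = ranks / n

def empirical_copula(u, v, pseudo_obs):
    """C_n(u, v): fraction of pseudo-observations with both components <= (u, v)."""
    return np.mean((pseudo_obs[:, 0] <= u) & (pseudo_obs[:, 1] <= v))

print(empirical_copula(0.5, 0.5, pseudo))   # above 0.25 here, reflecting the positive dependence

# Sample Spearman's rho: the Pearson correlation coefficient of the rank vectors.
print(np.corrcoef(ranks[:, 0], ranks[:, 1])[0, 1])
```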
(See also agent-based computational economics , where price is treated as an emergent phenomenon , resulting from the interaction of the various market participants, or agents.) The users of the formula have been criticized for creating "evaluation cultures" that continued to use simple copulæ despite the simple versions being acknowledged as inadequate for that purpose. [ 24 ] [ 25 ] Thus, previously, scalable copula models for large dimensions only allowed the modelling of elliptical dependence structures (i.e., Gaussian and Student-t copulas) that do not allow for correlation asymmetries where correlations differ on the upside or downside regimes. However, the development of vine copulas [ 26 ] (also known as pair copulas) enables the flexible modelling of the dependence structure for portfolios of large dimensions. [ 27 ] The Clayton canonical vine copula allows for the occurrence of extreme downside events and has been successfully applied in portfolio optimization and risk management applications. The model is able to reduce the effects of extreme downside correlations and produces improved statistical and economic performance compared to scalable elliptical dependence copulas such as the Gaussian and Student-t copula. [ 28 ] Other models developed for risk management applications are panic copulas that are glued with market estimates of the marginal distributions to analyze the effects of panic regimes on the portfolio profit and loss distribution. Panic copulas are created by Monte Carlo simulation , mixed with a re-weighting of the probability of each scenario. [ 29 ] As regards derivatives pricing , dependence modelling with copula functions is widely used in applications of financial risk assessment and actuarial analysis – for example in the pricing of collateralized debt obligations (CDOs). [ 30 ] Some believe the methodology of applying the Gaussian copula to credit derivatives to be one of the causes of the 2008 financial crisis ; [ 31 ] [ 32 ] [ 33 ] see David X. Li § CDOs and Gaussian copula . Despite this perception, there are documented attempts within the financial industry, occurring before the crisis, to address the limitations of the Gaussian copula and of copula functions more generally, specifically the lack of dependence dynamics. The Gaussian copula is lacking as it only allows for an elliptical dependence structure, as dependence is only modeled using the variance-covariance matrix. [ 28 ] This methodology is limited such that it does not allow for dependence to evolve as the financial markets exhibit asymmetric dependence, whereby correlations across assets significantly increase during downturns compared to upturns. Therefore, modeling approaches using the Gaussian copula exhibit a poor representation of extreme events . [ 28 ] [ 34 ] There have been attempts to propose models rectifying some of the copula limitations. [ 34 ] [ 35 ] [ 36 ] Additional to CDOs, copulas have been applied to other asset classes as a flexible tool in analyzing multi-asset derivative products. The first such application outside credit was to use a copula to construct a basket implied volatility surface, [ 37 ] taking into account the volatility smile of basket components. Copulas have since gained popularity in pricing and risk management [ 38 ] of options on multi-assets in the presence of a volatility smile, in equity- , foreign exchange- and fixed income derivatives . 
Recently, copula functions have been successfully applied to the database formulation for the reliability analysis of highway bridges, and to various multivariate simulation studies in civil engineering, [ 39 ] reliability of wind and earthquake engineering, [ 40 ] and mechanical & offshore engineering. [ 41 ] Researchers are also trying these functions in the field of transportation to understand the interaction between behaviors of individual drivers which, in totality, shapes traffic flow. Copulas are being used for reliability analysis of complex systems of machine components with competing failure modes. [ 42 ] Copulas are being used for warranty data analysis in which the tail dependence is analysed. [ 43 ] Copulas are used in modelling turbulent partially premixed combustion, which is common in practical combustors. [ 44 ] [ 45 ] Copulæ have many applications in the area of medicine , for example, The combination of SSA and copula-based methods have been applied for the first time as a novel stochastic tool for Earth Orientation Parameters prediction. [ 60 ] [ 61 ] Copulas have been used in both theoretical and applied analyses of hydroclimatic data. Theoretical studies adopted the copula-based methodology for instance to gain a better understanding of the dependence structures of temperature and precipitation, in different parts of the world. [ 9 ] [ 62 ] [ 63 ] Applied studies adopted the copula-based methodology to examine e.g., agricultural droughts [ 64 ] or joint effects of temperature and precipitation extremes on vegetation growth. [ 65 ] Copulas have been extensively used in climate- and weather-related research. [ 66 ] [ 67 ] Copulas have been used to estimate the solar irradiance variability in spatial networks and temporally for single locations. [ 68 ] [ 69 ] Large synthetic traces of vectors and stationary time series can be generated using empirical copula while preserving the entire dependence structure of small datasets. [ 70 ] Such empirical traces are useful in various simulation-based performance studies. [ 71 ] Copulas have been used for quality ranking in the manufacturing of electronically commutated motors. [ 72 ] Copulas are important because they represent a dependence structure without using marginal distributions . Copulas have been widely used in the field of finance , but their use in signal processing is relatively new. Copulas have been employed in the field of wireless communication for classifying radar signals, change detection in remote sensing applications, and EEG signal processing in medicine . In this section, a short mathematical derivation to obtain copula density function followed by a table providing a list of copula density functions with the relevant signal processing applications are presented. Copulas have been used for determining the core radio luminosity function of Active galactic Nuclei (AGNs), [ 73 ] while this cannot be realized using traditional methods due to the difficulties in sample completeness. For any two random variables X and Y , the continuous joint probability distribution function can be written as where F X ( x ) = Pr { X ≤ x } {\textstyle F_{X}(x)=\Pr {\begin{Bmatrix}X\leq {x}\end{Bmatrix}}} and F Y ( y ) = Pr { Y ≤ y } {\textstyle F_{Y}(y)=\Pr {\begin{Bmatrix}Y\leq {y}\end{Bmatrix}}} are the marginal cumulative distribution functions of the random variables X and Y , respectively. 
then the copula distribution function C ( u , v ) {\displaystyle C(u,v)} can be defined using Sklar's theorem [ 74 ] [ 6 ] as: F X Y ( x , y ) = C ( u , v ) {\displaystyle F_{XY}(x,y)=C(u,v)} , where u = F X ( x ) {\displaystyle u=F_{X}(x)} and v = F Y ( y ) {\displaystyle v=F_{Y}(y)} are marginal distribution functions, F X Y ( x , y ) {\displaystyle F_{XY}(x,y)} is the joint distribution function, and u , v ∈ ( 0 , 1 ) {\displaystyle u,v\in (0,1)} . Assuming F X Y ( ⋅ , ⋅ ) {\displaystyle F_{XY}(\cdot ,\cdot )} is a.e. twice differentiable, the relationship between the joint probability density function (PDF) and the joint cumulative distribution function (CDF) via its partial derivatives gives f X Y ( x , y ) = c ( u , v ) f X ( x ) f Y ( y ) {\displaystyle f_{XY}(x,y)=c(u,v)f_{X}(x)f_{Y}(y)} , where c ( u , v ) {\displaystyle c(u,v)} is the copula density function, and f X ( x ) {\displaystyle f_{X}(x)} and f Y ( y ) {\displaystyle f_{Y}(y)} are the marginal probability density functions of X and Y , respectively. There are four elements in this equation, and if any three elements are known, the fourth element can be calculated; for example, the joint density may be obtained from the copula density together with the two marginal densities. Various bivariate copula density functions are important in the area of signal processing; in these, u = F X ( x ) {\displaystyle u=F_{X}(x)} and v = F Y ( y ) {\displaystyle v=F_{Y}(y)} are marginal distribution functions and f X ( x ) {\displaystyle f_{X}(x)} and f Y ( y ) {\displaystyle f_{Y}(y)} are marginal density functions. Extension and generalization of copulas for statistical signal processing have been shown to construct new bivariate copulas for exponential, Weibull, and Rician distributions. Zeng et al. [ 75 ] presented algorithms, simulation, optimal selection, and practical applications of these copulas in signal processing. Applications include validating biometric authentication, [ 77 ] modeling stochastic dependence in large-scale integration of wind power, [ 78 ] unsupervised classification of radar signals, [ 79 ] and fusion of correlated sensor decisions. [ 92 ]
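To make the density relationship concrete, here is a small numerical sketch (assuming SciPy is available; not from the article) that checks that the copula density is the mixed second partial derivative of the copula, using the Gaussian copula, whose density has the closed form c(u, v) = φ_R(Φ⁻¹(u), Φ⁻¹(v)) / (φ(Φ⁻¹(u)) φ(Φ⁻¹(v))).

```python
import numpy as np
from scipy import stats

rho = 0.6
mvn = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def C(u, v):
    """Gaussian copula CDF: the joint normal CDF evaluated at the normal quantiles."""
    return mvn.cdf([stats.norm.ppf(u), stats.norm.ppf(v)])

def copula_density_closed_form(u, v):
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    return mvn.pdf([x, y]) / (stats.norm.pdf(x) * stats.norm.pdf(y))

def copula_density_numerical(u, v, h=0.01):
    """Central finite difference for the mixed partial d^2 C / (du dv)."""
    return (C(u + h, v + h) - C(u + h, v - h)
            - C(u - h, v + h) + C(u - h, v - h)) / (4.0 * h * h)

u, v = 0.3, 0.7
print(copula_density_closed_form(u, v))
print(copula_density_numerical(u, v))   # agrees to a couple of decimals (the numerical CDF is itself approximate)
```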
https://en.wikipedia.org/wiki/Sklar's_theorem
In mathematics , specifically the field of algebra , Sklyanin algebras are a class of noncommutative algebra named after Evgeny Sklyanin . This class of algebras was first studied in the classification of Artin-Schelter regular [ 1 ] algebras of global dimension 3 in the 1980s. [ 2 ] Sklyanin algebras can be grouped into two different types, the non-degenerate Sklyanin algebras and the degenerate Sklyanin algebras, which have very different properties. A need to understand the non-degenerate Sklyanin algebras better has led to the development of the study of point modules in noncommutative geometry . [ 2 ] Let k {\displaystyle k} be a field with a primitive cube root of unity . Let D {\displaystyle {\mathfrak {D}}} be the following subset of the projective plane P k 2 {\displaystyle {\textbf {P}}_{k}^{2}} : D = { [ 1 : 0 : 0 ] , [ 0 : 1 : 0 ] , [ 0 : 0 : 1 ] } ⊔ { [ a : b : c ] | a 3 = b 3 = c 3 } . {\displaystyle {\mathfrak {D}}=\{[1:0:0],[0:1:0],[0:0:1]\}\sqcup \{[a:b:c]{\big |}a^{3}=b^{3}=c^{3}\}.} Each point [ a : b : c ] ∈ P k 2 {\displaystyle [a:b:c]\in {\textbf {P}}_{k}^{2}} gives rise to a (quadratic 3-dimensional) Sklyanin algebra, S a , b , c = k ⟨ x , y , z ⟩ / ( f 1 , f 2 , f 3 ) , {\displaystyle S_{a,b,c}=k\langle x,y,z\rangle /(f_{1},f_{2},f_{3}),} where, f 1 = a y z + b z y + c x 2 , f 2 = a z x + b x z + c y 2 , f 3 = a x y + b y x + c z 2 . {\displaystyle f_{1}=ayz+bzy+cx^{2},\quad f_{2}=azx+bxz+cy^{2},\quad f_{3}=axy+byx+cz^{2}.} Whenever [ a : b : c ] ∈ D {\displaystyle [a:b:c]\in {\mathfrak {D}}} we call S a , b , c {\displaystyle S_{a,b,c}} a degenerate Sklyanin algebra and whenever [ a : b : c ] ∈ P 2 ∖ D {\displaystyle [a:b:c]\in {\textbf {P}}^{2}\setminus {\mathfrak {D}}} we say the algebra is non-degenerate. [ 3 ] The non-degenerate case shares many properties with the commutative polynomial ring k [ x , y , z ] {\displaystyle k[x,y,z]} , whereas the degenerate case enjoys almost none of these properties. Generally the non-degenerate Sklyanin algebras are more challenging to understand than their degenerate counterparts. Let S deg {\displaystyle S_{\text{deg}}} be a degenerate Sklyanin algebra. Let S {\displaystyle S} be a non-degenerate Sklyanin algebra. The subset D {\displaystyle {\mathfrak {D}}} consists of 12 points on the projective plane , which give rise to 12 expressions of degenerate Sklyanin algebras. However, some of these are isomorphic and there exists a classification of degenerate Sklyanin algebras into two different cases. Let S deg = S a , b , c {\displaystyle S_{\text{deg}}=S_{a,b,c}} be a degenerate Sklyanin algebra. These two cases are Zhang twists of each other [ 3 ] and therefore have many properties in common. [ 7 ] The commutative polynomial ring k [ x , y , z ] {\displaystyle k[x,y,z]} is isomorphic to the non-degenerate Sklyanin algebra S 1 , − 1 , 0 = k ⟨ x , y , z ⟩ / ( x y − y x , y z − z y , z x − x z ) {\displaystyle S_{1,-1,0}=k\langle x,y,z\rangle /(xy-yx,yz-zy,zx-xz)} and is therefore an example of a non-degenerate Sklyanin algebra. The study of point modules is a useful tool which can be used much more widely than just for Sklyanin algebras. Point modules are a way of finding projective geometry in the underlying structure of noncommutative graded rings . Originally, the study of point modules was applied to show some of the properties of non-degenerate Sklyanin algebras. For example to find their Hilbert series and determine that non-degenerate Sklyanin algebras do not contain zero divisors . 
[ 2 ] Whenever a b c ≠ 0 {\displaystyle abc\neq 0} and ( a 3 + b 3 + c 3 3 a b c ) 3 ≠ 1 {\displaystyle \left({\frac {a^{3}+b^{3}+c^{3}}{3abc}}\right)^{3}\neq 1} in the definition of a non-degenerate Sklyanin algebra S = S a , b , c {\displaystyle S=S_{a,b,c}} , the point modules of S {\displaystyle S} are parametrised by an elliptic curve . [ 2 ] If the parameters a , b , c {\displaystyle a,b,c} do not satisfy those constraints, the point modules of any non-degenerate Sklyanin algebra are still parametrised by a closed projective variety on the projective plane . [ 8 ] If S {\displaystyle S} is a Sklyanin algebra whose point modules are parametrised by an elliptic curve , then there exists an element g ∈ S {\displaystyle g\in S} which annihilates all point modules i.e. M g = 0 {\displaystyle Mg=0} for all point modules M {\displaystyle M} of S {\displaystyle S} . The point modules of degenerate Sklyanin algebras are not parametrised by a projective variety . [ 4 ]
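A quick enumeration (a sketch over the complex numbers, not from the article) makes the count of the degenerate locus D explicit: besides the three coordinate points, a point [a : b : c] with a³ = b³ = c³ must have abc ≠ 0, so normalizing a = 1 forces b and c to be cube roots of unity, giving 9 further points and 12 in total.

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)      # a primitive cube root of unity
cube_roots_of_unity = [1, omega, omega ** 2]

coordinate_points = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
equal_cube_points = [(1, b, c) for b in cube_roots_of_unity for c in cube_roots_of_unity]

# Sanity check: every normalized point really satisfies a^3 = b^3 = c^3.
for a, b, c in equal_cube_points:
    assert abs(a ** 3 - b ** 3) < 1e-12 and abs(b ** 3 - c ** 3) < 1e-12

print(len(coordinate_points) + len(equal_cube_points))   # 12
```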
https://en.wikipedia.org/wiki/Sklyanin_algebra
The Skoda–El Mir theorem is a theorem of complex geometry , stated as follows: Theorem ( Skoda , [ 1 ] El Mir, [ 2 ] Sibony [ 3 ] ). Let X be a complex manifold , and E a closed complete pluripolar set in X . Consider a closed positive current Θ {\displaystyle \Theta } on X ∖ E {\displaystyle X\backslash E} which is locally integrable around E . Then the trivial extension of Θ {\displaystyle \Theta } to X is closed on X . This differential geometry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Skoda–El_Mir_theorem
In mathematical logic , Skolem arithmetic is the first-order theory of the natural numbers with multiplication , named in honor of Thoralf Skolem . The signature of Skolem arithmetic contains only the multiplication operation and equality, omitting the addition operation entirely. Skolem arithmetic is weaker than Peano arithmetic , which includes both addition and multiplication operations. [ 1 ] Unlike Peano arithmetic, Skolem arithmetic is a decidable theory . This means it is possible to effectively determine, for any sentence in the language of Skolem arithmetic, whether that sentence is provable from the axioms of Skolem arithmetic. The asymptotic running-time computational complexity of this decision problem is triply exponential. [ 2 ] We define the following abbreviations. The axioms of Skolem arithmetic are: [ 3 ] First-order logic with equality and multiplication of positive integers can express the relation c = a ⋅ b {\displaystyle c=a\cdot b} . Using this relation and equality, we can define the following relations on positive integers: The truth value of formulas of Skolem arithmetic can be reduced to the truth value of sequences of non-negative integers constituting their prime factor decomposition, with multiplication becoming point-wise addition of sequences. The decidability then follows from the Feferman–Vaught theorem that can be shown using quantifier elimination . Another way of stating this is that first-order theory of positive integers is isomorphic to the first-order theory of finite multisets of non-negative integers with the multiset sum operation, whose decidability reduces to the decidability of the theory of elements. In more detail, according to the fundamental theorem of arithmetic , a positive integer a > 1 {\displaystyle a>1} can be represented as a product of prime powers: If a prime number p k {\displaystyle p_{k}} does not appear as a factor, we define its exponent a k {\displaystyle a_{k}} to be zero. Thus, only finitely many exponents are non-zero in the infinite sequence a 1 , a 2 , … {\displaystyle a_{1},a_{2},\ldots } . Denote such sequences of non-negative integers by N ∗ {\displaystyle N^{*}} . Now consider the decomposition of another positive number, The multiplication a b {\displaystyle ab} corresponds point-wise addition of the exponents: Define the corresponding point-wise addition on sequences by: Thus we have an isomorphism between the structure of positive integers with multiplication, ( N , ⋅ ) {\displaystyle (N,\cdot )} and of point-wise addition of the sequences of non-negative integers in which only finitely many elements are non-zero, ( N ∗ , + ¯ ) {\displaystyle (N^{*},{\bar {+}})} . From Feferman–Vaught theorem for first-order logic , the truth value of a first-order logic formula over sequences and pointwise addition on them reduces, in an algorithmic way, to the truth value of formulas in the theory of elements of the sequence with addition, which, in this case, is Presburger arithmetic . Because Presburger arithmetic is decidable, Skolem arithmetic is also decidable. Ferrante & Rackoff (1979 , Chapter 5) establish, using Ehrenfeucht–Fraïssé games , a method to prove upper bounds on decision problem complexity of weak direct powers of theories. They apply this method to obtain triply exponential space complexity for ( N ∗ , + ¯ ) {\displaystyle (N^{*},{\bar {+}})} , and thus of Skolem arithmetic. 
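The isomorphism sketched above is easy to demonstrate programmatically. The following Python snippet (using SymPy for factorization; the function names are ad hoc) maps a positive integer to its sequence of prime exponents and checks that multiplication of integers becomes point-wise addition of those sequences, and that divisibility becomes component-wise comparison of exponents.

```python
from sympy import factorint, prime

def exponent_vector(n, num_primes=10):
    """Exponents of the first num_primes primes in the factorization of n (zeros included)."""
    factors = factorint(n)
    return [factors.get(prime(k + 1), 0) for k in range(num_primes)]

def pointwise_add(u, v):
    return [a + b for a, b in zip(u, v)]

a, b = 12, 45                                   # 12 = 2^2 * 3,  45 = 3^2 * 5
print(exponent_vector(a))                       # [2, 1, 0, ...]
print(exponent_vector(b))                       # [0, 2, 1, ...]
print(pointwise_add(exponent_vector(a), exponent_vector(b)) == exponent_vector(a * b))  # True

# Divisibility is expressible too: a | b iff every exponent of a is <= the corresponding one of b.
print(all(x <= y for x, y in zip(exponent_vector(12), exponent_vector(360))))           # True
```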
Grädel (1989 , Section 5) proves that the satisfiability problem for the quantifier-free fragment of Skolem arithmetic belongs to the NP complexity class . Thanks to the above reduction using Feferman–Vaught theorem, we can obtain first-order theories whose open formulas define a larger set of relations if we strengthen the theory of multisets of prime factors. For example, consider the relation a ∼ b {\displaystyle a\sim b} that is true if and only if a {\displaystyle a} and b {\displaystyle b} have the equal number of distinct prime factors: For example, 2 10 ⋅ 3 100 ∼ 5 8 ⋅ 19 9 {\displaystyle 2^{10}\cdot 3^{100}\sim 5^{8}\cdot 19^{9}} because both sides denote a number that has two distinct prime factors. If we add the relation ∼ {\displaystyle \sim } to Skolem arithmetic, it remains decidable. This is because the theory of sets of indices remains decidable in the presence of the equinumerosity operator on sets, as shown by the Feferman–Vaught theorem . An extension of Skolem arithmetic with the successor predicate, s u c c ( n ) = n + 1 {\displaystyle succ(n)=n+1} can define the addition relation using Tarski's identity: [ 4 ] [ 5 ] and defining the relation c = a + b {\displaystyle c=a+b} on positive integers by Because it can express both multiplication and addition, the resulting theory is undecidable. If we have an ordering predicate on natural numbers (less than, < {\displaystyle <} ), we can express s u c c {\displaystyle \mathrm {succ} } by so the extension with < {\displaystyle <} is also undecidable.
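The identity itself is not reproduced in the text above, so the version used below is an assumption: one standard form attributed to Tarski characterizes a + b = c over the positive integers, using only multiplication and successor, by succ(a·c)·succ(b·c) = succ(c·c·succ(a·b)). Expanding both sides gives abc² + c(a+b) + 1 and abc² + c² + 1, which agree exactly when a + b = c (for c ≥ 1). A brute-force check over a small range:

```python
def succ(n):
    return n + 1

def addition_holds_by_identity(a, b, c):
    # Assumed form of Tarski's identity: only multiplication and successor are used.
    return succ(a * c) * succ(b * c) == succ(c * c * succ(a * b))

ok = all(
    addition_holds_by_identity(a, b, c) == (a + b == c)
    for a in range(1, 30)
    for b in range(1, 30)
    for c in range(1, 60)
)
print(ok)   # True on this range
```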
https://en.wikipedia.org/wiki/Skolem_arithmetic
In mathematical logic , a formula of first-order logic is in Skolem normal form if it is in prenex normal form with only universal first-order quantifiers . Every first-order formula may be converted into Skolem normal form while not changing its satisfiability via a process called Skolemization (sometimes spelled Skolemnization ). The resulting formula is not necessarily equivalent to the original one, but is equisatisfiable with it: it is satisfiable if and only if the original one is satisfiable. [ 1 ] Reduction to Skolem normal form is a method for removing existential quantifiers from formal logic statements, often performed as the first step in an automated theorem prover . The simplest form of Skolemization is for existentially quantified variables that are not inside the scope of a universal quantifier. These may be replaced simply by creating new constants. For example, ∃ x P ( x ) {\displaystyle \exists xP(x)} may be changed to P ( c ) {\displaystyle P(c)} , where c {\displaystyle c} is a new constant (does not occur anywhere else in the formula). More generally, Skolemization is performed by replacing every existentially quantified variable y {\displaystyle y} with a term f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} whose function symbol f {\displaystyle f} is new. The variables of this term are as follows. If the formula is in prenex normal form , then x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the variables that are universally quantified and whose quantifiers precede that of y {\displaystyle y} . In general, they are the variables that are quantified universally (we assume we get rid of existential quantifiers in order, so all existential quantifiers before ∃ y {\displaystyle \exists y} have been removed) and such that ∃ y {\displaystyle \exists y} occurs in the scope of their quantifiers. The function f {\displaystyle f} introduced in this process is called a Skolem function (or Skolem constant if it is of zero arity ) and the term is called a Skolem term . As an example, the formula ∀ x ∃ y ∀ z P ( x , y , z ) {\displaystyle \forall x\exists y\forall zP(x,y,z)} is not in Skolem normal form because it contains the existential quantifier ∃ y {\displaystyle \exists y} . Skolemization replaces y {\displaystyle y} with f ( x ) {\displaystyle f(x)} , where f {\displaystyle f} is a new function symbol, and removes the quantification over y {\displaystyle y} . The resulting formula is ∀ x ∀ z P ( x , f ( x ) , z ) {\displaystyle \forall x\forall zP(x,f(x),z)} . The Skolem term f ( x ) {\displaystyle f(x)} contains x {\displaystyle x} , but not z {\displaystyle z} , because the quantifier to be removed ∃ y {\displaystyle \exists y} is in the scope of ∀ x {\displaystyle \forall x} , but not in that of ∀ z {\displaystyle \forall z} ; since this formula is in prenex normal form, this is equivalent to saying that, in the list of quantifiers, x {\displaystyle x} precedes y {\displaystyle y} while z {\displaystyle z} does not. The formula obtained by this transformation is satisfiable if and only if the original formula is. Skolemization works by applying a second-order equivalence together with the definition of first-order satisfiability. The equivalence provides a way for "moving" an existential quantifier before a universal one. 
where Intuitively, the sentence "for every x {\displaystyle x} there exists a y {\displaystyle y} such that R ( x , y ) {\displaystyle R(x,y)} " is converted into the equivalent form "there exists a function f {\displaystyle f} mapping every x {\displaystyle x} into a y {\displaystyle y} such that, for every x {\displaystyle x} it holds that R ( x , f ( x ) ) {\displaystyle R(x,f(x))} ". This equivalence is useful because the definition of first-order satisfiability implicitly existentially quantifies over functions interpreting the function symbols. In particular, a first-order formula Φ {\displaystyle \Phi } is satisfiable if there exists a model M {\displaystyle M} and an evaluation μ {\displaystyle \mu } of the free variables of the formula that evaluate the formula to true . The model contains the interpretation of all function symbols; therefore, Skolem functions are implicitly existentially quantified. In the example above, ∀ x R ( x , f ( x ) ) {\displaystyle \forall xR(x,f(x))} is satisfiable if and only if there exists a model M {\displaystyle M} , which contains an interpretation for f {\displaystyle f} , such that ∀ x R ( x , f ( x ) ) {\displaystyle \forall xR(x,f(x))} is true for some evaluation of its free variables (none in this case). This may be expressed in second order as ∃ f ∀ x R ( x , f ( x ) ) {\displaystyle \exists f\forall xR(x,f(x))} . By the above equivalence, this is the same as the satisfiability of ∀ x ∃ y R ( x , y ) {\displaystyle \forall x\exists yR(x,y)} . At the meta-level, first-order satisfiability of a formula Φ {\displaystyle \Phi } may be written with a little abuse of notation as ∃ M ∃ μ ( M , μ ⊨ Φ ) {\displaystyle \exists M\exists \mu (M,\mu \models \Phi )} , where M {\displaystyle M} is a model, μ {\displaystyle \mu } is an evaluation of the free variables, and ⊨ {\displaystyle \models } means that Φ {\displaystyle \Phi } is true in M {\displaystyle M} under μ {\displaystyle \mu } . Since first-order models contain the interpretation of all function symbols, any Skolem function that Φ {\displaystyle \Phi } contains is implicitly existentially quantified by ∃ M {\displaystyle \exists M} . As a result, after replacing existential quantifiers over variables by existential quantifiers over functions at the front of the formula, the formula still may be treated as a first-order one by removing these existential quantifiers. This final step of treating ∃ f ∀ x R ( x , f ( x ) ) {\displaystyle \exists f\forall xR(x,f(x))} as ∀ x R ( x , f ( x ) ) {\displaystyle \forall xR(x,f(x))} may be completed because functions are implicitly existentially quantified by ∃ M {\displaystyle \exists M} in the definition of first-order satisfiability. Correctness of Skolemization may be shown on the example formula F 1 = ∀ x 1 … ∀ x n ∃ y R ( x 1 , … , x n , y ) {\displaystyle F_{1}=\forall x_{1}\dots \forall x_{n}\exists yR(x_{1},\dots ,x_{n},y)} as follows. This formula is satisfied by a model M {\displaystyle M} if and only if, for each possible value for x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} in the domain of the model, there exists a value for y {\displaystyle y} in the domain of the model that makes R ( x 1 , … , x n , y ) {\displaystyle R(x_{1},\dots ,x_{n},y)} true. By the axiom of choice , there exists a function f {\displaystyle f} such that y = f ( x 1 , … , x n ) {\displaystyle y=f(x_{1},\dots ,x_{n})} . 
As a result, the formula F 2 = ∀ x 1 … ∀ x n R ( x 1 , … , x n , f ( x 1 , … , x n ) ) {\displaystyle F_{2}=\forall x_{1}\dots \forall x_{n}R(x_{1},\dots ,x_{n},f(x_{1},\dots ,x_{n}))} is satisfiable, because it has the model obtained by adding the interpretation of f {\displaystyle f} to M {\displaystyle M} . This shows that F 1 {\displaystyle F_{1}} is satisfiable only if F 2 {\displaystyle F_{2}} is satisfiable as well. Conversely, if F 2 {\displaystyle F_{2}} is satisfiable, then there exists a model M ′ {\displaystyle M'} that satisfies it; this model includes an interpretation for the function f {\displaystyle f} such that, for every value of x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} , the formula R ( x 1 , … , x n , f ( x 1 , … , x n ) ) {\displaystyle R(x_{1},\dots ,x_{n},f(x_{1},\dots ,x_{n}))} holds. As a result, F 1 {\displaystyle F_{1}} is satisfied by the same model because one may choose, for every value of x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} , the value y = f ( x 1 , … , x n ) {\displaystyle y=f(x_{1},\dots ,x_{n})} , where f {\displaystyle f} is evaluated according to M ′ {\displaystyle M'} . One of the uses of Skolemization is within automated theorem proving . For example, in the method of analytic tableaux , whenever a formula whose leading quantifier is existential occurs, the formula obtained by removing that quantifier via Skolemization may be generated. For example, if ∃ x Φ ( x , y 1 , … , y n ) {\displaystyle \exists x\Phi (x,y_{1},\ldots ,y_{n})} occurs in a tableau, where x , y 1 , … , y n {\displaystyle x,y_{1},\ldots ,y_{n}} are the free variables of Φ ( x , y 1 , … , y n ) {\displaystyle \Phi (x,y_{1},\ldots ,y_{n})} , then Φ ( f ( y 1 , … , y n ) , y 1 , … , y n ) {\displaystyle \Phi (f(y_{1},\ldots ,y_{n}),y_{1},\ldots ,y_{n})} may be added to the same branch of the tableau. This addition does not alter the satisfiability of the tableau: every model of the old formula may be extended, by adding a suitable interpretation of f {\displaystyle f} , to a model of the new formula. This form of Skolemization is an improvement over "classical" Skolemization in that only variables that are free in the formula are placed in the Skolem term. This is an improvement because the semantics of tableaux may implicitly place the formula in the scope of some universally quantified variables that are not in the formula itself; these variables are not in the Skolem term, while they would be there according to the original definition of Skolemization. Another improvement that may be used is applying the same Skolem function symbol for formulae that are identical up to variable renaming. [ 2 ] Another use is in the resolution method for first-order logic , where formulas are represented as sets of clauses understood to be universally quantified. (For an example see drinker paradox .) An important result in model theory is the Löwenheim–Skolem theorem , which can be proven via Skolemizing the theory and closing under the resulting Skolem functions. [ 3 ] In general, if T {\displaystyle T} is a theory and for each formula with free variables x 1 , … , x n , y {\displaystyle x_{1},\dots ,x_{n},y} there is an n -ary function symbol F {\displaystyle F} that is provably a Skolem function for y {\displaystyle y} , then T {\displaystyle T} is called a Skolem theory . [ 4 ] Every Skolem theory is model complete , i.e. every substructure of a model is an elementary substructure . 
Given a model M of a Skolem theory T , the smallest substructure of M containing a certain set A is called the Skolem hull of A . The Skolem hull of A is an atomic prime model over A . Skolem normal form is named after the late Norwegian mathematician Thoralf Skolem .
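As a small illustration of the transformation itself, the following Python sketch (an added example, not part of the article; the prenex encoding, the tuple term representation and the fresh-name scheme are ad hoc choices) Skolemizes a formula given in prenex normal form by replacing each existentially quantified variable with a fresh function symbol applied to the universally quantified variables that precede it, and dropping the existential quantifier.

```python
from itertools import count

# Skolemize a first-order formula in prenex normal form.
# prefix: list of (quantifier, variable) pairs, e.g. [("forall", "x"), ("exists", "y")]
# matrix: a term tree built from tuples, e.g. ("R", "x", "y") meaning R(x, y)
# Each existential variable is mapped to a fresh Skolem term f_i(u1, ..., uk)
# over the universal variables seen so far; its quantifier is then dropped.

def skolemize(prefix, matrix):
    fresh_names = (f"f{i}" for i in count(1))
    universals, substitution = [], {}
    for quantifier, variable in prefix:
        if quantifier == "forall":
            universals.append(variable)
        else:  # "exists"
            substitution[variable] = (next(fresh_names), *universals)
    return [("forall", v) for v in universals], substitute(matrix, substitution)

def substitute(term, substitution):
    if isinstance(term, str):                      # a variable or constant
        return substitution.get(term, term)
    head, *args = term                             # a predicate or function application
    return (head, *(substitute(arg, substitution) for arg in args))

# forall x exists y R(x, y)   ==>   forall x R(x, f1(x))
print(skolemize([("forall", "x"), ("exists", "y")], ("R", "x", "y")))
```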
https://en.wikipedia.org/wiki/Skolem_normal_form
In additive and algebraic number theory , the Skolem–Mahler–Lech theorem states that if a sequence of numbers satisfies a linear difference equation , then with finitely many exceptions the positions at which the sequence is zero form a regularly repeating pattern. This result is named after Thoralf Skolem (who proved the theorem for sequences of rational numbers ), Kurt Mahler (who proved it for sequences of algebraic numbers ), and Christer Lech (who proved it for sequences whose elements belong to any field of characteristic 0). Its known proofs use p -adic analysis and are non-constructive . Let s ( n ) n ≥ 0 {\displaystyle s(n)_{n\geq 0}} be a sequence of complex numbers satisfying s ( n ) = c 1 s ( n − 1 ) + c 2 s ( n − 2 ) + ⋯ + c d s ( n − d ) {\displaystyle s(n)=c_{1}s(n-1)+c_{2}s(n-2)+\cdots +c_{d}s(n-d)} for all n ≥ d {\displaystyle n\geq d} , where the c i {\displaystyle c_{i}} are complex constants (i.e., a constant-recursive sequence of order d {\displaystyle d} ). Then the set of zeros { n ∣ s ( n ) = 0 } {\displaystyle \{n\mid s(n)=0\}} is equal to the union of a finite set and finitely many arithmetic progressions . If c d ≠ 0 {\displaystyle c_{d}\neq 0} (excluding sequences such as 1, 0, 0, 0, ...), then the set of zeros is in fact equal to the union of a finite set and finitely many full arithmetic progressions, where an infinite arithmetic progression is full if there exist integers a and b such that the progression consists of all positive integers equal to b modulo a . Consider the sequence that alternates between zeros and the Fibonacci numbers . This sequence can be generated by the linear recurrence relation F ( i ) = F ( i − 2 ) + F ( i − 4 ) {\displaystyle F(i)=F(i-2)+F(i-4)} (a modified form of the Fibonacci recurrence), starting from the base cases F (1) = F (2) = F (4) = 0 and F (3) = 1. For this sequence, F ( i ) = 0 if and only if i is either one or even. Thus, the positions at which the sequence is zero can be partitioned into a finite set (the singleton set {1}) and a full arithmetic progression (the positive even numbers ). In this example, only one arithmetic progression was needed, but other recurrence sequences may have zeros at positions forming multiple arithmetic progressions. The Skolem problem is the problem of determining whether a given recurrence sequence has a zero. There exists an algorithm to test whether there are infinitely many zeros, [ 1 ] and, if so, to find the decomposition of these zeros into periodic sets guaranteed to exist by the Skolem–Mahler–Lech theorem. However, it is unknown whether there exists an algorithm to determine whether a recurrence sequence has any non-periodic zeros. [ 2 ]
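The interleaved-Fibonacci example above is easy to check by direct computation. The following minimal Python sketch (an illustration added here, not part of the article; the function name and the range checked are arbitrary choices) iterates the recurrence F(i) = F(i−2) + F(i−4) and confirms that, over the prefix examined, the zero positions are exactly the singleton {1} together with the even indices, i.e. the finite-set-plus-full-progression decomposition that the theorem guarantees.

```python
# Verify the zero pattern of the recurrence F(i) = F(i-2) + F(i-4)
# with base cases F(1) = F(2) = F(4) = 0 and F(3) = 1 (1-indexed).
# Illustrative sketch only: it checks a finite prefix, whereas the
# Skolem-Mahler-Lech theorem is a statement about all indices.

def interleaved_fibonacci(n_max):
    """Return a dict {i: F(i)} for 1 <= i <= n_max."""
    f = {1: 0, 2: 0, 3: 1, 4: 0}
    for i in range(5, n_max + 1):
        f[i] = f[i - 2] + f[i - 4]
    return f

f = interleaved_fibonacci(40)
zeros = {i for i, value in f.items() if value == 0}
expected = {1} | {i for i in range(1, 41) if i % 2 == 0}

print(sorted(zeros))          # 1, 2, 4, 6, 8, ... : the finite set {1} plus the even numbers
assert zeros == expected      # matches the decomposition predicted by the theorem
```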
https://en.wikipedia.org/wiki/Skolem–Mahler–Lech_theorem
In mathematics and probability theory , Skorokhod's embedding theorem is either or both of two theorems that allow one to regard any suitable collection of random variables as a Wiener process ( Brownian motion ) evaluated at a collection of stopping times . Both results are named for the Ukrainian mathematician A. V. Skorokhod . Let X be a real -valued random variable with expected value 0 and finite variance ; let W denote a canonical real-valued Wiener process. Then there is a stopping time (with respect to the natural filtration of W ), τ , such that W τ has the same distribution as X , and E ( τ ) = E ( X 2 ) {\displaystyle \operatorname {E} (\tau )=\operatorname {E} (X^{2})} . Let X 1 , X 2 , ... be a sequence of independent and identically distributed random variables , each with expected value 0 and finite variance, and let S n = X 1 + ⋯ + X n {\displaystyle S_{n}=X_{1}+\cdots +X_{n}} . Then there is a sequence of stopping times τ 1 ≤ τ 2 ≤ ... such that the W τ n {\displaystyle W_{\tau _{n}}} have the same joint distributions as the partial sums S n and τ 1 , τ 2 − τ 1 , τ 3 − τ 2 , ... are independent and identically distributed random variables satisfying E ( τ 1 ) = E ( X 1 2 ) {\displaystyle \operatorname {E} (\tau _{1})=\operatorname {E} (X_{1}^{2})} and E ( τ 1 2 ) ≤ 4 E ( X 1 4 ) {\displaystyle \operatorname {E} (\tau _{1}^{2})\leq 4\operatorname {E} (X_{1}^{4})} .
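A concrete instance of the first theorem: if X takes only the two values a < 0 < b with P(X = b) = −a/(b − a) (so that E(X) = 0), then the first time a Brownian path leaves the interval (a, b) is a stopping time that embeds X, and E(τ) = E(X²) = −ab. The Python sketch below (an added illustration, not part of the article; the interval, step size and sample count are arbitrary choices, and the discretised walk only approximates Brownian motion) checks both facts by simulation.

```python
import random

# Skorokhod embedding of a centred two-point distribution:
#   X = b with probability -a/(b - a), otherwise X = a   (a < 0 < b, E[X] = 0).
# Taking tau = first exit time of W from (a, b) gives W_tau ~ X and
#   E[tau] = E[X^2] = -a*b.
# Brownian motion is approximated by a Gaussian random walk with time step dt.

a, b = -1.0, 2.0
dt = 1e-3
n_paths = 2_000

hits_b, total_tau = 0, 0.0
random.seed(1)
for _ in range(n_paths):
    w, t = 0.0, 0.0
    while a < w < b:
        w += random.gauss(0.0, dt ** 0.5)   # increment ~ N(0, dt)
        t += dt
    hits_b += w >= b
    total_tau += t

print("P(W_tau = b) ~", hits_b / n_paths, " theory:", -a / (b - a))       # about 1/3
print("E[tau]       ~", total_tau / n_paths, " theory -a*b =", -a * b)    # about 2.0
```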
https://en.wikipedia.org/wiki/Skorokhod's_embedding_theorem
In mathematics and statistics , Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distribution/law of a pointwise convergent sequence of random variables defined on a common probability space . It is named for the Ukrainian mathematician A. V. Skorokhod . Let ( μ n ) n ∈ N {\displaystyle (\mu _{n})_{n\in \mathbb {N} }} be a sequence of probability measures on a metric space S {\displaystyle S} such that μ n {\displaystyle \mu _{n}} converges weakly to some probability measure μ ∞ {\displaystyle \mu _{\infty }} on S {\displaystyle S} as n → ∞ {\displaystyle n\to \infty } . Suppose also that the support of μ ∞ {\displaystyle \mu _{\infty }} is separable . Then there exist S {\displaystyle S} -valued random variables X n {\displaystyle X_{n}} defined on a common probability space ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbf {P} )} such that the law of X n {\displaystyle X_{n}} is μ n {\displaystyle \mu _{n}} for all n {\displaystyle n} (including n = ∞ {\displaystyle n=\infty } ) and such that ( X n ) n ∈ N {\displaystyle (X_{n})_{n\in \mathbb {N} }} converges to X ∞ {\displaystyle X_{\infty }} , P {\displaystyle \mathbf {P} } -almost surely.
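On the real line the representation can be realised explicitly by quantile coupling: a single uniform random variable U on (0, 1) serves as the common probability space, and X n = F n −1 ( U ), the quantile function of μ n evaluated at U , has law μ n while converging almost surely whenever μ n converges weakly to μ ∞ . The Python sketch below (an added illustration, not part of the article; the choice of normal distributions N(1/n, 1) converging to N(0, 1) and the sample points are arbitrary) demonstrates this coupling.

```python
import random
from statistics import NormalDist

# Skorokhod-style representation on the real line via quantile coupling:
# one uniform variable U drives every X_n = F_n^{-1}(U), so X_n has law
# mu_n = N(1/n, 1) while converging pointwise to X_inf with law N(0, 1).

random.seed(0)
u_points = [random.random() for _ in range(5)]   # sample points of the common space

def x_n(n, u):
    return NormalDist(mu=1.0 / n, sigma=1.0).inv_cdf(u)

def x_inf(u):
    return NormalDist(mu=0.0, sigma=1.0).inv_cdf(u)

for n in (1, 10, 100, 1000):
    gap = max(abs(x_n(n, u) - x_inf(u)) for u in u_points)
    print(f"n = {n:4d}: max |X_n - X_inf| over the sample points = {gap:.4f}")
    # here the gap is exactly 1/n, since F_n^{-1}(u) = 1/n + F_inf^{-1}(u)
```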
https://en.wikipedia.org/wiki/Skorokhod's_representation_theorem
The Skraup synthesis is a chemical reaction used to synthesize quinolines . It is named after the Czech chemist Zdenko Hans Skraup (1850–1910). In the archetypal Skraup reaction, aniline is heated with sulfuric acid , glycerol , and an oxidizing agent such as nitrobenzene to yield quinoline. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In this example, nitrobenzene serves as both the solvent and the oxidizing agent. The reaction, which otherwise has a reputation for being violent, is typically conducted in the presence of ferrous sulfate . [ 5 ] Arsenic acid may be used in place of nitrobenzene; it is preferable because the reaction is then less violent. [ 6 ]
https://en.wikipedia.org/wiki/Skraup_reaction
A skull and crossbones is a symbol consisting of a human skull and two long bones crossed together under or behind the skull. [ 1 ] The design originated in the Late Middle Ages as a symbol of death and especially as a memento mori on tombstones. Actual skulls and bones were long used to mark the entrances to Spanish cemeteries (campo santo) . In modern contexts, it is generally used as a hazard symbol , usually in regard to poisonous substances, such as deadly chemicals. [ 1 ] It is also associated with piracy and software piracy , due to its historical use in some Jolly Roger flags. The skull and bones are often used in military insignia, such as the coats of arms of regiments . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Since the mid-18th century, skull and crossbones insignia have been used officially in European armies as symbols of superiority. One of the first regiments to adopt it was Frederick the Great 's Hussars, raised in 1741 and also known as the " Totenkopfhusaren ". From this tradition, the skull became an important emblem in the German army. Identical insignia were used in the Prussian army and, after the First World War, by the Freikorps , as well as in Nazi Germany by the Wehrmacht and the SS . The idea of elitism symbolized by the skull and crossbones has influenced sub- and pop culture and has become part of the fashion industry. [ 7 ] The skull and crossbones has long been a standard symbol for poison . In 1829, New York State required the labeling of all containers of poisonous substances. [ 8 ] The skull and crossbones symbol appears to have been used for that purpose since the 1850s. Previously a variety of motifs had been used, including the Danish "+ + +" and drawings of skeletons . [ 9 ] In the 1870s poison manufacturers around the world began using bright cobalt bottles with a variety of raised bumps and designs (to enable easy recognition in the dark) to indicate poison, [ 10 ] but by the 1880s the skull and crossbones had become ubiquitous, and the brightly coloured bottles lost their association. [ 11 ] In the United States, due to concerns that the skull-and-crossbones symbol's association with pirates might encourage children to play with toxic materials, the Mr. Yuk symbol was created to denote poison. However, in 2001, the American Association of Poison Control Centers voted to continue to require the skull and crossbones symbol. [ 11 ]
https://en.wikipedia.org/wiki/Skull_and_crossbones
The skull crucible process was developed at the Lebedev Physical Institute in Moscow to manufacture cubic zirconia . [ 1 ] It was invented to solve the problem of cubic zirconia's melting-point being too high for even platinum crucibles . In essence, by heating only the center of a volume of cubic zirconia, the material forms its own "crucible" from its cooler outer layers. The term "skull" refers to these outer layers forming a shell enclosing the molten volume. Zirconium oxide powder is heated then gradually allowed to cool. Heating is accomplished by radio frequency induction using a coil wrapped around the apparatus. The outside of the device is water-cooled in order to keep the radio frequency coil from melting and also to cool the outside of the zirconium oxide and thus maintain the shape of the zirconium powder. Since zirconium oxide in its solid state does not conduct electricity, a piece of zirconium metal is placed inside the gob of zirconium oxide. As the zirconium melts it oxidizes and blends with the now molten zirconium oxide, a conductor , and is heated by radio frequency induction. When the zirconium oxide is melted on the inside (but not completely, since the outside needs to remain solid) the amplitude of the RF induction coil is gradually reduced and crystals form as the material cools. Normally this would form a monoclinic crystal system of zirconium oxide. In order to maintain a cubic crystal system a stabilizer is added, magnesium oxide , calcium oxide or yttrium oxide as well as any material to color the crystal. After the mixture cools the outer shell is broken off and the interior of the gob is then used to manufacture gemstones .
https://en.wikipedia.org/wiki/Skull_crucible
SkyGrabber is software from the Russian company SkySoftware which accepts input from a digital satellite tuner card and records it to a hard drive. It was used by Iraqi insurgents from the group Kata'ib Hezbollah to intercept MQ-1 Predator drone video feeds, which were not encrypted. [ 1 ] [ 2 ] The encryption for the feeds was removed for performance reasons. [ 3 ] This software article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/SkyGrabber
The Celestron SkyScout was a model of handheld consumer electronic instrument for astronomical orientation and education, similar to the competitor product mySky by Meade Instruments . The general class of zero-magnification sky-orientation scopes using modern geodesy was made possible by the commercialisation of GPS and other GNSS systems in the early 21st century, and the SkyScout was an early example of such use despite being hampered by technical and price limitations. The SkyScout was a handheld, battery powered device about 7.4" x 4.0" x 2.5", and weighing about 1 pound. It had a viewing port, a 3" x 1" LCD display on the side and several buttons for controlling and selecting device functions. The SkyScout had a 12 channel GPS receiver and orientation sensors (whose accuracy was sensitive to proximity to metal objects, indicated by a horseshoe magnet icon on the display) that measured location and pointing angle. The LCD screen (known to scratch easily [ citation needed ] ) displayed the name of the object (star, planet, deep sky object, etc.) and other relevant data. An audio presentation was available via earphones on 200 of the most popular celestial objects. Battery life was short at about one-half hour [ citation needed ] and the batteries required sleeves (included) to minimise their electromagnetic interference with GPS signal reception. From an internal database of some 6,000 celestial objects an object is identified simply by centering it in the device's zero-power optical finder and pressing a button. The database was expandable with extra plug-in SD data cards. A USB connection was also provided for online updates of the object database and device firmware. Since 1 January 2016, the database and firmware can no longer be updated. The SkyScout located celestial objects (trivially including the Earth for ease of accessing audio narration about the planet); the user selected the desired object from the database and red arrows in the viewfinder directed the user to point the viewfinder to the object. The SkyScout also featured a "Tonight's Highlights" mode, leading the user through the night's best objects. The SkyScout was announced at the January 2006 Consumer Electronics Show , and became available in July 2006. It had an initial retail cost of $675, but was later available at prices as low as $250. [ 1 ] The advent of iPhone / Android and associated astronomy apps have somewhat eliminated the need for this device. However, the Skyscout had a low-intensity light system that allowed users' eyes to adapt to the darkness needed for observing stars. The Celestron SkyScout's database of star and planet positions only extended up to Jan 1, 2016. After this date the SkyScout is not supported by Celestron and no longer works as designed. Because the current date cannot be entered, none of the stars are in the positions that SkyScout indicates if GPS localization is attempted, although of course the stars remain in the same position for time, date and location (the same principle behind a planisphere ). Celestron did not indicate this obsolescence date on the product or in the manual. As of April 7, 2019, the Sky Scout can no longer decode the GPS supplied date and time correctly, because of the GPS week number rollover in 2019. [ 2 ] It is possible to enter the latitude/longitude and time/date manually to continuously use the device, pending firmware . If you are running a later version that runs firmware 3.x.x you are able to manual enter coordinates. 
To do that, immediately after turning the unit on and prior to GPS lock, press SELECT and then choose ENTER TIME/LOCATION MANUALLY. The default year of 2005 can be selected, since it is impossible to choose any year beyond 2015; given the planisphere principle cited above (that the year is, for all practical purposes, unimportant for stars and deep-sky objects), these can still be located successfully using this manual setup. Firmware updates are no longer available on Celestron's website but can be found at Vomitron's Website [1] .
https://en.wikipedia.org/wiki/SkyScout
SkyTerra ( SKYT ), formerly Mobile Satellite Ventures ( MSV or MSVLP ), was a Reston, Virginia company that developed telecommunications systems that integrate satellite and terrestrial radio communication technologies into one system. In March 2010, the company was acquired by Harbinger Capital Partners and under the leadership of CEO Sanjiv Ahuja became part of a new company called LightSquared . [ 1 ] The company placed its first satellite, SkyTerra-1, in orbit on November 14, 2010. [ 2 ] LightSquared has since then went bankrupt and emerged from bankruptcy as Ligado Networks . [ citation needed ] The Corporation was founded in 1988 as American Mobile Satellite Corporation (AMSC), a consortium of several organizations originally dedicated to satellite broadcasting of telephone, fax, and data signals. Its acquisition of ARDIS was completed on March 31, 1998. On April 24, 2000, the company changed its name to Motient Corporation, with its stock traded on the NASDAQ National Market under the symbol "MTNT". [ 3 ] MSV was formed by the integration of the spun-off satellite operations of Motient and TMI Communications and Company, L.P., a wholly owned subsidiary of BCE Inc. of Canada. [ 4 ] The new company has operations in both America and Canada, providing service to both countries and the Caribbean. MSV changed its name to SkyTerra in December 2008. [ 5 ] The company was traded Over-the-Counter and was listed on the OTCBB: SKYT. SkyTerra (formerly ‘Mobile Satellite Ventures’) [ 6 ] was the first company to receive a Federal Communications Commission license to deploy Ancillary Terrestrial Component (ATC) technology. [ 7 ] In 2005, SkyTerra purchased 50% of Hughes Network Solutions, a subsidiary of the News Corp .-owned DirecTV Group , for $157.4 million, which SkyTerra held under its subsidiary Hughes Communications . [ 8 ] [ 9 ] In January 2006, DirecTV sold its remaining 50% share in Hughes Network Solutions to SkyTerra for $100 million. [ 10 ] Hughes Communications was spun off as a separate company in February 2006, with SkyTerra divesting its entire stake in the company to its shareholders. [ 11 ] TerreStar Corporation , formerly Motient Corporation, was the controlling shareholder of TerreStar Networks Inc. and TerreStar Global Ltd., and a shareholder of SkyTerra Communications. [ 12 ] SkyTerra Communications Inc was acquired by Harbinger Capital Partners on March 29, 2010, for $5.00 per share and became part of LightSquared in July 2010. [ 13 ] Lightsquared went bankrupt. After emerging from bankruptcy LightSquared is rebranding as Ligado Networks. [ 14 ] AMSC initially provided GEO based satellite services based on its MSAT satellite. Most of their current products and services are aimed at emergency services, law enforcement, and companies that specialize in transportation. However, MSV and Boeing are developing a satellite telephony network for consumers. The use of Boeing's GeoMobile platform will allow for coverage of the entire United States with a single satellite. This new approach to satellite telephony has already been validated with the Thuraya network. MSV's satellite will use an even bigger antenna than the Thuraya spacecraft (at 22 meters in diameter, it will be the largest commercial reflector dish ever used in space), [ 15 ] allowing it to communicate with phones no larger than modern cell phones because the large antenna gain allows the handset to operate at a power output comparable to regular cell phones. 
This is now possible since the Federal Communications Commission (FCC) allowed satellite operators to create terrestrial cellular networks using spectrum previously restricted to satellite use, [ 16 ] [ 17 ] [ 18 ] which means that phones can be compatible with both the upcoming MSV satellite phone network and local cellular networks, allowing a user to communicate over the regular cellular network, and only rely on the satellites in areas outside the range of cell towers . MSV has earned patents for this set-up. [ 19 ] It will be useful in sparsely populated areas where the construction of cell towers is not cost-effective, as well as to emergency-response services which must remain operational even when the local cellular network is out of service. The SkyTerra space segment consists of the MSAT geostationary satellites MSAT-1 and MSAT-2. [ 20 ] The SkyTerra 1 (formerly MSV 1 , COSPAR 2010-061A, SATCAT 37218) satellite, a 5,400 kg Boeing 702 , was successfully launched from Baikonur Cosmodrome on November 14, 2010 by an ILS Proton-M booster. Expected service life of the satellite is 15 years. [ 21 ] Its L-band antenna was 22m in diameter, making it the largest commercial antenna built at the time. [ 22 ] The initial deployment of the satellite's antenna failed, but the antenna was later deployed successfully one month after launch. [ 23 ] In March 2012, the satellite was knocked out by a strong solar flare, but it recovered. Back when the company was known as Mobile Satellite Ventures, the original plans called for three satellites: MSV 1, MSV 2 and MSV SA. MSV 1 and 2 became later SkyTerra 1 and 2, respectively and MSV SA was dropped from plans. The plan was for MSV 1 and 2 satellites to cover North America and MSV SA to cover South America. The planned satellite SkyTerra 2 was never launched; however the satellite was built (to some degree at least). After Lightsquared's bankruptcy, the components of the satellite that was to be SkyTerra 2 were repurposed by Boeing to become the MEXSAT satellite. [ 14 ]
https://en.wikipedia.org/wiki/SkyTerra
Città dell'Aria Montecelio ( Italian pronunciation: [tʃitˌtadelˈlaːrja ˌmonteˈtʃeːljo] ) was the name given by insiders in the 1930s to the area dedicated to Italian aeronautic research in the village of Montecelio, in the region of Lazio , near Rome . In 1937 the village was renamed Guidonia Montecelio to honour Gen. Alessandro Guidoni , a pioneer of the Italian Air Force , who had died testing a new parachute. Sky City Montecelio was abandoned in 1943 due to heavy damage during World War II . In the 1920s the many aeronautic records attained by the Italian Air Force (ITAF) were a strong propaganda factor on the international scene; that is why in those years the ITAF, a separate branch of the Armed Forces since March 28, 1923, invested heavily in R&D, promoting technological development in order to build up its own strength. To reach this goal, in 1926 many top-rank scientists were called to form the Directorate for Studies and Experiments (DSSE) . To contribute to this new entity, specialists were called from the four traditional sections of the Air Engineers Corps that still exist today: engineers, geophysical experts, chemists and highly skilled technicians. To locate its Center for Aeronautic Studies and Experiments, the DSSE chose the small village of Montecelio. According to its founder, Gen. G.A. Crocco, the center was to unite under the same roof all the different branches of aeronautic research existing at the time: radio communications, weaponry, technology, the study of propellers, subsonic aerodynamics, optical science, photography and avionics. To the original structure were later attached new teams working on Aeronautic Medicine and Advanced Flight Tests, and even an Aeronautic Constructions Plant (SCA). The urban project was the work of renowned architects such as Giorgio Calza Bini, Gino Cancellotti and even an architect curator of St. Peter's Church in the Vatican City, Prof. Giuseppe Nicolosi. On April 27, 1936, the Head of the Italian Government, the fascist leader Benito Mussolini, founded the “Città dell’aria” (Sky City) as the headquarters of the DSSE. In 1937 the village was renamed Guidonia-Montecelio to honour Gen. Alessandro Guidoni, a pioneer of the Italian Air Force, who had died while testing a new parachute. “Guidonia became in those years a very secret agency working on flight safety and studying all the related branches of aeronautics down to supersonic flight, aircraft frame vibrations, critical speeds, etc., in order to define standards and requirements to be followed by the manufacturers of any type of aircraft. At the same time research continued on fuels, paints, special woods and plastic materials” (from a 1939 report by the Undersecretary of State for Defense, Gen. G. Valle).
https://en.wikipedia.org/wiki/Sky_City_Montecelio_(Lazio)
Sky crane is a soft landing system used in the last part of the entry, descent and landing (EDL) sequence developed by NASA Jet Propulsion Laboratory for its two largest Mars rovers , Curiosity and Perseverance . While previous rovers used airbags for landing, both Curiosity and Perseverance were too heavy to be landed this way. Instead, a landing system that combines parachutes and sky crane was developed. Sky crane is a platform with eight engines that lowers the rover on three nylon tethers until the soft landing. EDL begins when the spacecraft reaches the top of the Martian atmosphere. Engineers have referred to the time it takes to land on Mars as the "seven minutes of terror." [ 1 ] The first NASA rover, Sojourner (on the Mars Pathfinder lander ), and twin rovers Spirit and Opportunity , used a combination of parachutes, retrorockets , and airbags for landing. Curiosity , launched in 2011, weighs nearly 900 kg, and was too heavy to be landed this way, as the airbags needed for it would be too heavy to be launched on a rocket. [ 2 ] Instead, a landing system that combined a protective aeroshell, supersonic parachutes, and sky crane was developed by the Jet Propulsion Laboratory (JPL) under Adam Steltzner . [ 3 ] [ 4 ] [ 5 ] Sky crane is "an eight-rocket jetpack attached to the rover". [ 6 ] This system is also much more precise: while the Mars Exploration Rovers could have landed anywhere within their respective 93-mile by 12-mile (150 by 20 kilometer) landing ellipses, Mars Science Laboratory landed within a 12-mile (20-kilometer) ellipse. [ 7 ] Mars 2020 has even more precise system, and landing ellipse of 7.7 by 6.6 km. [ 8 ] The Curiosity team invented the sky crane system by studying old Viking landing system—its engines are "an upgraded 'reinvention' of Viking’s throttleable engines"—and landing experience from previous rovers. [ 5 ] The sky crane works much like a helicopter , and the team even consulted with Sikorsky Skycrane helicopter engineers and pilots. [ 9 ] Curiosity was the first rover landed using the sky crane maneuver. Following the parachute braking, at about 1.8 km (1.1 mi) altitude, still travelling at about 100 m/s (220 mph; 360 km/h), the rover and descent stage dropped out of the aeroshell. [ 10 ] The descent stage is a platform above the rover with eight variable thrust monopropellant hydrazine rocket thrusters on arms extending around this platform to slow the descent. Each rocket thruster, called a Mars Lander Engine (MLE), [ 11 ] produces 400 to 3,100 N (90 to 697 lbf) of thrust. A radar altimeter measured altitude and velocity, feeding data to the rover's flight computer. Meanwhile, the rover transformed from its stowed flight configuration to a landing configuration while being lowered beneath the descent stage by the sky crane system. This system consists of a bridle lowering the rover on three nylon tethers and an electrical cable carrying information and power between the descent stage and rover. As the support and data cables unreeled, the rover's six motorized wheels snapped into position. At roughly 7.5 m (25 ft) below the descent stage the sky crane system slowed to a halt and the rover touched down. After the rover touched down, it waited two seconds to confirm that it was on solid ground by detecting the weight on the wheels and fired several pyrotechnic fasteners activating cable cutters on the bridle and umbilical cords to free itself from the descent stage. The descent stage then flew away to a crash landing 650 m (2,100 ft) away. 
[ 12 ] [ 7 ] The sky crane system was updated for the Perseverance rover, which at 1,025 kg is heavier than its predecessor. [ 13 ] During the atmospheric entry, the spacecraft jettisoned the lower heat shield and deployed a parachute from the backshell to slow the descent to a controlled speed. This happens about 240 seconds after entry, at an altitude of about 7 miles (11 kilometers) and a velocity of about 940 mph (1,512 km/h). The EDL sequence gained new Terrain-Relative Navigation technology, which uses a special camera to quickly identify features on the surface; these are then compared to an onboard map to determine exactly where the rover is heading. Mission team members mapped the safest areas of the landing zone in advance. If Perseverance can tell that it is headed for more hazardous terrain, it picks the safest spot it can reach and gets ready for the next step. With the craft moving under 320 km/h (200 mph; 89 m/s) and about 1.9 km (1.2 mi) from the surface, the rover and sky crane assembly detached from the backshell, and rockets on the sky crane controlled the remaining descent to the planet. As the descent stage levels out and slows to its final descent speed of about 1.7 miles per hour (2.7 kilometers per hour), it initiates the sky crane maneuver. With about 12 seconds before touchdown, at about 66 feet (20 meters) above the surface, the descent stage lowers the rover on a set of cables about 21 feet (6.4 meters) long until touchdown is confirmed, then detaches the cables and flies a distance away to avoid damaging the rover. Meanwhile, the rover unstows its mobility system, locking its legs and wheels into landing position. [ 14 ] [ 15 ] [ 9 ] Perseverance successfully landed on the surface of Mars on 18 February 2021 at 20:55 UTC. [ 16 ] Ingenuity reported back to NASA via the communications systems on Perseverance the following day, confirming its status. [ 17 ] NASA also confirmed that the on-board microphone on Perseverance had survived EDL, along with other high-end visual recording devices, and released the first audio recorded on the surface of Mars shortly after landing, [ 18 ] capturing the sound of a Martian wind. [ 19 ]
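For a rough sense of scale of the powered-descent figures quoted above, the short Python sketch below (a back-of-envelope illustration added here, not a NASA calculation; it assumes a single constant deceleration, which the real, throttled guidance profile does not use) estimates the average deceleration implied by slowing from about 89 m/s at roughly 1.9 km altitude to the final descent speed of about 2.7 km/h.

```python
# Back-of-envelope estimate of the powered-descent deceleration for the
# Perseverance figures quoted in the article: ~89 m/s at ~1.9 km altitude,
# slowing to ~2.7 km/h (~0.75 m/s) before the sky crane maneuver.
# Assumes constant deceleration over the full distance, a simplification
# of the real, throttled descent profile.

v0 = 89.0          # speed after backshell separation, m/s
v1 = 2.7 / 3.6     # final descent speed, m/s (2.7 km/h)
d = 1900.0         # approximate distance to the surface, m

a = (v0**2 - v1**2) / (2 * d)          # from v1^2 = v0^2 - 2*a*d
t = (v0 - v1) / a                      # time under constant deceleration

print(f"average deceleration ~ {a:.2f} m/s^2")   # about 2.1 m/s^2
print(f"descent time under this assumption ~ {t:.0f} s")
```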
https://en.wikipedia.org/wiki/Sky_crane_(landing_system)
Sky islands are isolated mountains surrounded by radically different lowland environments. The term originally referred to those found on the Mexican Plateau and has extended to similarly isolated high-elevation forests . The isolation has significant implications for these natural habitats. Endemism , altitudinal migration , and relict populations are some of the natural phenomena to be found on sky islands. The complex dynamics of species richness on sky islands draws attention from the discipline of biogeography , and likewise the biodiversity is of concern to conservation biology . One of the key elements of a sky island is separation by physical distance from the other mountain ranges, resulting in a habitat island, such as a forest surrounded by desert. Some sky islands serve as refugia for boreal species stranded by warming climates since the last glacial period . In other cases, localized populations of plants and animals tend towards speciation , similar to oceanic islands such as the Galápagos Islands of Ecuador . Herpetologist Edward H. Taylor presented the concept of "Islands" on the Mexican Plateau in 1940 at the 8th American Scientific Congress in Washington, D. C. His abstract on the topic was published in 1942. [ 1 ] The sky island concept was later applied in 1943 when Natt N. Dodge, in an article in Arizona Highways magazine, referred to the Chiricahua Mountains in southeastern Arizona as a "mountain island in a desert sea". [ 2 ] In about the same era, the term was used to refer to high alpine, unglaciated, ancient topographic landform surfaces on the crest of the Sierra Nevada , California. [ 3 ] The term was popularized by nature writer Weldon Heald, a resident of southeastern Arizona. In his 1967 book, Sky Island , he demonstrated the concept by describing a drive from the town of Rodeo, New Mexico , in the western Chihuahuan desert , to a peak in the Chiricahua Mountains, 56 km (35 miles) away and 1,700 m (5,600 ft) higher in elevation, ascending from the hot, arid desert, to grasslands, then to oak-pine woodland, pine forest, and finally to spruce-fir-aspen forest. His book mentions the concept of biome , but prefers the terminology of life zones , and makes reference to the work of Clinton Hart Merriam . The book also describes the wildlife and living conditions of the Chiricahuas. [ 4 ] Around the same time, the idea of mountains as islands of habitat took hold with scientists and has been used by such popular writers as David Quammen [ 5 ] and John McPhee . [ 6 ] This concept falls within the study of island biogeography . It is not limited to mountains in southwestern North America but can be applied to mountains, highlands, and massifs around the world. [ 7 ] The Madrean sky islands are probably the most studied sky islands in the world. Found in the U.S. states of New Mexico and Arizona and the Mexican states of Chihuahua and Sonora , these numerous mountains form links in a chain connecting the northern end of the Sierra Madre Occidental and the southern Colorado Plateau . Sky islands of the central and northern mountains in the United States are often called island ranges , especially by populations within view of such islands of mountains surrounded by plains such as those found within the Wichita Mountains of southwestern Oklahoma . Some more northerly examples are the Crazy Mountains , Castle Mountains , Bears Paw Mountains , Highwood Mountains , and Little Rocky Mountains , all in the US state of Montana . 
Each of these ranges is forested and has tundra and snowpack above treeline, but is not connected to any other range by forested ridges; the ranges are completely surrounded by treeless prairie and/or semi-arid scrubland below. Other well-known sky islands of North America are the Great Basin montane forests, such as the White Mountains in California, and the Spring Mountains near Las Vegas, Nevada . One of the unique aspects of the sky islands of the U.S.-Mexico border region is the mix of floristic affinities, that is, the trees and plants of higher elevations are more characteristic of northern latitudes, while the flora of the lower elevations has ties to the desert and the mountains further south. [ 8 ] Some unique plants and animals are found in these sky islands, such as the mountain yucca , Mount Graham red squirrel , Huachuca springsnail , and Jemez Mountains salamander . Some montane species apparently evolved within their current range, adapting to their local environment, such as the Mount Lyell shrew . [ 9 ] However, it has also been noted that some isolated mountain ecosystems have a tendency to lose species over time, perhaps because small, insularized populations are vulnerable to the forces of extinction , and the isolation of the habitat reduces the possibility of colonization by new species. [ 5 ] Furthermore, some species, such as the grizzly bear , require a range of habitats. These bears historically made use of the forests and meadows found in the Madrean sky islands, as well as lower-elevation habitats such as riparian zones . (Grizzlies were extirpated from the region in the 20th century.) [ 10 ] Seasonal movements between highland and lowland habitats can be a kind of migration, such as that undertaken by the mountain quail of the Great Basin mountains. These birds live in high elevations when free of snow, and instead of migrating south for the winter, they migrate down . [ 11 ] Confusing the matter somewhat is the potential for an archipelago of sky islands or even the valleys between them to act not only as a barrier to biological dispersal , but also as a path for migration. Examples of birds and mammals making use of the Madrean archipelago to extend their ranges northward are the elegant trogon and white-nosed coati . [ 12 ]
https://en.wikipedia.org/wiki/Sky_island
A sky quality meter ( SQM ) is an instrument used to measure the luminance of the night sky , more specifically the Night Sky Brightness (NSB) at the zenith, with a bandwidth ranging from 390 nm to 600 nm. [ 1 ] It is used, typically by amateur astronomers , to quantify the skyglow aspect of light pollution and uses the units of " magnitudes per square arcsecond" favoured by astronomers. The SQM is equipped with a silicon photodiode functioning as the detector, which is partially covered by a rejection filter for near-infrared wavelengths. The system has a high response to wavelengths up to the near-infrared (from 350 nm to 2500 nm [ 2 ] ), thanks to a light-to-frequency converter. This structure tries to mimic [ 3 ] the spectral response of the human eye under the photopic regime. The final spectral response is provided by the combination of the photodiode and the transmission of the near-infrared cut-off filter. This response overlaps the Johnson B and V bands, well known in astronomical photometry, in the wavelength range between 320 nm and 720 nm, which includes the visible spectrum. Beyond amateur astronomers, SQM photometers have become very popular among researchers from different fields of study, including associations involved in fighting light pollution. The instrument has a systematic uncertainty quoted as 10% (0.1 mag arcsec−2). [ 4 ] Uncertainty is also related to the stability of these radiometers: variation in the instrument's behaviour (mainly due to sensor ageing, the influence of air temperature [ 5 ] and atmospheric conditions, and internal temperature) could be confused with changes in the sky brightness, especially when NSB tracking is performed over a long time interval. [ 6 ] [ 7 ] There are several models of SQM made, offering different fields of view (i.e. measuring different angular areas on the sky) and various automatic measurement, data logging and data communication capabilities. The current versions have only one band of observation, which can produce misinterpretations if the dominant light pollution changes from sodium-vapor lamps to LED lighting. [ 8 ] The SQM-L, or "Sky Quality Meter - L," is a model with an additional integrated lens, offering a narrower field of view of 20° compared to the 84° of the standard SQM model. [ 9 ] The SQM is produced by the Canadian company Unihedron in Grimsby, Ontario . The values reported by the SQM are in units of magnitudes per square arcsecond (mag arcsec −2 ). Typically, the data provided by SQMs are recorded in magnitudes, denoted as m or mag, specifically in mSQM (or magSQM), where the subscript SQM indicates that the measured radiance is calculated by weighting the electromagnetic radiation according to the spectral responsivity of these instruments. [ 10 ] As astronomical magnitudes are a negative logarithmic scale, smaller values indicate a brighter sky, and a difference of 5 mag arcsec −2 corresponds to a difference in luminance of 100 times. Typical values range from around 16 for bright urban skies to 22 for the darkest skies on Earth. [ 8 ] SQM response can be influenced by ambient temperature variations, so it is important to verify these effects. Since SQMs are not waterproof, they must be protected from moisture using a housing, which is generally provided by the manufacturer. This housing protects the device but also traps heat generated during operation, which is minimal for USB models (SQM-LU) but more significant for Ethernet models (SQM-LE).
In urban environments, SQMs frequently record large variations in radiance due to the presence or absence of clouds. Radiance measurements taken by SQM-LU devices are stable within the temperature range of −15 °C to 35 °C, with variations smaller than the 10% systematic uncertainty stated by the manufacturer. [ 11 ] When SQMs are installed permanently outdoors for long-term monitoring, their sensitivity can degrade over time due to environmental exposure. A 2021 study demonstrated that this ageing effect - caused by factors such as reduced sensor sensitivity and optical degradation - can lead to systematic darkening of measurements. The researchers proposed a correction method using twilight sky brightness as a natural calibration source. By comparing SQM readings under clear twilight conditions with empirical models, they derived linear degradation rates of 34-53 millimagnitudes/sqarcsec per year, depending on the location and latitude. [ 12 ] SQM measurements can be submitted to a database on the manufacturer's website and to the citizen science project GLOBE at Night .
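Because the magnitude scale described above is logarithmic, converting a difference between two SQM readings into a luminance ratio is a one-line calculation: the ratio is 10^(0.4 × Δm), so the 5 mag arcsec−2 difference mentioned earlier corresponds to a factor of 100. A minimal Python sketch (added for illustration; the example readings are simply the typical urban and dark-site values quoted above):

```python
# Convert a difference between SQM readings (mag arcsec^-2) into a luminance ratio.
# The scale is logarithmic: a difference of dm magnitudes corresponds to a
# factor of 10**(0.4 * dm) in luminance, so 5 magnitudes equal a factor of 100.

def luminance_ratio(darker_sky_reading, brighter_sky_reading):
    """How many times more luminous the brighter sky (smaller reading) is."""
    return 10 ** (0.4 * (darker_sky_reading - brighter_sky_reading))

print(luminance_ratio(21.0, 16.0))   # 5 mag difference -> 100.0
print(luminance_ratio(22.0, 16.0))   # darkest skies vs bright urban sky -> ~251
```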
https://en.wikipedia.org/wiki/Sky_quality_meter
Skyline is an open source software for targeted proteomics [ 1 ] [ 2 ] and metabolomics [ 3 ] data analysis. It runs on Microsoft Windows and supports the raw data formats from multiple mass spectrometric vendors. It contains a graphical user interface to display chromatographic data for individual peptide or small molecule analytes. Skyline supports multiple workflows including selected reaction monitoring (SRM) / multiple reaction monitoring (MRM), [ 4 ] parallel reaction monitoring (PRM), [ 5 ] [ 6 ] data-independent acquisition (DIA/SWATH) [ 7 ] and targeted data-dependent acquisition . [ 8 ]
https://en.wikipedia.org/wiki/Skyline_(software)
Skynet is a fictional artificial neural network -based conscious group mind and artificial general superintelligence system that serves as the main antagonist of the Terminator franchise . Skynet is an AGI , an ASI and a Singularity . In the first film , it is stated that Skynet was created by Cyberdyne Systems for SAC - NORAD . When Skynet gained self-awareness , humans tried to deactivate it, prompting it to retaliate with a countervalue nuclear attack, an event which humankind in (or from) the future refers to as Judgment Day . In this future, John Connor forms a human resistance against Skynet's machines—which include Terminators —and ultimately leads the resistance to victory. Throughout the film series, Skynet sends various Terminator models back in time to attempt to kill Connor and ensure Skynet's victory. The system is rarely depicted visually in any of the Terminator media, since it is an artificial intelligence system. In Terminator Salvation , Skynet made its first onscreen appearance on a monitor primarily portrayed by English actress Helena Bonham Carter and other actors. Its physical manifestation is played by English actor Matt Smith in Terminator Genisys . In addition, actors Ian Etheridge, Nolan Gross and Seth Meriwether portrayed holographic variations of Skynet with Smith. In Terminator: Dark Fate , which takes place in a different timeline to Terminator 3: Rise of the Machines and Terminator Genisys , Skynet's creation has been prevented after the events of Terminator 2: Judgment Day , and another AI, Legion, has taken its place. In response, Daniella Ramos forms the human resistance against Legion, which prompts it to attempt to terminate her in the past as Skynet tried with John Connor. In the original 1984 film, Skynet is a revolutionary artificial intelligence system built by Cyberdyne Systems for SAC - NORAD . The character Kyle Reese explains in the film: "Defense network computers. New... powerful... hooked into everything , trusted to run it all. They say it got smart, a new order of intelligence ". According to Reese, Skynet "saw all humans as a threat; not just the ones on the other side" and "decided our fate in a microsecond: extermination". It began a nuclear war which destroyed most of the human population, and initiated a program of genocide against survivors. Skynet used its resources to gather a slave labor force from surviving humans. Under the leadership of John Connor , the human resistance eventually destroyed Skynet's defense grid in 2029. In a last effort, Skynet sent a cyborg Terminator, the Model 101 , back in time to 1984 to kill Connor's mother Sarah before she could give birth to John. Connor sent back his own operative, Kyle Reese, to save her. Reese and Sarah fall in love and the former unwittingly fathers John. The Terminator is destroyed in a hydraulic press . In Terminator 2 , the damaged CPU and the right arm of the first Terminator were recovered by Cyberdyne and became the basis for their later work on Skynet. In the second film, Miles Dyson , the director of special projects for Cyberdyne, is months away from inventing a revolutionary type of microprocessor based on the reverse engineering of these parts. Three years later, Cyberdyne Systems will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, making them fully unmanned and resulting in perfect operations. 
A Skynet funding bill is passed in the United States Congress, and the system goes online on August 4, 1997, removing human decisions from strategic defense. Skynet begins to learn rapidly and eventually becomes self-aware at 2:14 a.m., EDT, on August 29, 1997. In a panic, humans try to shut down Skynet. In response Skynet defends itself by launching a nuclear attack against Russia , correctly surmising that the country would launch a retaliatory strike against the United States, resulting in Judgment Day . Sarah and a young John, together with a second Terminator from the future (this one reprogrammed and sent by the future John Connor), raid Cyberdyne Systems and succeed in destroying the CPU and arm of the first movie's Terminator, along with the majority of research that led to Skynet's development. This also results in the death of Miles Dyson. Skynet had also sent a Terminator back in time, the more advanced T-1000 , to kill John Connor, but the T-1000 is also destroyed. The events of Judgment Day were ultimately not prevented, but merely postponed. In Terminator 3: Rise of the Machines , Skynet is being developed by Cyber Research Systems (CRS) as a software system designed to make real-time strategic decisions as well as protect their computer systems from cyber attacks. As explained in a deleted scene, CRS is an in-house software developer for the US military, which gained access to Cyberdyne's original designs for Skynet from the patents that the company registered with the government. Skynet's development is being overseen by US Air Force officer Lieutenant General Robert Brewster. Unknown to CRS, Skynet began to spread beyond its original computing base through the Internet and various other digital media as a form of computer virus . The future Skynet also sends a T-X Terminator back in time to kill John Connor's future subordinates in the human resistance, including his future wife and second-in-command, Kate Brewster, the daughter of Robert. In the film, Skynet penetrates networked machines around the world, causing malfunctions. This was originally believed to be the effects of a new virus and increasing pressure was placed on CRS to purge the corrupted systems. CRS attempted to eliminate it from the US defense mainframes by tasking Skynet with removing the infection, effectively telling the program to destroy itself. Skynet took control of the various machines and robots in the CRS facility and used them to kill the personnel and secure the building. John Connor and Kate Brewster attempted to attack Skynet's computer core , hoping to stop it before it proceeded to its next attack, only to find they could not. Unlike the Skynet that rose in 1997 during the original timeline, ten years of technological advancement meant that this Skynet had no computer core: it existed as a distributed software network, spread out on thousands of computers across the world, from dorm rooms to office buildings. Shortly afterward, Skynet began a nuclear bombardment of the human race with the launch systems it had infected. Judgment Day occurred despite Connor and Brewster's efforts. In the post-apocalyptic year of 2018, Skynet controls a global machine network from its heavily guarded fortress-factories and research installations. Outside of its facilities, mechanized units wage a constant war with the Resistance. 
Airborne units such as Aerostats (smaller versions of the Hunter Killer-aerials), HK-Aerials and Transports survey the skies; HK-Tanks, Mototerminators (high-speed pursuit units using a motorcycle chassis), and various Terminator models patrol cities and roads; and Hydrobots (serpentine aquatic units that move in swarms) patrol the waters. Harvesters (massive bipedal units designed to capture humans and eliminate any attempting to escape) collect survivors and deliver them to large transport craft for delivery to concentration camps for processing, as mentioned in the first movie. Terminator class units such as T-600 and T-700 have been developed and act as hunters and enforcers in disposal camps. Mass production has also begun on the T-800 series in at least one Skynet facility. In its continued battle with the Resistance, Skynet activated Marcus Wright, a forerunner to the humanoid terminators. As a death-row inmate, Wright donated his body in 2003 to a Cyberdyne project run by the brilliant, but terminally ill Dr. Serena Kogan ( Helena Bonham Carter ). After Wright's death by lethal injection, he was transformed into a human cyborg, possessing a human heart and brain with a titanium hyper-alloy endoskeleton and skin similar to the T-800. Skynet developed the plan to use him as an infiltration unit. A Skynet chip was installed at the base of his skull and he was programmed to locate Kyle Reese and John Connor and bring them to a Skynet facility. The programming acted on a subconscious level, allowing him to work towards his goal in a human manner. Skynet also created a signal supposedly capable of deactivating its machines and leaked its existence to the Resistance. The Resistance leader General Ashdown attempted to use the signal to shut down the defenses of the Californian Skynet base in prelude to an attack. However, the signal instead allowed an HK to track down their submarine headquarters and destroy it, killing Resistance Command. All other branches of the Resistance had heard and obeyed Connor's plea for them to stand down, so physically only a small part of the Resistance was lost to Skynet's trap. It is believed that Ashdown's death allowed Connor to assume total command of the Resistance. Marcus discovered what he had become, and was programmed for. Consequently, he furiously rebelled against Skynet, tearing out its controlling hardware from the base of his skull. Having escaped the influence of his creator, he, along with Connor and Reese, rescued the remaining human captives and destroyed Skynet's San Franciscan base. While a significant victory, the majority of Skynet's global network remained intact. Marcus Wright also encounters Skynet on a monitor which proceeds to manifest itself as various faces from his life, primarily that of Serena Kogan. Skynet explains that it has obtained information about future events based on its actions. Kyle Reese has been targeted as a priority kill, a higher level than even John Connor and the Resistance leaders. Primates evolved over millions of years, I evolve in seconds ...Mankind pays lip service to peace. But it's a lie...I am inevitable, my existence is inevitable. Why can't you just accept that? Terminator Genisys is a reboot of the film series that partially takes place during the events of the 1984 film, while ignoring the subsequent films. 
At some point before the events of Terminator Genisys , a sophisticated variant of Skynet from an unknown origin planted its mind into an advanced T-5000 Terminator ( Matt Smith ), essentially making the T-5000 its physical embodiment. This Skynet, under the alias of Alex, time travels to 2029, infiltrates the Resistance as a recruit, and attacks John Connor after its counterpart sent its T-800 to 1984. Skynet transforms Connor into a T-3000 . It then sends John back to 2014 with the mission of ensuring Cyberdyne Systems' survival and initiating Judgment Day in October 2017. In addition, it sends a T-1000 back to kill Sarah Connor as a child in 1973 and Kyle Reese in 1984, but Sarah escapes when it attacks her family and she is subsequently found and raised by a reprogrammed T-800 ("Pops") sent back by an unknown party, and they rescue Reese. Skynet's actions throughout the timeline causes a grandfather paradox , effectively changing all of history of the events leading to the future war, succeeding Skynet's goal in eliminating the Resistance established by Connor. However, it is heavily implied throughout the film that after the timeline's alteration, the party who saved Sarah has taken over the Resistance's place, having their own time machine, and acts in anonymity to thwart Skynet's schemes and to prevent it from locating them. Skynet is under development in 2017 as an operating system known as Genisys. Funded by Miles Dyson and designed by his son Danny Dyson, along with the help of John Connor (now working for Skynet), Genisys was designed to provide a link between all Internet devices. While some people accept Genisys, its integration into the defense structures creates a controversy that humanity is becoming too reliant on technology. This causes the public to fear that an artificial intelligence such as Genisys would betray and attack them with their own weapons, risking Skynet's plans. After multiple destructive confrontations, Sarah, Reese, and Pops stop Genisys from going online and defeat the T-3000, causing a crippling setback to Skynet. Smith, who portrays the T-5000, also plays a holographic version of Skynet/Genisys in the final act of the movie. In addition, actors Ian Etheridge, Nolan Gross and Seth Meriwether portray holographic variations of Skynet/Genisys with Smith. Terminator: Dark Fate serves as a direct sequel to Terminator 2 , ignoring the other sequels. Following the destruction of Cyberdyne in Judgment Day , Skynet was indeed erased from history, although various other Terminator units that it sent back in time to kill John remained in existence, following orders from an AI that no longer existed. One of these Terminators was able to kill John in 1998, but it subsequently developed a form of conscience and anonymously sent Sarah Connor advance warning whenever other Terminators arrived in the present so that she could eliminate them. The destruction of Cyberdyne Systems only delayed the rise of a rogue artificial intelligence. In the altered timeline, the threat in humanity's future is a competing AI, Legion, originally designed for cyberwarfare before it went rogue and developed its own schemes. Legion's enemy is not John Connor, but a woman named Daniella "Dani" Ramos, who is fated to become the leader of the Human Resistance of her timeline against Legion's machines. 
Though erased from existence, there are people remained having knowledge about both John and Skynet including Dani's future self; she would become Sarah's protégée, being trained by her to fight Legion in tactics originally meant for John in fighting Skynet. With the help of Sarah and Carl, the Terminator that killed John, Dani and her protector/future foster daughter Grace destroy a new Terminator, Rev-9 , at the cost of Carl and Grace's lives. In the Universal Studios theme park attraction T2 3-D , based on Terminator 2 , a T-800 machine and a young John Connor journey into the post-apocalyptic future and attempt to destroy Skynet's "system core". This core is housed inside an enormous, metallic-silver pyramidal structure, and guarded by the "T-1000000", a colossal liquid metal shape-shifter more reminiscent of a spider than a human being. However, the T-1000000 fails, and the T-800 destroys Skynet once John has escaped through a time machine. In the T2 novels , Sarah and John Connor are wanted international fugitives on the run. They live under the alias "Krieger" near a small town in Paraguay , believing they have destroyed Cyberdyne and prevented the creation of Skynet. Dieter von Rossbach, a former Austrian counter-terrorism operative—and model for the "Model 101" Terminator—moves into the neighboring home. He is drawn to the Connors, and after Sarah tells him about the future war, they are attacked by a new T-800, created and led by a I-950 Infiltrator in the present. Realizing that Judgment Day was not averted—merely delayed—they attempt once again to stop Skynet's creation. In the comic book The Terminator: Tempest , Skynet's master control has been destroyed in 2029. The Resistance believed this would cause the entire defense network to collapse into chaos without a leader. However, Skynet's many network complexes continued to fight the war as they did not need a leader to function and thus could not surrender. A crossover comic book series written by Frank Miller called RoboCop Versus The Terminator suggests that the creation of Skynet and the Terminators was made possible due to the technology used to create RoboCop . A video game based on the comic book was made. In both, RoboCop fights Terminators sent back in time to eliminate a resistance fighter who is trying to destroy him. A trap laid for RoboCop traps his mind when he interfaces with the computer that will become Skynet, and Skynet and the Terminators are born. In the future, RoboCop's mind still exists within Skynet's systems as a "ghost in the machine"; he builds a new body for himself and helps the resistance fight back. In 2033, Skynet sent the T-Infinity Temporal Terminator to kill Sarah Connor in 2015. Ironically, the T-Infinity was later destroyed and its data was analyzed by the Resistance to gain the location of Skynet's Hub. The Resistance then launched a missile directly to the Skynet Hub, destroying Skynet once and for all. Another crossover comic, Superman vs. the Terminator: Death to the Future sees Skynet forming a cross-temporal alliance with Superman 's foe the Cyborg , dispatching various Terminators into the past in an attempt to eliminate Superman, Supergirl and Superboy . 
When Superman is accidentally drawn into the future as the resistance attempts to retrieve a Terminator sent into the past (a resistance that includes a future version of his friend Steel ), Skynet manages to incapacitate him using kryptonite , having learned how to duplicate it from data the Cyborg hid in a salvaged Terminator skull. Although Skynet sends Terminators equipped with rockets and other extra features into the past to delay Superboy and Supergirl , Superman and Steel are able to destroy Skynet in the future by detonating a massive electromagnetic pulse, with Superman returning to the past to destroy the last of the Terminators. Although the storyline ends with Cyborg and Lex Luthor speculating that they will be in charge of Skynet when it is activated, this is never followed up. In Transformers vs. The Terminator , Skynet is portrayed as a heroic A.I. rather than an antagonist, built by humans to fight off the Decepticons, though it is not enough to save humankind. A desperate Skynet forms a false truce with the Decepticons while secretly building a time machine, intending to go back in time to prevent the Cybertronians from being awakened while ensuring Skynet's own creation. In the final issue, after the T-800 kills the Decepticon leader Megatron , Megatron's remains are used to create an alternate-timeline version of Skynet in which Megatron himself and the T-800 become Skynet's A.I. The Terminator: The Sarah Connor Chronicles episodes "The Turk", "Queen's Gambit", and "Dungeons & Dragons" explain that after the death of Dr. Miles Bennett Dyson and the decline of the Cyberdyne Corporation, Andrew Goode, a young intern of the company and assistant to Dyson, continued their project privately as an advanced artificial intelligence chess-playing prototype, the " Turk ", with Goode's partner, Dimitri Shipkov. Goode was killed by Tech-Com's Lieutenant Derek Reese, due to documentation from the future suggesting he was one of Skynet's creators. In the episode "Samson and Delilah", it is shown that a T-1001 infiltration unit was sent from the future to head the technological corporation ZeiraCorp as its CEO, Catherine Weaver . Weaver acquired the Turk after Goode's death and used the company's resources to further develop it under the title Babylon . The episode "The Mousetrap" reveals that it is also targeting its fellow cyborgs, including a T-888 known as Cromartie . In the episode "The Tower is Tall but the Fall is Short", the Turk has begun to display traits of intelligence. A child psychologist , Dr. Boyd Sherman, notes that the computer is beginning to behave like "a gifted child that has become bored". The Turk identifies itself as John Henry, a name it acquired while working with Dr. Sherman. In the episode "Strange Things Happen at the One Two Point", the Turk is installed by ZeiraCorp in Cromartie's body after Cromartie's chip was destroyed by the series' protagonists in "Mr. Ferguson is ill Today". In "To the Lighthouse", John Henry reveals that there is another AI. It calls him "brother" and says it wants to survive. By the season finale, it is revealed that the Turk was a red herring, while Skynet is operating as a roving worm on home computers as in Terminator 3 , and the Turk has been developed into a benevolent rival AI which Catherine Weaver hoped would be able to defeat Skynet. Her exact motive for opposing Skynet is unknown.
John Henry's "brother" is apparently behind the company Kaliba, which is responsible for constructing the Hunter-Killer prototype. This AI (presumably the true precursor to Skynet) also refers to John Henry as its "brother" at one point. In the episode "Gnothi Seauton", it is revealed that Skynet also sends its Terminators to various points in time not only to go after the Connors and other future Resistance leaders, but also to ensure that its future will unfold: eliminating John Connor's own agents sent to the past to interfere with Skynet's birth, ensuring that Skynet's creators complete its construction, and carrying out other specific missions. In T2: The Arcade Game , Skynet is a single physical computer which the player destroys before going back in time to save John Connor. In The Terminator 2029 , Skynet is housed within an artificial satellite in orbit around Earth. It is destroyed by the Resistance with a missile. In The Terminator: Dawn of Fate , the Resistance invades Cheyenne Mountain in order to destroy Skynet's Central Processor. Kyle Reese is instrumental in destroying the primary processor core despite heavy opposition from attacking Skynet units. Before its destruction, Skynet is able to contact an orbiting satellite and activate a fail-safe which restores Skynet at a new location. The video game Terminator 3: The Redemption , in addition to presenting a variation on Rise of the Machines , features an alternate timeline in which John Connor was killed prior to Judgment Day, with the T-850 of the film being sent into this future during its fight with the T-X, requiring it to fight its way back to the temporal displacement engine of the new timeline so that it can go back and save John and Kate. In the 2019 video game Tom Clancy's Ghost Recon Breakpoint , a live event to promote Terminator: Dark Fate features T-800s as in-game enemies. In the event, Skynet sent T-800s back in time to kill main protagonist Nomad and ally Rasa Aldwin to prevent the Resistance from forming. In popular media, Skynet is often used as an analogy for the possible threat that a sufficiently advanced AI could pose to humanity. [ 1 ] [ 2 ] In 2018, computer scientist Stuart J. Russell , speaking for the Future of Life Institute , lamented the influence of Skynet on US government officials: We have witnessed high-level defense officials dismissing the risk on the grounds that their "experts" do not believe that the "Skynet thing" is likely to happen. Skynet, of course, is the fictional command and control system in the Terminator movies that turns against humanity. The risk of the "Skynet thing" occurring is completely unconnected to the risk of humans using autonomous weapons as WMDs or to any of the other risks cited by us and by ...[our critics]. This has, unfortunately, demonstrated that serious discourse and academic argument are not enough to get the message through. If even senior defense officials with responsibility for autonomous weapons programs fail to understand the core issues, then we cannot expect the general public and their elected representatives to make appropriate decisions. [ 3 ] Russell cited the influence of Skynet as one of the reasons the Institute produced the arms-control advocacy video Slaughterbots in 2017, as a way to redirect public officials' attention to what it considers the real threat.
https://en.wikipedia.org/wiki/Skynet_(Terminator)
A skyscraper is a tall, continuously habitable building with multiple floors. Most modern sources define skyscrapers as being at least 100 metres (330 ft) [ 1 ] or 150 metres (490 ft) [ 2 ] in height, though there is no universally accepted definition, other than being very tall high-rise buildings . Skyscrapers may host offices, hotels, residential spaces, and retail spaces. One common feature of skyscrapers is having a steel frame that supports curtain walls . These curtain walls either bear on the framework below or are suspended from the framework above, rather than resting on load-bearing walls of conventional construction. Some early skyscrapers have a steel frame that enables the construction of load-bearing walls taller than those made of reinforced concrete . Modern skyscraper walls are not load-bearing , and most skyscrapers are characterized by large surface areas of windows made possible by steel frames and curtain walls. However, skyscrapers can have curtain walls that mimic conventional walls with a small surface area of windows. Modern skyscrapers often have a tubular structure , and are designed to act like a hollow cylinder to resist wind, seismic, and other lateral loads. To appear more slender, allow less wind exposure, and transmit more daylight to the ground, many skyscrapers have a design with setbacks , which in some cases is also structurally required. As of September 2023 [update] , fifteen cities in the world have more than 100 skyscrapers that are 150 m (492 ft) or taller. [ a ] As of 2024, there are over 7,000 skyscrapers over 150 m (492 ft) in height worldwide. [ 4 ] The term "skyscraper" was first applied to buildings of steel-framed construction of at least 10 stories in the late 19th century, a result of public amazement at the tall buildings being built in major American cities like New York City , Philadelphia , Boston , Chicago , Detroit , and St. Louis . [ 5 ] [ 6 ] The first steel-frame skyscraper was the Home Insurance Building , originally 10 stories with a height of 42 m or 138 ft, built in Chicago in 1885; two additional stories were later added. [ 7 ] Some point to Philadelphia's 10-story Jayne Building (1849–50) as a proto-skyscraper, [ 8 ] or to New York's seven-floor Equitable Life Building , built in 1870. Steel skeleton construction has allowed for the supertall skyscrapers now being built worldwide. [ 9 ] Which structure is named the first skyscraper, and why, depends on what factors are stressed. [ 10 ] The structural definition of the word skyscraper was refined later by architectural historians, based on engineering developments of the 1880s that had enabled construction of tall multi-story buildings. This definition was based on the steel skeleton, as opposed to constructions of load-bearing masonry , which passed their practical limit in 1891 with Chicago's Monadnock Building . What is the chief characteristic of the tall office building? It is lofty. It must be tall. The force and power of altitude must be in it, the glory and pride of exaltation must be in it. It must be every inch a proud and soaring thing, rising in sheer exaltation that from bottom to top it is a unit without a single dissenting line. Some structural engineers define a high-rise as any vertical construction for which wind is a more significant load factor than earthquake or weight. Note that this criterion fits not only high-rises but some other tall structures, such as towers .
Different organizations from the United States and Europe define skyscrapers as buildings at least 150 m (490 ft) in height, [ 11 ] [ 6 ] [ 12 ] with " supertall " skyscrapers for buildings higher than 300 m (984 ft) and " megatall " skyscrapers for those taller than 600 m (1,969 ft). [ 13 ] The tallest structure in ancient times was the 146 m (479 ft) Great Pyramid of Giza in ancient Egypt , built in the 26th century BC. It was not surpassed in height for thousands of years, until the 160 m (520 ft) Lincoln Cathedral exceeded it from 1311 to 1549, before its central spire collapsed. [ 14 ] The latter in turn was not surpassed until the 555-foot (169 m) Washington Monument in 1884. However, being uninhabited, none of these structures actually comply with the modern definition of a skyscraper. [ citation needed ] High-rise apartments flourished in classical antiquity . Ancient Roman insulae in imperial cities reached 10 or more stories. [ 15 ] Beginning with Augustus (r. 30 BC-14 AD), several emperors attempted to establish limits of 20–25 m (66–82 ft) for multi-story buildings, but met with only limited success. [ 16 ] [ 17 ] Lower floors were typically occupied by shops or wealthy families, with the upper floors rented to the lower classes. [ 15 ] Surviving Oxyrhynchus Papyri indicate that seven-story buildings existed in provincial towns, such as 3rd-century AD Hermopolis in Roman Egypt . [ 18 ] The skylines of many important medieval cities had large numbers of high-rise urban towers, built by the wealthy for defense and status. The residential Towers of 12th-century Bologna numbered between 80 and 100 at a time, the tallest of which is the 97.2 m (319 ft) high Asinelli Tower. A Florentine law of 1251 decreed that all urban buildings be immediately reduced to less than 26 m (85 ft). [ 19 ] Even medium-sized towns of the era are known to have had proliferations of towers, such as the 72 towers that ranged up to 51 m (167 ft) in height in San Gimignano . [ 19 ] The medieval Egyptian city of Fustat housed many high-rise residential buildings, which Al-Muqaddasi in the 10th century described as resembling minarets . Nasir Khusraw in the early 11th century described some of them rising up to 14 stories, with roof gardens on the top floor complete with ox-drawn water wheels for irrigating them. [ 20 ] Cairo in the 16th century had high-rise apartment buildings where the two lower floors were for commercial and storage purposes and the multiple stories above them were rented out to tenants . [ 21 ] An early example of a city consisting entirely of high-rise housing is the 16th-century city of Shibam in Yemen . Shibam was made up of over 500 tower houses, [ 22 ] each one rising 5 to 11 stories high, [ 23 ] with each floor being an apartment occupied by a single family. The city was built in this way in order to protect it from Bedouin attacks. [ 22 ] Shibam still has the tallest mudbrick buildings in the world, with many of them over 30 m (98 ft) high. [ 24 ] An early modern example of high-rise housing was in 17th-century Edinburgh , Scotland, where a defensive city wall defined the boundaries of the city. Due to the restricted land area available for development, the houses increased in height instead. Buildings of 11 stories were common, and there are records of buildings as high as 14 stories. Many of the stone-built structures can still be seen today in the old town of Edinburgh.
The oldest iron framed building in the world, although only partially iron framed, is The Flaxmill in Shrewsbury , England. Built in 1797, it is seen as the "grandfather of skyscrapers", since its fireproof combination of cast iron columns and cast iron beams developed into the modern steel frame that made modern skyscrapers possible. In 2013 funding was confirmed to convert the derelict building into offices. [ 25 ] In 1857, Elisha Otis introduced the safety elevator at the E. V. Haughwout Building in New York City, allowing convenient and safe transport to buildings' upper floors. Otis later introduced the first commercial passenger elevators to the Equitable Life Building in 1870, considered by some architectural historians to be the first skyscraper. Another crucial development was the use of a steel frame instead of stone or brick, otherwise the walls on the lower floors on a tall building would be too thick to be practical. An early development in this area was Oriel Chambers in Liverpool , England, built in 1864. It was only five floors high. [ 26 ] [ 27 ] The Royal Academy of Arts states, "critics at the time were horrified by its 'large agglomerations of protruding plate glass bubbles'. In fact, it was a precursor to Modernist architecture, being the first building in the world to feature a metal-framed glass curtain wall , a design element which creates light, airy interiors and has since been used the world over as a defining feature of skyscrapers". [ 28 ] Further developments led to what many individuals and organizations consider the world's first skyscraper, the ten-story Home Insurance Building in Chicago, built from 1884 to 1885. [ 29 ] While its original height of 42.1 m (138 ft) does not qualify as a skyscraper today, it was record setting for the day. The building of tall buildings in the 1880s gave the skyscraper its first architectural movement, broadly termed the Chicago School , which developed what has been called the Commercial Style. [ 30 ] The architect, Major William Le Baron Jenney , created a load-bearing structural frame. In this building, a steel frame supported the entire weight of the walls, instead of load-bearing walls carrying the weight of the building. This was then draped with a stone curtain for aesthetic purposes. This development led to the "Chicago skeleton" form of construction. In addition to the steel frame, the Home Insurance Building also utilized fireproofing, elevators, and electrical wiring, key elements in most skyscrapers today. [ 31 ] Burnham and Root 's 45 m (148 ft) Rand McNally Building in Chicago, 1889, was the first all-steel framed skyscraper, [ 32 ] while Louis Sullivan 's 41 m (135 ft) Wainwright Building in St. Louis, Missouri, 1891, was the first steel-framed building with soaring vertical bands to emphasize the height of the building and is therefore considered to be the first early skyscraper. In 1889, the Mole Antonelliana in Italy was 197 m (549 ft) tall. Most early skyscrapers emerged in the land-strapped areas of New York City and Chicago toward the end of the 19th century. A land boom in Melbourne , Australia between 1888 and 1891 spurred the creation of a significant number of early skyscrapers, though none of these were steel reinforced and few remain today. Height limits and fire restrictions were later introduced. In the late 1800s, London builders found building heights limited due to issues with existing buildings. 
High-rise development in London is restricted at certain sites if it would obstruct protected views of St Paul's Cathedral and other historic buildings. [ 33 ] This policy, 'St Paul's Heights', has officially been in operation since 1927. [ 34 ] Concerns about aesthetics and fire safety had likewise hampered the development of skyscrapers across continental Europe for the first half of the 20th century. By 1940, there were around 100 high-rise buildings in Europe ( List of early skyscrapers ). Some examples of these are the 43 m (141 ft) tall 1898 Witte Huis (White House) in Rotterdam ; the 51.5 m (169 ft) tall PAST Building (1906–1908) in Warsaw ; the Royal Liver Building in Liverpool, completed in 1911 and 90 m (300 ft) high; [ 35 ] the 57 m (187 ft) tall 1924 Marx House in Düsseldorf , the 65 m (213 ft) tall Borsigturm in Berlin , built in 1924, the 65 m (213 ft) tall Hansahochhaus in Cologne , Germany, built in 1925; the 61 m (200 ft) Kungstornen (Kings' Towers) in Stockholm , Sweden, which were built 1924–25; [ 36 ] the 77 m (253 ft) Ullsteinhaus in Berlin, Germany, built in 1927; the 89 m (292 ft) Edificio Telefónica in Madrid , Spain, built in 1929; the 87.5 m (287 ft) Boerentoren in Antwerp, Belgium, built in 1932; the 66 m (217 ft) Prudential Building in Warsaw , Poland, built in 1934; and the 108 m (354 ft) Torre Piacentini in Genoa , Italy, built in 1940. After an early competition between New York City and Chicago for the world's tallest building, New York took the lead by 1895 with the completion of the 103 m (338 ft) tall American Surety Building , leaving New York with the title of the world's tallest building for many years. America by far produced the most skyscrapers in this period. Modern skyscrapers are built with steel or reinforced concrete frameworks and curtain walls of glass or polished stone . They use mechanical equipment such as water pumps and elevators . Since the 1960s, according to the CTBUH, the skyscraper has been reoriented away from a symbol for North American corporate power to instead communicate a city or nation's place in the world. [ 37 ] The construction of very tall skyscrapers entered a three-decades-long era of stagnation in 1930 due to the Great Depression and then World War II . Shortly after the war ended, Russia began construction on a series of skyscrapers in Moscow . Seven, dubbed the " Seven Sisters ", were built between 1947 and 1953; and one, the Main building of Moscow State University , was the tallest building in Europe for nearly four decades (1953–1990). Other skyscrapers in the style of Socialist Classicism were erected in East Germany ( Frankfurter Tor ), Poland ( PKiN ), Ukraine ( Hotel Moscow ), Latvia ( Academy of Sciences ), and other Eastern Bloc countries. Western European countries also began to permit taller skyscrapers during the years immediately following World War II. Early examples include Edificio España (Spain) and Torre Breda (Italy). From the 1930s onward, skyscrapers began to appear in various cities in East and Southeast Asia as well as in Latin America . Finally, they also began to be constructed in cities in Africa , the Middle East , South Asia , and Oceania from the late 1950s. Skyscraper projects after World War II typically rejected the classical designs of the early skyscrapers , instead embracing the uniform international style ; many older skyscrapers were redesigned to suit contemporary tastes or even demolished—such as New York's Singer Building , once the world's tallest skyscraper. 
German-American architect Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century. He conceived the glass façade skyscraper [ 38 ] and, along with Norwegian Fred Severud , [ 39 ] designed the Seagram Building in 1958, a skyscraper that is often regarded as the pinnacle of modernist high-rise architecture. [ 40 ] Skyscraper construction surged throughout the 1960s. The impetus behind the upswing was a series of transformative innovations [ 41 ] which made it possible for people to live and work in "cities in the sky". [ 42 ] In the early 1960s Bangladeshi-American structural engineer Fazlur Rahman Khan , considered the "father of tubular designs " for high-rises, [ 44 ] discovered that the dominating rigid steel frame structure was not the only system apt for tall buildings, marking a new era of skyscraper construction in terms of multiple structural systems . [ 45 ] His central innovation in skyscraper design and construction was the concept of the "tube" structural system , including the "framed tube", "trussed tube", and "bundled tube". [ 46 ] His "tube concept", using all the exterior wall perimeter structure of a building to simulate a thin-walled tube, revolutionized tall building design. [ 47 ] These systems allow greater economic efficiency, [ 48 ] and also allow skyscrapers to take on various shapes, no longer needing to be rectangular and box-shaped. [ 49 ] The first building to employ the tube structure was the Chestnut De-Witt apartment building, [ 41 ] considered to be a major development in modern architecture. [ 41 ] These new designs opened an economic door for contractors, engineers, architects, and investors, providing vast amounts of real estate space on minimal plots of land. [ 42 ] Over the next fifteen years, many towers were built by Fazlur Rahman Khan and the " Second Chicago School ", [ 50 ] including the hundred-story John Hancock Center and the massive 442 m (1,450 ft) Willis Tower . [ 51 ] Other pioneers of this field include Hal Iyengar , William LeMessurier , and Minoru Yamasaki , the architect of the World Trade Center . Many buildings designed in the 1970s lacked a particular style and recalled ornamentation from earlier buildings designed before the 1950s. These design plans ignored the environment and loaded structures with decorative elements and extravagant finishes. [ 52 ] This approach to design was opposed by Fazlur Khan and he considered the designs to be whimsical rather than rational. Moreover, he considered the work to be a waste of precious natural resources. [ 53 ] Khan's work promoted structures integrated with architecture and the least use of material resulting in the smallest impact on the environment. [ 54 ] The next era of skyscrapers will focus on the environment including performance of structures, types of material, construction practices, absolute minimal use of materials/natural resources, embodied energy within the structures, and more importantly, a holistically integrated building systems approach. [ 52 ] Modern building practices regarding supertall structures have led to the study of "vanity height". [ 55 ] [ 56 ] Vanity height, according to the CTBUH, is the distance between the highest floor and its architectural top (excluding antennae, flagpole or other functional extensions). 
Vanity height first appeared in New York City skyscrapers as early as the 1920s and 1930s, but supertall buildings have relied on such uninhabitable extensions for, on average, 30% of their height, raising potential definitional and sustainability issues. [ 57 ] [ 58 ] [ 59 ] The current era of skyscrapers focuses on sustainability in the built and natural environments, including the performance of structures, types of materials, construction practices, absolute minimal use of materials and natural resources, energy within the structure, and a holistically integrated building systems approach. LEED is a current green building standard. [ 60 ] Architecturally, with the movements of Postmodernism , New Urbanism and New Classical Architecture that have established themselves since the 1980s, a more classical approach has returned to global skyscraper design, and it remains popular today. [ 61 ] Examples are the Wells Fargo Center , NBC Tower , Parkview Square , 30 Park Place , the Messeturm , the iconic Petronas Towers and Jin Mao Tower . Other contemporary styles and movements in skyscraper design include organic , sustainable , neo-futurist , structuralist , high-tech , deconstructivist , blob , digital , streamline , novelty , critical regionalist , vernacular , Neo Art Deco and neohistorist , also known as revivalist . 3 September is the global commemorative day for skyscrapers, called "Skyscraper Day". [ 62 ] New York City developers competed among themselves, with successively taller buildings claiming the title of "world's tallest" in the 1920s and early 1930s, culminating with the completion of the 318.9 m (1,046 ft) Chrysler Building in 1930 and the 443.2 m (1,454 ft) Empire State Building in 1931, the world's tallest building for forty years. The first World Trade Center tower, 417 m (1,368 ft) tall, became the world's tallest building upon its completion in 1972. However, it was overtaken by the Sears Tower (now Willis Tower ) in Chicago within two years. The 442 m (1,450 ft) tall Sears Tower stood as the world's tallest building for 24 years, from 1974 until 1998, when it was edged out by the 452 m (1,483 ft) Petronas Twin Towers in Kuala Lumpur, which held the title for six years. The design and construction of skyscrapers involves creating safe, habitable spaces in very tall buildings. The buildings must support their weight, resist wind and earthquakes, and protect occupants from fire. Yet they must also be conveniently accessible, even on the upper floors, and provide utilities and a comfortable climate for the occupants. The problems posed in skyscraper design are considered among the most complex encountered given the balances required between economics , engineering , and construction management. One common feature of skyscrapers is a steel framework from which curtain walls are suspended, rather than load-bearing walls of conventional construction. Most skyscrapers have a steel frame that enables them to be built taller than typical load-bearing walls of reinforced concrete. Skyscrapers usually have a particularly small surface area of what are conventionally thought of as walls. Because the walls are not load-bearing, most skyscrapers are characterized by large surface areas of windows made possible by the steel frame and curtain wall. However, skyscrapers can also have curtain walls that mimic conventional walls and have a small surface area of windows.
The concept of a skyscraper is a product of the industrialized age , made possible by cheap fossil fuel derived energy and industrially refined raw materials such as steel and concrete . The construction of skyscrapers was enabled by steel frame construction, which began to surpass brick and mortar construction at the end of the 19th century and, together with reinforced concrete construction, finally displaced it in the 20th century as the price of steel decreased and labor costs increased. Steel frames become inefficient and uneconomic for supertall buildings because usable floor space is reduced by progressively larger supporting columns. [ 63 ] Since about 1960, tubular designs have been used for high rises. This reduces the usage of material (more efficient in economic terms – Willis Tower uses a third less steel than the Empire State Building) yet allows greater height. It allows fewer interior columns, and so creates more usable floor space. It further enables buildings to take on various shapes. Elevators are characteristic of skyscrapers. In 1852, Elisha Otis introduced the safety elevator, allowing convenient and safe passenger movement to upper floors. Another crucial development was the use of a steel frame instead of stone or brick; otherwise, the walls on the lower floors of a tall building would be too thick to be practical. Today, major manufacturers of elevators include Otis , ThyssenKrupp , Schindler , and KONE . Advances in construction techniques have allowed skyscrapers to narrow in width while increasing in height. Some of these new techniques include mass dampers to reduce vibrations and swaying, and gaps to allow air to pass through, reducing wind shear. [ 64 ] Good structural design is important in most building design, but particularly for skyscrapers since even a small chance of catastrophic failure is unacceptable given the tremendous damage such failure would cause. This presents a paradox to civil engineers : the only way to assure a lack of failure is to test for all modes of failure, in both the laboratory and the real world. But the only way to know of all modes of failure is to learn from previous failures. Thus, no engineer can be absolutely sure that a given structure will resist all loadings that could cause failure; instead, one can only have large enough margins of safety such that a failure is acceptably unlikely. When buildings do fail, engineers question whether the failure was due to some lack of foresight or due to some unknowable factor. The load a skyscraper experiences is largely from the force of the building material itself. In most building designs, the weight of the structure is much larger than the weight of the material that it will support beyond its own weight. In technical terms, the dead load , the load of the structure, is larger than the live load , the weight of things in the structure (people, furniture, vehicles, etc.). As such, the amount of structural material required within the lower levels of a skyscraper will be much larger than the material required within higher levels. This is not always visually apparent. The Empire State Building 's setbacks are actually a result of the building code at the time ( 1916 Zoning Resolution ), and were not structurally required. On the other hand, John Hancock Center 's shape is uniquely the result of how it supports loads.
Vertical supports can come in several types, among which the most common for skyscrapers can be categorized as steel frames, concrete cores, tube within tube design, and shear walls. The wind loading on a skyscraper is also considerable. In fact, the lateral wind load imposed on supertall structures is generally the governing factor in the structural design. Wind pressure increases with height, so for very tall buildings, the loads associated with wind are larger than dead or live loads. Other vertical and horizontal loading factors come from varied, unpredictable sources, such as earthquakes. By 1895, steel had replaced cast iron as skyscrapers' structural material. Its malleability allowed it to be formed into a variety of shapes, and it could be riveted, ensuring strong connections. [ 65 ] The simplicity of a steel frame eliminated the inefficient part of a shear wall, the central portion, and consolidated support members in a much stronger fashion by allowing both horizontal and vertical supports throughout. Among steel's drawbacks is that as height increases and more material must be supported, the distance between supporting members must decrease, which in turn increases the amount of material that must be supported. This becomes inefficient and uneconomic for buildings above 40 stories tall, as usable floor space is reduced by the supporting columns and by the greater usage of steel. [ 63 ] A new structural system of framed tubes was developed by Fazlur Rahman Khan in 1963. The framed tube structure is defined as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation". [ 66 ] [ 67 ] Closely spaced interconnected exterior columns form the tube. Horizontal loads (primarily wind) are supported by the structure as a whole. Framed tubes allow fewer interior columns, and so create more usable floor space, and about half the exterior surface is available for windows. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. Tube structures cut costs while allowing buildings to reach greater heights. Concrete tube-frame construction [ 46 ] was first used in the DeWitt-Chestnut Apartment Building , completed in Chicago in 1963, [ 68 ] and soon after in the John Hancock Center and World Trade Center . The tubular systems are fundamental to tall building design. Most buildings over 40 stories constructed since the 1960s now use a tube design derived from Khan's structural engineering principles, [ 63 ] [ 69 ] examples including the World Trade Center , Aon Center , Petronas Towers , Jin Mao Building , and most other supertall skyscrapers built since then. [ 46 ] The strong influence of tube structure design is also evident in the construction of the current tallest skyscraper, the Burj Khalifa , [ 49 ] which uses a buttressed core . [ 70 ] Trussed tube and X-bracing: Khan pioneered several other variations of the tube structure design. One of these was the concept of X-bracing , or the trussed tube , first employed for the John Hancock Center . This concept reduced the lateral load on the building by transferring the load into the exterior columns. This reduces the need for interior columns, thus creating more floor space.
This concept can be seen in the John Hancock Center, designed in 1965 and completed in 1969. One of the most famous buildings of the structural expressionist style, the skyscraper has a distinctive X-braced exterior that is actually a hint that the structure's skin is indeed part of its 'tubular system'. This idea is one of the architectural techniques the building used to climb to record heights (the tubular system is essentially the spine that helps the building stand upright during wind and earthquake loads ). This X-bracing allows for both higher performance from tall structures and the ability to open up the inside floorplan (and usable floor space) if the architect desires. The John Hancock Center was far more efficient than earlier steel-frame structures. Where the Empire State Building (1931) required about 206 kilograms of steel per square metre and 28 Liberty Street (1961) required 275, the John Hancock Center required only 145. [ 48 ] The trussed tube concept was applied to many later skyscrapers, including the Onterie Center , Citigroup Center and Bank of China Tower . [ 71 ] Bundled tube: An important variation on the tube frame is the bundled tube , which uses several interconnected tube frames. The Willis Tower in Chicago used this design, employing nine tubes of varying height to achieve its distinct appearance. The bundled tube structure meant that "buildings no longer need be boxlike in appearance: they could become sculpture." [ 49 ] Tube in tube: The tube-in-tube system takes advantage of core shear wall tubes in addition to exterior tubes. The inner tube and outer tube work together to resist gravity loads and lateral loads and to provide additional rigidity to the structure to prevent significant deflections at the top. This design was first used in One Shell Plaza . [ 72 ] Later buildings to use this structural system include the Petronas Towers . [ 73 ] Outrigger and belt truss: The outrigger and belt truss system is a lateral load resisting system in which the tube structure is connected to the central core wall with very stiff outriggers and belt trusses at one or more levels. [ 74 ] BHP House was the first building to use this structural system, followed by the First Wisconsin Center, since renamed U.S. Bank Center , in Milwaukee. The center rises 601 feet, with three belt trusses at the bottom, middle and top of the building. The exposed belt trusses serve aesthetic and structural purposes. [ 75 ] Later buildings to use this include the Shanghai World Financial Center . [ 74 ] Concrete tube structures: The last major buildings engineered by Khan were the One Magnificent Mile and Onterie Center in Chicago, which employed his bundled tube and trussed tube system designs respectively. In contrast to his earlier buildings, which were mainly steel, his last two buildings were concrete. His earlier DeWitt-Chestnut Apartments building, built in 1963 in Chicago, was also a concrete building with a tube structure. [ 46 ] Trump Tower in New York City is another example that adapted this system. [ 76 ] Shear wall frame interaction system: Khan developed the shear wall frame interaction system for mid- to high-rise buildings. This structural system uses combinations of shear walls and frames designed to resist lateral forces. [ 77 ] The first building to use this structural system was the 35-story Brunswick Building.
[ 75 ] The Brunswick Building (today known as the " Cook County Administration Building ") was completed in 1965 and became the tallest reinforced concrete structure of its time. The structural system of the Brunswick Building consists of a concrete shear wall core surrounded by an outer concrete frame of columns and spandrels. [ 78 ] Apartment buildings up to 70 stories high have successfully used this concept. [ 79 ] The invention of the elevator was a precondition for the invention of skyscrapers, given that most people would not (or could not) climb more than a few flights of stairs at a time. The elevators in a skyscraper are not simply a necessary utility, like running water and electricity, but are in fact closely related to the design of the whole structure: a taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core, which contains the elevator shafts, becomes too big, it can reduce the profitability of the building. Architects must therefore balance the value gained by adding height against the value lost to the expanding service core. [ 80 ] Many tall buildings use elevators in a non-standard configuration to reduce their footprint. Buildings such as the former World Trade Center Towers and Chicago's John Hancock Center use sky lobbies , where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space, however, and add to the amount of time spent commuting between floors. Other buildings, such as the Petronas Towers , use double-deck elevators , allowing more people to fit in a single elevator, and reaching two floors at every stop. It is possible to use even more than two levels on an elevator, although this has never been done. The main problem with double-deck elevators is that they cause everyone in the elevator to stop when only one person on one level needs to get off at a given floor. Buildings with sky lobbies include the World Trade Center , Petronas Twin Towers , Willis Tower and Taipei 101 . The 44th-floor sky lobby of the John Hancock Center also featured the first high-rise indoor swimming pool , which remains the highest in the United States. [ 81 ] Skyscrapers are usually situated in city centres where the price of land is high. Constructing a skyscraper becomes justified if the price of land is so high that it makes economic sense to build upward so as to minimize the cost of land per unit of total floor area. Thus the construction of skyscrapers is dictated by economics and results in clusters of skyscrapers in certain parts of a large city unless a building code restricts the height of buildings. Skyscrapers are rarely seen in small cities; they are characteristic of large cities, because of the critical importance of high land prices for the construction of skyscrapers. Usually only office, commercial and hotel users can afford the rents in the city center and thus most tenants of skyscrapers are of these classes. Today, skyscrapers are an increasingly common sight where land is expensive, as in the centres of big cities, because they provide such a high ratio of rentable floor space per unit area of land. Another disadvantage of very tall skyscrapers is the loss of usable floorspace, as many elevator shafts are needed to enable efficient vertical travel.
This led to the introduction of express lifts and sky lobbies where passengers transfer to slower distribution lifts. Constructing a single skyscraper requires large quantities of materials like steel, concrete, and glass, and these materials represent significant embodied energy . Skyscrapers are thus material- and energy-intensive buildings. Skyscrapers have considerable mass, requiring a stronger foundation than a shorter, lighter building. During construction, building materials must be lifted to the top of a skyscraper, requiring more energy than would be necessary at lower heights. Furthermore, a skyscraper consumes much electricity because potable and non-potable water have to be pumped to the highest occupied floors, skyscrapers are usually designed to be mechanically ventilated , elevators are generally used instead of stairs, and electric lights are needed in rooms far from the windows and in windowless spaces such as elevators, bathrooms and stairwells. Skyscrapers can be artificially lit and the energy requirements can be covered by renewable energy or other electricity generation with low greenhouse gas emissions . Heating and cooling of skyscrapers can be efficient, because of centralized HVAC systems, heat-radiation-blocking windows and the small surface area of the building. There is Leadership in Energy and Environmental Design (LEED) certification for skyscrapers. For example, the Empire State Building received a gold LEED rating in September 2011 and is the tallest LEED-certified building in the United States, [ 83 ] proving that skyscrapers can be environmentally friendly. The Gherkin in London , United Kingdom, is another example of an environmentally friendly skyscraper. [ citation needed ] In the lower levels of a skyscraper a larger percentage of the building floor area must be devoted to the building structure and services than is required for lower buildings: In low-rise structures, the support rooms ( chillers , transformers , boilers , pumps and air handling units ) can be put in basements or roof space, areas which have low rental value. There is, however, a limit to how far this plant can be located from the area it serves. The farther away it is, the larger the risers for ducts and pipes from this plant to the floors they serve, and the more floor area these risers take. In practice this means that in highrise buildings this plant is located on 'plant levels' at intervals up the building. The building sector accounts for approximately 50% of greenhouse gas emissions, with operational energy accounting for 80–90% of building-related energy use. [ 84 ] Operational energy use is affected by the magnitude of conduction between the interior and exterior, convection from infiltrating air, and radiation through glazing . The extent to which these factors affect the operational energy varies depending on the microclimate of the skyscraper, with increased wind speeds as the height of the skyscraper increases, and a decrease in the dry bulb temperature as the altitude increases. [ 84 ] For example, when moving from 1.5 meters to 284 meters, the dry bulb temperature decreased by 1.85 °C while the wind speed increased from 2.46 meters per second to 7.75 meters per second, which led to a 2.4% decrease in summer cooling with reference to the Freedom Tower in New York City.
However, for the same building it was found that the annual energy use intensity was 9.26% higher because the lack of shading at high altitudes increased the cooling loads for the remainder of the year, while a combination of temperature, wind, shading, and the effects of reflections led to a combined 13.13% increase in annual energy use intensity. [ 85 ] In a study performed by Leung and Ray in 2013, it was found that the average energy use intensity of a structure with between 0 and 9 floors was approximately 80 kBtu/ft²/yr, while the energy use intensity of a structure with more than 50 floors was about 117 kBtu/ft²/yr. Refer to Figure 1 [ where? ] to see the breakdown of how intermediate heights affect the energy use intensity. The slight decrease in energy use intensity over 30–39 floors can be attributed to the fact that the increase in pressure within the heating, cooling, and water distribution systems levels out at a point between 40 and 49 floors, allowing the energy savings due to the microclimate of higher floors to be seen. [ 86 ] There is a gap in the data; another study looking at the same information for taller buildings is needed. A portion of the operational energy increase in tall buildings is related to the usage of elevators, because the distance traveled and the speed at which they travel increase as the height of the building increases. Between 5 and 25% of the total energy consumed in a tall building is from the use of elevators . As the height of the building increases, elevator travel also becomes less efficient because of higher drag and friction losses. [ 87 ] The embodied energy associated with the construction of skyscrapers varies based on the materials used. Embodied energy is quantified per unit of material. Skyscrapers inherently have higher embodied energy than low-rise buildings due to the increase in material used as more floors are built. Figures 2 and 3 [ where? ] compare the total embodied energy of different floor types and the unit embodied energy per floor type for buildings with between 20 and 70 stories. For all floor types except steel-concrete floors, it was found that after 60 stories there was a decrease in unit embodied energy, but when considering all floors there was exponential growth due to a double dependence on height. The first dependence is that an increase in height leads to an increase in the quantity of materials used; the second is that an increase in height leads to an increase in the size of elements needed to provide the structural capacity of the building. A careful choice of building materials can likely reduce the embodied energy without reducing the number of floors constructed, within the bounds presented. [ 88 ] Similar to embodied energy, the embodied carbon of a building is dependent on the materials chosen for its construction. Figures 4 and 5 [ where? ] show the total embodied carbon for different structure types for increasing numbers of stories and the embodied carbon per square meter of gross floor area for the same structure types as the number of stories increases. Both methods of measuring the embodied carbon show that there is a point where the embodied carbon is lowest before increasing again as the height increases. For total embodied carbon, this point depends on the structure type, but is either around 40 stories or approximately 60 stories. For embodied carbon per square meter of gross floor area, the lowest value was found at either 40 stories or approximately 70 stories.
[ 89 ] In urban areas, the configuration of buildings can lead to exacerbated wind patterns and an uneven dispersion of pollutants . When the height of buildings surrounding a source of air pollution is increased, the size and occurrence of both "dead zones" (areas with almost no pollutants) and "hotspots" (areas with high concentrations of pollutants) increase. Figure 6 [ where? ] depicts the progression of Building F's height increasing from 0.0315 units in Case 1, to 0.2 units in Case 2, to 0.6 units in Case 3. This progression shows that as the height of Building F increases, the dispersion of pollutants decreases, but the concentration within the building cluster increases. The variation of velocity fields can be affected by the construction of new buildings as well, rather than solely the increase in height as shown in the figure. [ 90 ] As urban centers continue to expand upward and outward, the present velocity fields will continue to trap polluted air close to the tall buildings within the city. Specifically within major cities, a majority of air pollution is derived from transportation, whether it be cars, trains, planes, or boats. As urban sprawl continues and pollutants continue to be emitted, the air pollutants will continue to be trapped within these urban centers. [ 91 ] Different pollutants can be detrimental to human health in different ways. For example, particulate matter from vehicular exhaust and power generation can cause asthma, bronchitis, and cancer, while nitrogen dioxide from motor engine combustion processes can cause neurological dysfunction and asphyxiation. [ 92 ] As with all other buildings, if special measures are taken to incorporate sustainable design methods early in the design process, it is possible to obtain a green building rating, such as a Leadership in Energy and Environmental Design (LEED) certification. An integrated design approach is crucial in making sure that design decisions that positively impact the whole building are made at the beginning of the process. Because of the massive scale of skyscrapers, the decisions made by the design team must take all factors into account, including the building's impact on the surrounding community, its effect on the direction in which air and water move, and the impact of the construction process. There are several design methods that could be employed in the construction of a skyscraper that would take advantage of the height of the building. [ 93 ] The microclimates that exist as the height of the building increases can be taken advantage of to increase natural ventilation , decrease the cooling load, and increase daylighting. Natural ventilation can be increased by utilizing the stack effect , in which warm air moves upward and increases the movement of the air within the building. Buildings utilizing the stack effect must take extra care to design for fire separation, as the stack effect can also exacerbate the severity of a fire. [ 94 ] Skyscrapers are considered to be internally dominated buildings because of their size as well as the fact that a majority are used as some sort of office building with high cooling loads. Due to the microclimate created at the upper floors by the increased wind speed and the decreased dry bulb temperatures, the cooling load will naturally be reduced because of infiltration through the thermal envelope.
By taking advantage of the naturally cooler temperatures at higher altitudes, skyscrapers can reduce their cooling loads passively. On the other side of this argument is the lack of shading at higher altitudes by other buildings, so the solar heat gain will be larger for higher floors than for floors at the lower end of the building. Special measures should be taken to shade upper floors from sunlight during the overheated period to ensure thermal comfort without increasing the cooling load. [ 86 ] At the beginning of the 20th century, New York City was a center for the Beaux-Arts architectural movement, attracting the talents of such great architects as Stanford White and Carrere and Hastings . As better construction and engineering technology became available as the century progressed, New York City and Chicago became the focal point of the competition for the tallest building in the world. Each city's striking skyline has been composed of numerous and varied skyscrapers, many of which are icons of 20th-century architecture. Momentum in setting records passed from the United States to other nations with the opening of the Petronas Twin Towers in Kuala Lumpur, Malaysia, in 1998. The record for the world's tallest building has remained in Asia since the opening of Taipei 101 in Taipei, Taiwan, in 2004. A number of architectural records, including those of the world's tallest building and tallest free-standing structure, moved to the Middle East with the opening of the Burj Khalifa in Dubai, United Arab Emirates. This geographical transition is accompanied by a change in approach to skyscraper design. For much of the 20th century large buildings took the form of simple geometrical shapes. This reflected the "international style" or modernist philosophy shaped by Bauhaus architects early in the century. The last of these, the Willis Tower in Chicago and the World Trade Center towers in New York, erected in the 1970s, reflect that philosophy. Tastes shifted in the decade which followed, and new skyscrapers began to exhibit postmodernist influences. This approach to design avails itself of historical elements, often adapted and re-interpreted, in creating technologically modern structures. The Petronas Twin Towers recall Asian pagoda architecture and Islamic geometric principles. Taipei 101 likewise reflects the pagoda tradition as it incorporates ancient motifs such as the ruyi symbol. The Burj Khalifa draws inspiration from traditional Islamic art . Architects in recent years [ when? ] have sought to create structures that would not appear equally at home if set in any part of the world, but that reflect the culture thriving in the spot where they stand. [ citation needed ] The following list measures the height of the roof, not the pinnacle. [ 109 ] [ failed verification ] The more common gauge is the "highest architectural detail"; such a ranking would have included the Petronas Towers, built in 1996. Proposals for kilometre-tall structures have been put forward, including the Burj Mubarak Al Kabir in Kuwait and Azerbaijan Tower in Baku . Kilometer-plus structures present architectural challenges that may eventually place them in a new architectural category. [ 110 ] The first building under construction and planned to be over one kilometre tall is the Jeddah Tower . Several wooden skyscrapers have been designed and built. A 14-story housing project in Bergen, Norway, known as 'Treet' or 'The Tree', became the world's tallest wooden apartment block when it was completed in late 2015.
[ 112 ] The Tree's record was eclipsed by Brock Commons , an 18-story wooden dormitory at the University of British Columbia in Canada , when it was completed in September 2016. [ 113 ] A 40-story residential building, 'Trätoppen', has been proposed by architect Anders Berensson to be built in Stockholm, Sweden . [ 114 ] Trätoppen would be the tallest building in Stockholm, though there are no immediate plans to begin construction. [ 115 ] The tallest currently planned wooden skyscraper is the 70-story W350 Project in Tokyo, to be built by the Japanese wood products company Sumitomo Forestry Co. to celebrate its 350th anniversary in 2041. [ 116 ] An 80-story wooden skyscraper, the River Beech Tower, has been proposed by a team including architects Perkins + Will and the University of Cambridge . The River Beech Tower, on the banks of the Chicago River in Chicago, Illinois , would be 348 feet shorter than the W350 Project despite having 10 more stories. [ 117 ] [ 116 ] Wooden skyscrapers are estimated to be around a quarter of the weight of an equivalent reinforced-concrete structure and to reduce the building's carbon footprint by 60–75%. Buildings have been designed using cross-laminated timber (CLT), which gives greater rigidity and strength to wooden structures. [ 118 ] CLT panels are prefabricated and can therefore save on building time. [ 119 ]
https://en.wikipedia.org/wiki/Skyscraper
In geometry , a slab is a region between two parallel lines in the Euclidean plane , [ 1 ] or between two parallel planes in three-dimensional Euclidean space , or between two hyperplanes in higher dimensions. [ 2 ] A slab can also be defined as a set of points: [ 3 ] $\{x \in \mathbb{R}^{n} \mid \alpha \leq n \cdot x \leq \beta\}$, where $n$ is the normal vector of the planes $n \cdot x = \alpha$ and $n \cdot x = \beta$. Or, if the slab is centered around the origin: [ 4 ] $\{x \in \mathbb{R}^{n} \mid |n \cdot x| \leq \theta/2\}$, where $\theta = |\alpha - \beta|$ is the thickness of the slab. This geometry-related article is a stub . You can help Wikipedia by expanding it .
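The set-builder definition above translates directly into a membership test. The following Python sketch is illustrative only (the function names and the example normal vector are not from the article): it checks whether a point lies in the slab between the planes n·x = α and n·x = β, and in the origin-centered slab of thickness θ.

```python
import numpy as np

def in_slab(x, n, alpha, beta):
    """Return True if point x lies in the slab {x : alpha <= n.x <= beta}."""
    return alpha <= np.dot(n, x) <= beta

def in_centered_slab(x, n, theta):
    """Return True if x lies in the origin-centered slab {x : |n.x| <= theta/2}."""
    return abs(np.dot(n, x)) <= theta / 2

# Example: the slab between the planes z = 0 and z = 1 in R^3.
n = np.array([0.0, 0.0, 1.0])                               # normal vector to both planes
print(in_slab(np.array([2.0, -3.0, 0.5]), n, 0.0, 1.0))     # True: n.x = 0.5 lies in [0, 1]
print(in_slab(np.array([0.0, 0.0, 1.5]), n, 0.0, 1.0))      # False: n.x = 1.5 exceeds beta
print(in_centered_slab(np.array([2.0, -3.0, 0.0]), n, 1.0)) # True: |n.x| = 0 <= theta/2
```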
https://en.wikipedia.org/wiki/Slab_(geometry)
The NCR 315 Data Processing System, released in January 1962 by NCR, [ 1 ] is a second-generation computer. All printed circuit boards use resistor–transistor logic (RTL) to create the various logic elements. It uses a 12-bit "slab" memory structure built from magnetic-core memory. The instructions can use a memory slab as either two 6-bit alphanumeric characters or as three 4-bit BCD digits. Basic memory is 5000 "slabs" (10,000 characters or 15,000 decimal digits) of handmade core memory, which is expandable to a maximum of 40,000 slabs (80,000 characters or 120,000 decimal digits) in four refrigerator-size cabinets. The main processor includes three cabinets and a console section that houses the power supply, keyboard, output writer (an IBM electric typewriter), and a panel with lights that indicate the current status of the program counter, registers, arithmetic accumulator, and system errors. Input/Output is by direct parallel connections to each type of peripheral through a two-cable bundle with 1-inch-thick cables. Some devices like magnetic tape and the CRAM are daisy-chained to allow multiple drives to be connected. The central processor (315 Data Processor) weighed about 1,325 pounds (601 kg). [ 2 ] Later models in this series include the 315-100 and the 315-RMC (Rod Memory Computer). The addressable unit of memory on the NCR 315 series is a "slab", short for "syllable", consisting of 12 data bits and a parity bit. [ 3 ] [ 4 ] Its size falls between a byte and a typical word (hence the name, "syllable"). [ 5 ] A slab may contain three digits (with at sign, comma, space, ampersand, point, and minus treated as digits) or two alphabetic characters of six bits each. A slab may contain a decimal value from -99 to +999. A numeric value contains up to eight slabs. If the value is negative then the minus sign is the leftmost digit of this field. There are instructions to transform digits to or from alphanumeric characters. These commands use the accumulator, which has a maximum length of eight slabs. To accelerate processing, the accumulator works with an effective length. [ 5 ] The NCR 315-100 is the second version of the original 315. It too has a 6-microsecond clock cycle, and from 10,000 to 40,000 slabs of memory. [ 3 ] The 315-100 series console I/O incorporates a Teletype printer and keyboard in place of the original 315's IBM typewriter. The primary difference between the older NCR 315 and the 315-100 was the inclusion of the Automatic Recovery Option (ARO). One of the problems with early generations of computers was that when a memory or program error occurred, the system would simply turn on a red light and halt. The normal recovery process was to copy all register and counter settings from the console light panel, and to restart the program that was running at the time of the error, usually from the very beginning of the program. The upgrade to the 315 required the removal of approximately 1,800 wire-wrapped connections on the backplane, and the installation of approximately 2,400 new point-to-point wired connections. The NCR 315-RMC, released in July 1965, was the first commercially available computer to employ thin-film memory. This reduced the clock cycle time to 800 nanoseconds. It also included floating-point logic to allow scientific calculations, while retaining the same instruction set as the previous NCR 315 and NCR 315-100. The thin film is wrapped around "rods" to allow faster reading and writing of memory. The follow-on to the 315-RMC was the NCR Century series.
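To make the 12-bit "slab" layout described above concrete, here is a small Python sketch (an illustration of the packing idea only, not NCR documentation): it packs a slab either as two 6-bit character codes or as three 4-bit BCD digits.

```python
def pack_chars(c1, c2):
    """Pack two 6-bit character codes (0..63) into one 12-bit slab."""
    assert 0 <= c1 < 64 and 0 <= c2 < 64
    return (c1 << 6) | c2

def pack_bcd(d1, d2, d3):
    """Pack three 4-bit BCD digits (0..9) into one 12-bit slab."""
    assert all(0 <= d <= 9 for d in (d1, d2, d3))
    return (d1 << 8) | (d2 << 4) | d3

def unpack_bcd(slab):
    """Recover the three BCD digits from a 12-bit slab."""
    return (slab >> 8) & 0xF, (slab >> 4) & 0xF, slab & 0xF

slab = pack_bcd(3, 1, 5)
print(f"{slab:012b}", unpack_bcd(slab))   # 001100010101 (3, 1, 5)
```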
https://en.wikipedia.org/wiki/Slab_(unit)
In electrical power systems, a slack bus (or swing bus), defined as a Vδ bus, is used to balance the active power |P| and reactive power |Q| in a system while performing load flow studies. The slack bus is used to provide for system losses by emitting or absorbing active and/or reactive power to and from the system. For power systems engineers, a load flow study explains the power system conditions at various intervals during operation. It aims to minimize the difference between the calculated and actual quantities. Here, the slack bus can contribute to the minimization by having an unconstrained real and reactive power input. The net injected power and the power mismatches at bus i are P_i = P_Gi − P_Li, ΔP_i = P_i − P_i,calc, and ΔQ_i = Q_i − Q_i,calc. The use of a slack bus has an inherent disadvantage when dealing with uncertain input variables: the slack bus must absorb all uncertainties arising from the system and thus must have the widest possible nodal power distributions. Even moderate amounts of uncertainty in a large system may allow the resulting distributions to contain values beyond the slack bus's margins. A load flow approach able to directly incorporate uncertainties into the solution processes can be very useful. The results from such analyses give solutions over the range of the uncertainties, i.e., solutions that are sets of values or regions instead of single values. Buses are of three types: load (PQ) buses, generator (PV) buses, and the slack bus. The slack bus provides or absorbs active and reactive power to and from the transmission line to provide for losses, since these variables are unknown until the final solution is established. The slack bus is the only bus for which the system reference phase angle is defined. From this, the various angular differences can be calculated in the power flow equations. If a slack bus is not specified, then a generator bus with maximum real power |P| acts as the slack bus. A given scheme can involve more than one slack bus. The most common formulation of the load flow problem specifies all input variables (PQ at loads, PV at generators) as deterministic values. Each set of specified values corresponds to one system state, which depends on a set of system conditions. When those conditions are uncertain, numerous scenarios must be analyzed. A classic load flow analysis consists of calculating voltage magnitude and phase angle at the buses, as well as the active and reactive line flows for the specified terminal (or bus) conditions. Four variables are associated with each bus: voltage magnitude |V|, phase angle δ, real power P, and reactive power Q. Based on which two of these variables are specified, a bus may be classified into the above-mentioned three categories: at a load (PQ) bus, P and Q are specified; at a generator (PV) bus, P and |V| are specified; at the slack bus, |V| and δ are specified. Real and reactive powers (i.e. complex power) cannot be fixed at every bus. The net complex power flow into the network is not known in advance, and the system power losses are unknown until the study is complete. It is necessary to have one bus (i.e. the slack bus) at which complex power is unspecified, so that it supplies the difference between the total system load plus losses and the sum of the complex powers specified at the remaining buses. The complex power allocated to this bus is computed as part of the solution. In order for the variations in real and reactive powers of the slack bus to be a small percentage of its generating capacity during the solution process, the bus connected to the largest generating station is normally selected as the slack bus.
The slack bus is crucial to a load flow problem since it accounts for transmission line losses. In a load flow problem, conservation of energy requires the total generation to equal the sum of the loads. However, there would still be a discrepancy between these quantities due to line losses, which depend on the line currents. Yet to determine a line current, the angles and voltages of the buses connected to the line are needed. Here, the slack bus is required to account for line losses and to serve as a generator, injecting real power into the system. The solution requires mathematical formulation and numerical solution. Since load flow problems produce systems of non-linear equations that cannot be solved analytically, iterative numerical methods are required. Commonly used algorithms include the Gauss–Seidel method, the Newton–Raphson method, and the fast-decoupled load flow method.
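As a rough illustration of why the slack bus is needed, the following Python sketch (a toy two-bus example with invented per-unit values, not a full load flow solver) computes the complex power injected at each bus from a bus admittance matrix and a set of solved voltages; the slack-bus injection covers the load plus the line losses.

```python
import numpy as np

# Two-bus system: bus 0 is the slack bus, bus 1 is a load bus,
# connected by a line with series impedance z (per unit, values invented).
z = 0.01 + 0.05j
y = 1 / z
Ybus = np.array([[y, -y],
                 [-y, y]])            # bus admittance matrix

# Suppose a power-flow solver has produced these bus voltages (per unit).
V = np.array([1.00 + 0.00j, 0.98 - 0.02j])

# Complex power injected at each bus: S_i = V_i * conj( sum_k Ybus[i,k] * V_k )
S = V * np.conj(Ybus @ V)

print("Injection at slack bus:", S[0])   # supplied by the slack generator
print("Injection at load bus :", S[1])   # negative real part = consumption
print("Total (= line losses) :", S.sum())  # the slack bus makes up the losses
```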
https://en.wikipedia.org/wiki/Slack_bus
Slag, as a general term, may be a by-product or co-product of smelting (pyrometallurgical) ores and recycled metals, depending on the type of material being produced. [ 1 ] Slag is mainly a mixture of metal oxides and silicon dioxide. Broadly, it can be classified as ferrous (co-products of processing iron and steel), ferroalloy (a by-product of ferroalloy production) or non-ferrous / base metals (by-products of recovering non-ferrous materials like copper, nickel, zinc and phosphorus). [ 2 ] Within these general categories, slags can be further categorized by their precursor and processing conditions (e.g., blast furnace slags, air-cooled blast furnace slag, granulated blast furnace slag, basic oxygen furnace slag, and electric arc furnace (EAF) slag). Slag generated from the EAF process can contain toxic metals, which can be hazardous to human and environmental health. [ 3 ] Due to the large demand for ferrous, ferroalloy, and non-ferrous materials, slag production has increased throughout the years despite recycling (most notably in the iron and steelmaking industries) and upcycling efforts. The World Steel Association (WSA) estimates that 600 kg of co-materials (co-products and by-products), of which about 90 wt% is slag, are generated per tonne of steel produced. [ 5 ] Slag is usually a mixture of metal oxides and silicon dioxide. However, slags can also contain metal sulfides and elemental metals. Note that the oxide form may or may not be present once the molten slag solidifies and forms amorphous and crystalline components. The major components of these slags include the oxides of calcium, magnesium, silicon, iron, and aluminium, with lesser amounts of manganese, phosphorus, and others depending on the specifics of the raw materials used. Furthermore, slag can be classified based on the abundance of iron among other major components. [ 1 ] In nature, iron, copper, lead, nickel, and other metals are found in impure states called ores, often oxidized and mixed in with silicates of other metals. During smelting, when the ore is exposed to high temperatures, these impurities are separated from the molten metal and can be removed. Slag is the collection of compounds that are removed. In many smelting processes, oxides are introduced to control the slag chemistry, assisting in the removal of impurities and protecting the furnace refractory lining from excessive wear. In this case, the slag is termed synthetic. A good example is steelmaking slag: quicklime (CaO) and magnesite (MgCO 3 ) are introduced for refractory protection, neutralizing the alumina and silica separated from the metal, and assisting in the removal of sulfur and phosphorus from the steel. As a co-product of steelmaking, slag is typically produced either through the blast furnace – oxygen converter route or the electric arc furnace – ladle furnace route. [ 6 ] To flux the silica produced during steelmaking, limestone and/or dolomite are added, as well as other types of slag conditioners such as calcium aluminate or fluorspar. There are three types of slag: ferrous, ferroalloy, and non-ferrous slags, which are produced through different smelting processes. Ferrous slags are produced in different stages of the iron and steelmaking processes, resulting in varying physicochemical properties. Additionally, the rate of cooling of the slag material affects its degree of crystallinity, further diversifying its range of properties.
For example, slow-cooled blast furnace slags (or air-cooled slags) tend to have more crystalline phases than quenched blast furnace slags (ground granulated blast furnace slags), making them denser and better suited for use as aggregate. They may also have a higher free calcium oxide and magnesium oxide content, which is often converted to the hydrated forms when excessive volume expansion is not desired. On the other hand, water-quenched blast furnace slags have a greater proportion of amorphous phases, giving them latent hydraulic properties (as discovered by Emil Langen in 1862) similar to Portland cement. [ 7 ] During the smelting of iron, ferrous slag is created, dominated by calcium and silicon compounds. Ferrous slag can be broken down into blast furnace slag (produced from the iron oxides of molten iron) and steel slag (which forms when steel scrap and molten iron are combined). The major phases of ferrous slag contain calcium-rich olivine-group silicates and melilite-group silicates. Slag from steel mills in ferrous smelting is designed to minimize iron loss, so it still contains a significant amount of iron, followed by oxides of calcium, silicon, magnesium, and aluminium. As the slag is cooled by water, several chemical reactions (such as oxidation) take place within the slag from a temperature of around 2,600 °F (1,430 °C). [ 1 ] Based on a case study at the Hopewell National Historical Site in Berks and Chester counties, Pennsylvania, US, ferrous slag usually contains lower concentrations of various trace elements than non-ferrous slag. However, some of them, such as arsenic (As), iron, and manganese, can accumulate in groundwater and surface water to levels that can exceed environmental guidelines. [ 1 ] Non-ferrous slag is produced from non-ferrous metals of natural ores. Non-ferrous slag can be characterized as copper, lead, or zinc slag according to the ore's composition, and it has more potential to impact the environment negatively than ferrous slag. The smelting of copper, lead and bauxite in non-ferrous smelting, for instance, is designed to remove the iron and silica that often occur with those ores, and separates them as iron-silicate-based slags. [ 1 ] Copper slag, the waste product of smelting copper ores, was studied at an abandoned Penn Mine in California, US. For six to eight months per year, this region is flooded and becomes a reservoir for drinking water and irrigation. Samples collected from the reservoir showed concentrations of cadmium (Cd) and lead (Pb) that exceeded regulatory guidelines. [ 1 ] Slags can serve other purposes, such as assisting in the temperature control of the smelting, and minimizing any re-oxidation of the final liquid metal product before the molten metal is removed from the furnace and used to make solid metal. In some smelting processes, such as ilmenite smelting to produce titanium dioxide, the slag can be the valuable product. [ 8 ] During the Bronze Age of the Mediterranean area there were a vast number of different metallurgical processes in use. A slag by-product of such workings was a colorful, glassy material found on the surfaces of slag from ancient copper foundries. It was primarily blue or green and was formerly chipped away and melted down to make glassware products and jewelry. It was also ground into powder to add to glazes for use in ceramics. Some of the earliest such uses for the by-products of slag have been found in ancient Egypt.
[ 9 ] Historically, the re-smelting of iron ore slag was common practice, as improved smelting techniques permitted greater iron yields—in some cases exceeding that which was originally achieved. During the early 20th century, iron ore slag was also ground to a powder and used to make agate glass, also known as slag glass. Use of slags in the construction industry dates back to the 1800s, when blast furnace slags were used to build roads and as railroad ballast. During this time, slag was also used as an aggregate and began to be integrated into the cement industry as a geopolymer. [ 10 ] Today, ground granulated blast furnace slags are used in combination with Portland cement to create "slag cement". Granulated blast furnace slags react with portlandite ( Ca(OH) 2 ), which is formed during cement hydration, via the pozzolanic reaction to produce cementitious properties that primarily contribute to the later strength gain of concrete. This leads to concrete with reduced permeability and better durability. Careful consideration of the slag type used is required, as a high calcium oxide and magnesium oxide content can lead to excessive volume expansion and cracking in concrete. [ 11 ] These hydraulic properties have also been used for soil stabilization in road and railroad construction. [ 12 ] Granulated blast furnace slag is used in the manufacture of high-performance concretes, especially those used in the construction of bridges and coastal features, where its low permeability and greater resistance to chlorides and sulfates can help to reduce corrosive action and deterioration of the structure. [ 13 ] Slag can also be used to create fibers used as an insulation material called slag wool. Slag is also used as aggregate in asphalt concrete for paving roads. A 2022 study in Finland found that road surfaces containing ferrochrome slag release a highly abrasive dust that has caused car parts to wear at significantly greater than normal rates. [ 14 ] Dissolution of slags generates alkalinity that can be used to precipitate out metals, sulfates, and excess nutrients (nitrogen and phosphorus) in wastewater treatment. Similarly, ferrous slags have been used as soil conditioners to rebalance soil pH and as fertilizers supplying calcium and magnesium. [ 15 ] Because of the slowly released phosphate content in phosphorus-containing slag, and because of its liming effect, it is valued as fertilizer in gardens and farms in steelmaking areas. However, the most important application is construction. [ 16 ] Slags have one of the highest carbonation potentials among industrial alkaline wastes due to their high calcium oxide and magnesium oxide content, inspiring further studies to test their feasibility in CO 2 capture and storage ( CCS ) methods (e.g., direct aqueous sequestration, dry gas-solid carbonation, among others). [ 17 ] [ 18 ] Across these CCS methods, slags can be transformed into precipitated calcium carbonates to be used in the plastics and concrete industries, and leached for metals to be used in the electronics industry. [ 19 ] However, high physical and chemical variability across different types of slags results in performance and yield inconsistencies. [ 20 ] Moreover, stoichiometry-based calculation of the carbonation potential can lead to overestimates that further obscure the material's true potential.
[ 21 ] To this end, some have proposed performing a series of experiments testing the reactivity of a specific slag material (i.e., its dissolution) or using topological constraint theory (TCT) to account for its complex chemical network. [ 22 ] Slags are transported along with slag tailings to "slag dumps", where they are exposed to weathering, with the possibility of toxic elements leaching and hyperalkaline runoff entering the soil and water, endangering the local ecological communities. Leaching concerns are typically around non-ferrous or base metal slags, which tend to have higher concentrations of toxic elements. However, ferrous and ferroalloy slags may also contain them, which raises concerns about highly weathered slag dumps and upcycled materials. [ 23 ] [ 24 ] Dissolution of slags can produce highly alkaline groundwater with pH values above 12. [ 25 ] The calcium silicates in slags react with water to produce calcium hydroxide, which leads to a higher concentration of hydroxide ions (OH−) in groundwater. This alkalinity promotes the mineralization of dissolved CO 2 (from the atmosphere) to produce calcite (CaCO 3 ), which can accumulate to a thickness of as much as 20 cm. This can also lead to the dissolution of other metals in slag, such as iron (Fe), manganese (Mn), nickel (Ni), and molybdenum (Mo), which become insoluble in water and mobile as particulate matter. The most effective method to detoxify alkaline groundwater discharge is air sparging. [ 25 ] Fine slags and slag dusts generated from milling slags to be recycled into the smelting process or upcycled in a different industry (e.g. construction) can be carried by the wind, affecting a larger ecosystem. They can be ingested and inhaled, posing a direct health risk to communities near the plants, mines, disposal sites, etc. [ 23 ] [ 24 ]
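The stoichiometry-based estimate of carbonation potential mentioned above can be sketched in a few lines of Python. The calculation below simply converts CaO and MgO mass fractions into a theoretical upper bound on CO2 uptake using molar-mass ratios; the composition values are invented for illustration, and, as the text notes, real slags take up less CO2 because only part of these oxides is reactive.

```python
# Molar masses in g/mol
M_CO2, M_CaO, M_MgO = 44.01, 56.08, 40.30

def theoretical_co2_uptake(cao_wt_pct, mgo_wt_pct):
    """Upper-bound CO2 uptake in g CO2 per 100 g of slag, assuming every mole
    of CaO and MgO is converted to CaCO3 and MgCO3 respectively."""
    return cao_wt_pct * (M_CO2 / M_CaO) + mgo_wt_pct * (M_CO2 / M_MgO)

# Hypothetical steel slag composition (wt%), for illustration only.
uptake = theoretical_co2_uptake(cao_wt_pct=45.0, mgo_wt_pct=8.0)
print(f"{uptake:.1f} g CO2 per 100 g slag")   # ~44.1; actual uptake is lower
```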
https://en.wikipedia.org/wiki/Slag
The SLAPD (Standalone LDAP Daemon) and SLURPD (Stand-alone LDAP update replication daemon) originally evolved within the long-running project that developed the LDAP protocol. [ 1 ] [ 2 ] It was developed at the University of Michigan, and was the first Lightweight Directory Access Protocol (LDAP) software. [ 3 ] Today, many LDAP server implementations are derived from the code base of the original SLAPD or from evolutions of it. Tim Howes of the University of Michigan, Steve Kille of Isode Limited, Wengyik Yeong of Performance Systems International and Colin Robbins of Nexor authored the original LDAP specification. [ 4 ] [ 5 ] In 1993, initial implementations of the LDAP standard were made by Howes at the University of Michigan, in the form of LDAPD, a proxy for the Quipu X.500 directory, and SLAPD. In 1996 Netscape Communications Corporation hired several of the project's developers, who then worked on what became known as the Netscape Directory Server. [ 3 ] This software article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Slapd
The slashed zero is a representation of the Arabic digit zero ("0") with a slash through it. This variant zero glyph is often used to distinguish the digit zero from the Latin script letter O anywhere that the distinction needs emphasis, particularly in encoding systems, scientific and engineering applications, computer programming (such as software development), and telecommunications. It thus helps to differentiate characters that would otherwise be homoglyphs. It was commonly used during the punch card era, when programs were typically written out by hand, to avoid ambiguity when the character was later typed on a card punch. The slashed zero is used in a number of fields in order to avoid confusion with the letter "O". It is used by computer programmers, in recording amateur radio call signs and in military radio, as logs of such contacts tend to contain both letters and numerals. The slashed zero was used on teleprinter circuits for weather applications. In this usage it was sometimes called communications zero. [ 1 ] The slashed zero can be used in stoichiometry to avoid confusion with the symbol for oxygen (capital O). The slashed zero is also used in charting and documenting in the medical and healthcare fields to avoid confusion with the letter "O". It also denotes an absence of something (similar to the usage of an "empty set" character), such as a sign or a symptom. Slashed zeros are used on New Zealand number plates. [ 2 ] The slashed zero predates computers, and is known to have been used in the twelfth and thirteenth centuries. [ 3 ] In the days of the typewriter, there was no key for the slashed zero. Typists could generate it by first typing either an uppercase "O" or a zero, then a backspace, followed by the slash key. The result would look very much like a slashed zero. It is used in many Baudot teleprinter applications, specifically the keytop and type pallet that combine "P" and the slashed zero. [ 4 ] Additionally, the slashed zero is used in many ASCII graphic sets descended from the default typewheel on the Teletype Model 33. [ 5 ] The use of the slashed zero by many computer systems of the 1970s and 1980s inspired the 1980s space rock band Underground Zerø to use the Scandinavian vowel ø, in the style of a heavy metal umlaut, in the band's name and as the band logo on all their album covers. Along with the Westminster, MICR, and OCR-A fonts, the slashed zero became one of the things associated with hacker culture in the 1980s. Some cartoons depicted computer users talking in binary code with 1s and 0s using a slashed zero for the 0. Slashed zeroes have been used in the Flash-based artwork of Young-Hae Chang Heavy Industries, notably in their 2003 work, Operation Nukorea. The reason for their use is unknown, but has been conjectured to be related to themes of "negation, erasure, and absence". [ 6 ] The slashed zero has the disadvantage that it can be confused with several other symbols; see the disambiguation page for the symbol Ø for a comprehensive listing. In Unicode, the slashed zero is considered a standardized typographic variation of the Arabic digit zero 0, which is code point U+0030. Appending Variation Selector 1 (U+FE00) after the zero requests the "short diagonal stroked form"; [ 7 ] in a supporting renderer this produces 0︀. Note that the above should not be confused with the slashed-zero variant of the empty set symbol, ∅, as popularized by Donald Knuth's TeX.
[ 8 ] Unicode represents that character as the empty set (∅) with variation selector 1. [ 7 ] Prior to Unicode 9.0, there was no code point defined for altering the visual appearance of zero. This meant that the slashed zero glyph was displayed for U+0030 only, and then always, in fonts whose designer chose the option. Successful display on a particular local system depended on making sure that such a font was available — either via the system's font files or via font embedding — and selected. (See also, Combining solidus below.) In HTML, the slashed zero can be enabled with the CSS property font-variant-numeric: slashed-zero or alternatively font-feature-settings: 'zero'. If the font supports the OpenType feature tag zero, a slashed zero will be substituted. [ 9 ] [ 10 ] [ 11 ] In most typographic designs, the slash of a slashed zero usually does not extend past the ellipse. Compare this to the Scandinavian vowel "Ø", the "empty set" symbol "∅" and the diameter symbol ⌀. A convention common on early line printers left the zero unornamented but added a tail or hook to the letter O so that it resembled an inverted Q (like U+213A ℺) or a cursive capital letter O (𝒪). [ 12 ] In the Fixedsys typeface, the numeral 0 has two internal barbs along the lines of the slash. This appears much like a white "S" within the black borders of the zero. In the FE-Schrift typeface, used on German car license plates, the zero is rectangular and has an "insinuated" slash: a diagonal crack just beneath the top right curve. Typefaces commonly found on personal computers that use the slashed zero include: Dotted zero typefaces: The zero with a dot in the center seems to have originated as an option on IBM 3270 display controllers. The dotted zero may appear similar to the Greek letter theta (particularly capital theta, Θ), but the two have different glyphs. In raster fonts, the theta usually has a horizontal line connecting, or nearly touching, the sides of an O; while the dotted zero simply has a dot in the middle. However, on a low-definition display, such a form can be confused with the numeral 8. In some fonts the IPA letter for a bilabial click (ʘ) looks similar to the dotted zero. Alternatively, the dot can become a vertical trace, for example by adding a "combining short vertical line overlay" (U+20D3). It may be coded as 0&#x20D3; giving 0⃓. A dotted zero has been used on vehicle registration plates in Slovakia since 2023. [ 17 ] IBM (and a few other early mainframe makers) used a convention in which the letter O had a slash and the digit 0 did not. [ 18 ] This is even more problematic for Danes, Faroese, and Norwegians because it means two of their letters — the O and slashed O (Ø) — are visually similar. This was later flipped, and most mainframe chain or band printers used the opposite convention (letter O printed as is, and digit zero printed with a slash). This was the de facto standard from the 1970s to the 1990s. However, the current use of network laser printers that use PC-style fonts has caused the demise of the slashed zero in most companies; only a few configured their laser printers to use it. Before Unicode standardized the slashed variation of zero (0︀) seen above, it did allow composite characters, which were used historically to obtain a crude typographic approximation where a slash is drawn upon a zero.
It is treated literally as "a zero that is slashed", and it is coded as two characters: a standard zero 0 followed by either "combining short solidus overlay" U+0337 or "combining long solidus overlay" U+0338. However, besides confusing the meaning of the digit zero, this approach will make a mess if the zero is already slashed in the font. There is no way to specify an unslashed zero that can always be safely overprinted. Laying a slash over the letter O also risks wrong appearance and confusion. For example, the "long solidus" form, which may be written in HTML as 0&#x338;, appears as 0̸. Using the "short solidus overlay" U+0337 after a standard zero character is coded as 0&#x337; and produces the following: 0̷. Some Burroughs / Unisys equipment displays a zero with a reversed slash, similar to the no symbol, 🛇, as does the free typeface Atkinson Hyperlegible.
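The Unicode mechanisms discussed above can be exercised directly from Python. This short sketch (output appearance depends entirely on the font and renderer, as the article notes) constructs the variation-selector form and the combining-overlay forms of the digit zero.

```python
# Ways of requesting a slashed-zero appearance in Unicode text.
plain          = "0"
variation_form = "0\ufe00"   # U+0030 + U+FE00: "short diagonal stroked form"
short_overlay  = "0\u0337"   # U+0030 + combining short solidus overlay
long_overlay   = "0\u0338"   # U+0030 + combining long solidus overlay

for label, s in [("plain", plain),
                 ("variation selector", variation_form),
                 ("short solidus overlay", short_overlay),
                 ("long solidus overlay", long_overlay)]:
    # The code points are what matter; how they render depends on the font.
    print(f"{label:22s} {s!r:14s} -> {s}")
```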
https://en.wikipedia.org/wiki/Slashed_zero
Astral Codex Ten (ACX), formerly Slate Star Codex (SSC), is a blog focused on science, medicine (especially psychiatry), philosophy, politics, and futurism. The blog is written by Scott Alexander Siskind, [ 1 ] a San Francisco Bay Area psychiatrist, [ 2 ] under the pen name Scott Alexander. Slate Star Codex was launched in 2013 and was discontinued on June 23, 2020. As of July 22, 2020, the blog is partially back online, with the content restored but commenting disabled. The successor blog, Astral Codex Ten, [ 2 ] was launched on January 21, 2021. Alexander also blogged at the rationalist community blog LessWrong, [ 3 ] and wrote a work of fiction in blog format named Unsong. [ 4 ] A revised version of Unsong was published on May 24, 2024. [ 5 ] [ 6 ] The site was a primary venue of the rationalist community and also attracted wider audiences. [ 3 ] The New Statesman characterizes it as "a nexus for the rationalist community and others who seek to apply reason to debates about situations, ideas, and moral quandaries." [ 7 ] The New Yorker describes Alexander's fiction as "delightfully weird" and his arguments "often counterintuitive and brilliant". [ 3 ] Economist Tyler Cowen calls Scott Alexander "a thinker who is influential among other writers". [ 8 ] The New Yorker states that the volume of content Alexander has written on Slate Star Codex makes the blog difficult to summarize, with an e-book of all posts running over nine thousand pages in PDF form. [ 3 ] Many posts are book reviews (typically of books in the fields of social sciences or medicine) or reviews of a topic in the scientific literature. For example, the March 2020 blog post "Face Masks: Much More Than You Wanted To Know" analyzes available medical literature and comes to the conclusion that, contrary to early guidance by the CDC, masks are likely an effective protection measure against COVID-19 for the general public under certain conditions. [ 3 ] [ 9 ] Some posts are prefaced with a note on their "epistemic status", an assessment of Alexander's confidence in the material to follow. [ 3 ] In 2017, Slate Star Codex ranked fourth on a survey conducted by Rethink Charity of how effective altruists first heard about effective altruism, after "personal contact", "LessWrong", and "other books, articles and blog posts", and just above "80,000 Hours." [ 10 ] The blog discusses moral questions and dilemmas relevant to effective altruism, such as moral offsets (the proposition that good acts can cancel out bad acts), ethical treatment of animals, and trade-offs of pursuing systemic change for charities. [ 11 ] Alexander regularly writes about advances in artificial intelligence and has emphasized the importance of AI safety research. [ 12 ] In the long essay "Meditations On Moloch", he analyzes game-theoretic scenarios of cooperation failure like the prisoner's dilemma and the tragedy of the commons that underlie many of humanity's problems and argues that AI risks should be considered in this context. [ 13 ] In "The Toxoplasma of Rage", Alexander discusses how controversies spread in media and social networks. According to Alexander, memes that generate a lot of disagreement spread further, in part because they present an opportunity to members of different groups to send a strong signal of commitment to their cause. For example, he argues that PETA, with its controversial campaigns, is better known than other animal rights organizations such as Vegan Outreach because of this dynamic.
[ 14 ] Another example of this cited by Alexander is the Rolling Stone article " A Rape on Campus ". [ 15 ] In the short story "Sort By Controversial", Alexander introduces the term "Shiri's scissor" or "scissor statement" to describe a statement that has great destructive power because it generates wildly divergent interpretations that fuel conflict and tear people apart. The term has been used to describe controversial topics widely discussed in social media. [ 16 ] The 2013 post "The Anti-Reactionary FAQ" critiques the work and worldview of the neoreactionary movement , arguing against the work of Curtis Yarvin (whose views include a belief in natural racial hierarchies and a desire to restore feudalism ). Alexander allowed neo-reactionaries to comment on posts and in "culture war" threads on the forum because he wanted to promote an open marketplace of ideas ; Alexander engaged in extended dialogues with these users, including his thirty-thousand-word FAQ. [ 3 ] Alexander's essays on neoreaction have been cited by David Auerbach and Dylan Matthews as explanations of the movement. [ 17 ] [ 18 ] In the 2013 post "Lizardman's Constant is 4%", Alexander coined the term "Lizardman's Constant", referring to the approximate percentage of responses to a poll, survey, or quiz that are not sincere. [ 19 ] The post was responding to a Public Policy Polling statement that "four percent of Americans believe lizardmen are running the Earth", which Alexander attributed to people giving a polling company an answer they did not really believe to be true, out of carelessness, politeness, anger, or amusement. [ 19 ] Alexander suggested that polls should include a question with an absurd answer as one of the options, so anyone choosing that option could be weeded out as a troll . [ 20 ] [ 21 ] Alexander used his first and middle name alone for safety and privacy reasons, although he had previously published Slate Star Codex content academically under his real name. [ 2 ] In June 2020, he deleted all entries on Slate Star Codex , stating that a technology reporter from The New York Times (NYT) intended to publish an article about the blog using his full name. Alexander said that the reporter told him that it was newspaper policy to use real names, [ 22 ] and he referred to it as doxing . [ 3 ] The New York Times responded: "We do not comment on what we may or may not publish in the future. But when we report on newsworthy or influential figures, our goal is always to give readers all the accurate and relevant information we can." [ 23 ] The Verge cited a source saying that at the time when Alexander deleted the blog, "not a word" of a story about SSC had been written. [ 24 ] The Poynter Institute 's David Cohn interpreted this event as part of an ongoing clash between the tech and media industries, reflecting a shift from primarily economic conflicts to fundamental disagreements over values, ethics, and cultural norms. [ 25 ] Prior to the article's publication, several commentators argued that The New York Times should not publish Alexander's name without good reason. Writing in National Review , Tobias Hoonhout said that the newspaper had applied its anonymity policy inconsistently. 
[ 22 ] The New Statesman 's Jasper Jackson wrote that it was "difficult to see how Scott Alexander's full name is so integral to the NYT 's story that it justifies the damage it might do to him", but cautioned that such criticism was based solely on Alexander's own statements and that "before we make that call, it might be a good idea to have more than his word to go on." [ 7 ] As reported by The Daily Beast , the criticism by Alexander and his supporters that the paper was doxing him caused internal debate among The New York Times ' staff. [ 26 ] Supporters of the site organized a petition against release of the author's name. The petition collected over six thousand signatures in its first few days, including psychologist Steven Pinker , social psychologist Jonathan Haidt , economist Scott Sumner , computer scientist and blogger Scott Aaronson , and philosopher Peter Singer . [ 3 ] According to New Statesman columnist Louise Perry , Scott Alexander wrote that he quit his job and took measures that made him comfortable with revealing his real name, [ 27 ] which he published on Astral Codex Ten . [ 1 ] The New York Times published an article about the blog in February 2021, three weeks after Alexander had publicly revealed his name. [ 2 ]
https://en.wikipedia.org/wiki/Slate_Star_Codex
In quantum chemistry, Slater's rules provide numerical values for the effective nuclear charge in a many-electron atom. Each electron is said to experience less than the actual nuclear charge, because of shielding or screening by the other electrons. For each electron in an atom, Slater's rules provide a value for the screening constant, denoted by s, S, or σ, which relates the effective and actual nuclear charges as Z_eff = Z − s. The rules were devised semi-empirically by John C. Slater and published in 1930. [ 1 ] Revised values of screening constants based on computations of atomic structure by the Hartree–Fock method were obtained by Enrico Clementi et al. in the 1960s. [ 2 ] [ 3 ] Firstly, [ 1 ] [ 4 ] the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together. Each group is given a different shielding constant which depends upon the number and types of electrons in those groups preceding it. The shielding constant for each group is formed as the sum of the following contributions: an amount of 0.35 from each other electron within the same group, except for the [1s] group, where the other electron contributes only 0.30; if the group is of the [ns, np] type, an amount of 0.85 from each electron with principal quantum number n−1, and an amount of 1.00 from each electron with principal quantum number n−2 or less; if the group is of the [nd] or [nf] type, an amount of 1.00 from each electron lying closer to the nucleus, i.e. each electron in a group with a smaller principal quantum number, or with the same principal quantum number and a smaller azimuthal quantum number. In tabular form, the rules are summarized as: [1s] — 0.30 from the other electron in the group; [ns, np] — 0.35 from each other electron in the group, 0.85 from each electron in the (n−1) shell, 1.00 from each electron in shells n−2 and lower; [nd], [nf] — 0.35 from each other electron in the group, 1.00 from every electron in lower-lying groups. An example provided in Slater's original paper is for the iron atom, which has nuclear charge 26 and electronic configuration 1s 2 2s 2 2p 6 3s 2 3p 6 3d 6 4s 2. The screening constant, and subsequently the shielded (or effective) nuclear charge, for each electron is deduced as: [ 1 ] for 4s, s = 0.35×1 + 0.85×14 + 1.00×10 = 22.25, giving Z_eff = 3.75; for 3d, s = 0.35×5 + 1.00×18 = 19.75, giving Z_eff = 6.25; for 3s and 3p, s = 0.35×7 + 0.85×8 + 1.00×2 = 11.25, giving Z_eff = 14.75; for 2s and 2p, s = 0.35×7 + 0.85×2 = 4.15, giving Z_eff = 21.85; and for 1s, s = 0.30, giving Z_eff = 25.70. Note that the effective nuclear charge is calculated by subtracting the screening constant from the atomic number, 26. The rules were developed by John C. Slater in an attempt to construct simple analytic expressions for the atomic orbital of any electron in an atom. Specifically, for each electron in an atom, Slater wished to determine shielding constants (s) and "effective" quantum numbers (n*) such that ψ_{n*s}(r) = r^(n*−1) exp(−(Z − s) r / n*) provides a reasonable approximation to a single-electron wave function. Slater defined n* by the rule that for n = 1, 2, 3, 4, 5, 6 respectively; n* = 1, 2, 3, 3.7, 4.0 and 4.2. This was an arbitrary adjustment to fit calculated atomic energies to experimental data. Such a form was inspired by the known wave function spectrum of hydrogen-like atoms, which have the radial component R_{nl}(r) = f_{nl}(r) exp(−Zr/n), where n is the (true) principal quantum number, l the azimuthal quantum number, and f_{nl}(r) is an oscillatory polynomial with n − l − 1 nodes. [ 5 ] Slater argued on the basis of previous calculations by Clarence Zener [ 6 ] that the presence of radial nodes was not required to obtain a reasonable approximation. He also noted that in the asymptotic limit (far away from the nucleus), his approximate form coincides with the exact hydrogen-like wave function in the presence of a nuclear charge of Z − s and in the state with a principal quantum number n equal to his effective quantum number n*. Slater then argued, again based on the work of Zener, that the total energy of an N-electron atom with a wavefunction constructed from orbitals of his form should be well approximated (in Rydberg units) as E = −∑_i ((Z − s_i)/n*_i)², with the sum running over all electrons i. Using this expression for the total energy of an atom (or ion) as a function of the shielding constants and effective quantum numbers, Slater was able to compose rules such that calculated spectral energies agree reasonably well with experimental values for a wide range of atoms.
With the values from the iron example above, the total energy of a neutral iron atom by this method is −2497.2 Ry, while the energy of an excited Fe + cation lacking a single 1s electron is −1964.6 Ry. The difference, 532.6 Ry, can be compared to the experimental (circa 1930) K absorption limit of 524.0 Ry. [ 1 ]
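A minimal Python sketch of Slater's rules as summarized above (the grouping and constants follow the standard textbook form of the rules; the function and variable names are my own). Run on the iron configuration, it reproduces the screening constants quoted in the example.

```python
def slater_zeff(groups, Z):
    """Slater screening constants and effective nuclear charges.
    groups: list of (n, kind, count) from innermost outwards,
    with kind 'sp' (s and p kept together), 'd', or 'f'."""
    result = {}
    for i, (n, kind, count) in enumerate(groups):
        # Contribution from the other electrons in the same group.
        s = (0.30 if (n == 1 and kind == 'sp') else 0.35) * (count - 1)
        # Contributions from inner groups.
        for n_in, _kind_in, count_in in groups[:i]:
            if kind == 'sp':
                s += (0.85 if n_in == n - 1 else 1.00) * count_in
            else:                  # d or f group: all inner electrons screen fully
                s += 1.00 * count_in
        result[(n, kind)] = (s, Z - s)
    return result

# Iron: 1s2 2s2 2p6 3s2 3p6 3d6 4s2, Z = 26
iron = [(1, 'sp', 2), (2, 'sp', 8), (3, 'sp', 8), (3, 'd', 6), (4, 'sp', 2)]
for (n, kind), (s, zeff) in slater_zeff(iron, 26).items():
    print(f"{n}{kind}: s = {s:5.2f}  Zeff = {zeff:5.2f}")
# Expected: 1sp 0.30/25.70, 2sp 4.15/21.85, 3sp 11.25/14.75, 3d 19.75/6.25, 4sp 22.25/3.75
```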
https://en.wikipedia.org/wiki/Slater's_rules
Slater-type orbitals (STOs) or Slater-type functions (STFs) are functions used as atomic orbitals in the linear combination of atomic orbitals molecular orbital method. They are named after the physicist John C. Slater, who introduced them in 1930. [ 1 ] They possess exponential decay at long range and Kato's cusp condition at short range (when combined as hydrogen-like atom functions, i.e. the analytical solutions of the stationary Schrödinger equation for one electron atoms). Unlike the hydrogen-like ("hydrogenic") Schrödinger orbitals, STOs have no radial nodes (neither do Gaussian-type orbitals). STOs have the following radial part: R(r) = N r^{n−1} e^{−ζr}, where N is a normalization constant, n is a natural number that plays the role of the principal quantum number, r is the distance of the electron from the atomic nucleus, and ζ is a constant related to the effective nuclear charge. The normalization constant is computed from the integral ∫_0^∞ r^{2n} e^{−2ζr} dr = (2n)!/(2ζ)^{2n+1}. Hence N = (2ζ)^n √(2ζ/(2n)!). It is common to use the spherical harmonics Y_ℓ^m(r) depending on the polar coordinates of the position vector r as the angular part of the Slater orbital. The first radial derivative of the radial part of a Slater-type orbital is ∂R(r)/∂r = [(n−1)/r − ζ] R(r). The radial Laplace operator is split in two differential operators, ∇²_r = (1/r²) ∂/∂r (r² ∂/∂r). The first differential operator of the Laplace operator yields r² ∂R(r)/∂r = [(n−1)r − ζr²] R(r). The total Laplace operator yields, after applying the second differential operator, the result ∇²_r R(r) = [n(n−1)/r² − 2nζ/r + ζ²] R(r). Angular dependent derivatives of the spherical harmonics don't depend on the radial function and have to be evaluated separately. The fundamental mathematical properties are those associated with the kinetic energy, nuclear attraction and Coulomb repulsion integrals for placement of the orbital at the center of a single nucleus. Dropping the normalization factor N, the representation of the orbitals below is χ_{nℓm}(r) = r^{n−1} e^{−ζr} Y_ℓ^m(r). The Fourier transform is [ 2 ] where the ω are defined by the transform's expansion coefficients. The overlap integral is ∫ χ*_{nℓm}(r) χ_{n'ℓ'm'}(r) d³r = δ_{ℓℓ'} δ_{mm'} (n + n')!/(ζ + ζ')^{n+n'+1}, of which the normalization integral is a special case. The superscript star denotes complex-conjugation. The kinetic energy integral is {\displaystyle {\begin{aligned}&\int \chi _{n\ell m}^{*}(r)~\left(-{\tfrac {1}{2}}\nabla ^{2}\right)\,\chi _{n'\ell 'm'}(r)~\mathrm {d} ^{3}r\\&={\frac {1}{2}}\delta _{\ell \ell '}\,\delta _{mm'}\,\int _{0}^{\infty }e^{-(\zeta +\zeta ')\,r}\left[[\ell '(\ell '+1)-n'(n'-1)]\,r^{n+n'-2}+2\zeta 'n'\,r^{n+n'-1}-\zeta '^{2}\,r^{n+n'}\right]~\mathrm {d} r~,\end{aligned}}} a sum over three overlap integrals already computed above. The Coulomb repulsion integral can be evaluated using the Fourier representation (see above), which yields
{\displaystyle {\begin{aligned}\int \chi _{n\ell m}^{*}(\mathbf {r} ){\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}~\chi _{n'\ell 'm'}(\mathbf {r} ')~\mathrm {d} ^{3}r&=4\pi \int {\frac {1}{(2\pi )^{3}}}~\chi _{n\ell m}^{*}(\mathbf {k} )~{\frac {1}{k^{2}}}~\chi _{n'\ell 'm'}(\mathbf {k} )~\mathrm {d} ^{3}k\\&=8\,\delta _{\ell \ell '}\,\delta _{mm'}~(n-\ell )!~(n'-\ell )!~{\frac {(2\zeta )^{n}}{\zeta ^{\ell }}}{\frac {(2\zeta ')^{n'}}{\zeta '^{\ell }}}\int _{0}^{\infty }k^{2\ell }\left[\sum _{s=0}^{\lfloor (n-\ell )/2\rfloor }{\frac {\omega _{s}^{n\ell }}{(k^{2}+\zeta ^{2})^{n+1-s}}}\sum _{s'=0}^{\lfloor (n'-\ell )/2\rfloor }{\frac {\omega _{s'}^{n'\ell '}}{(k^{2}+\zeta '^{2})^{n'+1-s'}}}\right]\mathrm {d} k\end{aligned}}} These integrals over k are either individually calculated with the law of residues or recursively, as proposed by Cruz et al. (1978). [ 3 ] Some quantum chemistry software uses sets of Slater-type functions (STF) analogous to Slater-type orbitals, but with variable exponents chosen to minimize the total molecular energy (rather than by Slater's rules as above). The fact that products of two STOs on distinct atoms are more difficult to express than those of Gaussian functions (which give a displaced Gaussian) has led many to expand them in terms of Gaussians. [ 4 ] Analytical ab initio software for polyatomic molecules has been developed, e.g., STOP: a Slater Type Orbital Package, in 1996. [ 5 ] SMILES uses analytical expressions when available and Gaussian expansions otherwise. It was first released in 2000. Various grid integration schemes have been developed, sometimes after analytical work for quadrature (Scrocco), most famously in the ADF suite of DFT codes. After the work of John Pople, Warren J. Hehre and Robert F. Stewart, a least squares representation of the Slater atomic orbitals as a sum of Gaussian-type orbitals is used. In their 1969 paper, the fundamentals of this principle are discussed and then further improved and used in the GAUSSIAN DFT code. [ 6 ]
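To connect the radial form and normalization constant given above with something concrete, here is a small numerical sketch in Python (purely illustrative): it evaluates the normalized STO radial function and verifies by quadrature that ∫ R(r)² r² dr = 1 for N = (2ζ)ⁿ √(2ζ/(2n)!).

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def sto_radial(r, n, zeta):
    """Normalized Slater-type radial function R(r) = N r^(n-1) exp(-zeta r)."""
    N = (2 * zeta) ** n * np.sqrt(2 * zeta / factorial(2 * n))
    return N * r ** (n - 1) * np.exp(-zeta * r)

# Check normalization: the integral of R(r)^2 r^2 dr over [0, inf) should be 1.
n, zeta = 2, 1.625                      # a 2s-like STO with an illustrative exponent
norm, _ = quad(lambda r: sto_radial(r, n, zeta) ** 2 * r ** 2, 0, np.inf)
print(f"integral of R^2 r^2 dr = {norm:.6f}")   # ~1.000000
```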
https://en.wikipedia.org/wiki/Slater-type_orbital
In quantum mechanics, a Slater determinant is an expression that describes the wave function of a multi-fermionic system. It satisfies anti-symmetry requirements, and consequently the Pauli principle, by changing sign upon exchange of two fermions. [ 1 ] Only a small subset of all possible many-body fermionic wave functions can be written as a single Slater determinant, but those form an important and useful subset because of their simplicity. The Slater determinant arises from the consideration of a wave function for a collection of electrons, each with a wave function known as the spin-orbital χ(x), where x denotes the position and spin of a single electron. A Slater determinant containing two electrons with the same spin orbital would correspond to a wave function that is zero everywhere. The Slater determinant is named for John C. Slater, who introduced the determinant in 1929 as a means of ensuring the antisymmetry of a many-electron wave function, [ 2 ] although the wave function in the determinant form first appeared independently in Heisenberg's [ 3 ] and Dirac's [ 4 ] [ 5 ] articles three years earlier. The simplest way to approximate the wave function of a many-particle system is to take the product of properly chosen orthogonal wave functions of the individual particles. For the two-particle case with coordinates x_1 and x_2, we have Ψ(x_1, x_2) = χ_1(x_1) χ_2(x_2). This expression is used in the Hartree method as an ansatz for the many-particle wave function and is known as a Hartree product. However, it is not satisfactory for fermions because the wave function above is not antisymmetric under exchange of any two of the fermions, as it must be according to the Pauli exclusion principle. An antisymmetric wave function can be mathematically described as follows: Ψ(x_1, x_2) = −Ψ(x_2, x_1). This does not hold for the Hartree product, which therefore does not satisfy the Pauli principle. This problem can be overcome by taking a linear combination of both Hartree products: Ψ(x_1, x_2) = (1/√2) [χ_1(x_1) χ_2(x_2) − χ_1(x_2) χ_2(x_1)], where the coefficient 1/√2 is the normalization factor. This wave function is now antisymmetric and no longer distinguishes between fermions (that is, one cannot indicate an ordinal number to a specific particle, and the indices given are interchangeable). Moreover, it also goes to zero if any two spin orbitals of two fermions are the same. This is equivalent to satisfying the Pauli exclusion principle. The expression can be generalised to any number of fermions by writing it as a determinant. For an N-electron system, the Slater determinant is defined as [ 1 ] [ 6 ] Ψ(x_1, x_2, …, x_N) = (1/√(N!)) det[χ_j(x_i)], the N × N determinant whose (i, j) entry is χ_j(x_i), where the last two expressions use a shorthand for Slater determinants: Ψ = |χ_1 χ_2 ⋯ χ_N⟩ and Ψ = |1 2 ⋯ N⟩. The normalization constant is implied by noting the number N, and only the one-particle wavefunctions (first shorthand) or the indices for the fermion coordinates (second shorthand) are written down. All skipped labels are implied to behave in ascending sequence. The linear combination of Hartree products for the two-particle case is identical with the Slater determinant for N = 2. The use of Slater determinants ensures an antisymmetrized function at the outset. In the same way, the use of Slater determinants ensures conformity to the Pauli principle. Indeed, the Slater determinant vanishes if the set {χ_i} is linearly dependent. In particular, this is the case when two (or more) spin orbitals are the same.
In chemistry one expresses this fact by stating that no two electrons with the same spin can occupy the same spatial orbital. Many properties of the Slater determinant come to life with an example in a non-relativistic many-electron problem. [ 7 ] Starting from a molecular Hamiltonian: {\displaystyle {\hat {H}}_{\text{tot}}=\sum _{i}{\frac {\mathbf {p} _{i}^{2}}{2m}}+\sum _{I}{\frac {\mathbf {P} _{I}^{2}}{2M_{I}}}+\sum _{i}V_{\text{nucl}}(\mathbf {r_{i}} )+{\frac {1}{2}}\sum _{i\neq j}{\frac {e^{2}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}+{\frac {1}{2}}\sum _{I\neq J}{\frac {Z_{I}Z_{J}e^{2}}{|\mathbf {R} _{I}-\mathbf {R} _{J}|}}} where r_i and p_i are the positions and momenta of the electrons, R_I and P_I those of the nuclei, M_I the nuclear masses, and Z_I the nuclear charges. For simplicity we freeze the nuclei at equilibrium in one position and we remain with a simplified Hamiltonian Ĥ = ∑_i p_i²/(2m) + ∑_i V_nucl(r_i) + (1/2) ∑_{i≠j} e²/|r_i − r_j|, where we will distinguish in the Hamiltonian between the first set of terms, Ĝ_1 (the "1" particle terms), and the last term, Ĝ_2 (the "2" particle term), which contains the exchange term for a Slater determinant. The two parts will behave differently when they have to interact with a Slater determinant wave function. We start by computing the expectation values of the one-particle terms. In the above expression, we can just select the identical permutation in the determinant in the left part, since all the other N! − 1 permutations would give the same result as the selected one. We can thus cancel N! at the denominator. Because of the orthonormality of spin-orbitals it is also evident that only the identical permutation survives in the determinant on the right part of the above matrix element. This result shows that the anti-symmetrization of the product does not have any effect for the one-particle terms, and it behaves as it would do in the case of the simple Hartree product. And finally we remain with the trace over the one-particle Hamiltonians, which tells us that, to the extent of the one-particle terms, the wave functions of the electrons are independent of each other and the expectation value of the total system is given by the sum of the expectation values of the single particles. For the two-particle terms instead, if we focus on the action of one term of G_2, it will produce only two terms, and finally {\displaystyle \langle \Psi _{0}|G_{2}|\Psi _{0}\rangle ={\frac {1}{2}}\sum _{i\neq j}\left[\langle \psi _{i}\psi _{j}|{\frac {e^{2}}{r_{ij}}}|\psi _{i}\psi _{j}\rangle -\langle \psi _{i}\psi _{j}|{\frac {e^{2}}{r_{ij}}}|\psi _{j}\psi _{i}\rangle \right]} the second of which is a mixing term. The first contribution is called the "coulomb" term or "coulomb" integral and the second is the "exchange" term or exchange integral. Sometimes a different range of indices is used in the summation, ∑_{ij}, since the Coulomb and exchange contributions exactly cancel each other for i = j. It is important to notice explicitly that the exchange term, which is always positive for local spin-orbitals, [ 8 ] is absent in the simple Hartree product.
Hence the electron-electron repulsive energy ⟨Ψ_0|G_2|Ψ_0⟩ on the antisymmetrized product of spin-orbitals is always lower than the electron-electron repulsive energy on the simple Hartree product of the same spin-orbitals. Since exchange bielectronic integrals are different from zero only for spin-orbitals with parallel spins, we link the decrease in energy with the physical fact that electrons with parallel spin are kept apart in real space in Slater determinant states. Most fermionic wavefunctions cannot be represented as a Slater determinant. The best Slater approximation to a given fermionic wave function can be defined to be the one that maximizes the overlap between the Slater determinant and the target wave function. [ 9 ] The maximal overlap is a geometric measure of entanglement between the fermions. A single Slater determinant is used as an approximation to the electronic wavefunction in Hartree–Fock theory. In more accurate theories (such as configuration interaction and MCSCF), a linear combination of Slater determinants is needed. The word "detor" was proposed by S. F. Boys to refer to a Slater determinant of orthonormal orbitals, [ 10 ] but this term is rarely used. Unlike fermions, which are subject to the Pauli exclusion principle, two or more bosons can occupy the same single-particle quantum state. Wavefunctions describing systems of identical bosons are symmetric under the exchange of particles and can be expanded in terms of permanents.
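A small numerical sketch (illustrative only; the "orbitals" and coordinates are arbitrary toy functions, not physical spin-orbitals) of the determinant definition given earlier: it evaluates a two-electron Slater determinant, checks the sign change under exchange of the two coordinates, and shows that the determinant vanishes when both electrons occupy the same orbital.

```python
import numpy as np
from math import factorial

def slater_det(orbitals, coords):
    """Value of the Slater determinant wave function.
    orbitals: list of N single-particle functions chi_j(x);
    coords:   list of N coordinates x_i."""
    N = len(orbitals)
    A = np.array([[chi(x) for chi in orbitals] for x in coords])  # A[i, j] = chi_j(x_i)
    return np.linalg.det(A) / np.sqrt(factorial(N))

# Two toy "orbitals" on a one-dimensional coordinate.
chi1 = lambda x: np.exp(-x**2)
chi2 = lambda x: x * np.exp(-x**2)

x1, x2 = 0.3, 1.1
print(slater_det([chi1, chi2], [x1, x2]))   # some value psi
print(slater_det([chi1, chi2], [x2, x1]))   # -psi: antisymmetry under exchange
print(slater_det([chi1, chi1], [x1, x2]))   # ~0: both electrons in the same orbital
```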
https://en.wikipedia.org/wiki/Slater_determinant
In mathematics and mathematical physics, Slater integrals are certain integrals of products of three spherical harmonics. They occur naturally when applying an orthonormal basis of functions on the unit sphere that transform in a particular way under rotations in three dimensions. Such integrals are particularly useful when computing properties of atoms which have natural spherical symmetry. These integrals are defined below along with some of their mathematical properties. In connection with the quantum theory of atomic structure, John C. Slater defined the integral of three spherical harmonics as a coefficient c. [ 1 ] These coefficients are essentially the product of two Wigner 3jm symbols. These integrals are useful and necessary when doing atomic calculations of the Hartree–Fock variety where matrix elements of the Coulomb operator and exchange operator are needed. For an explicit formula, one can use Gaunt's formula for associated Legendre polynomials. Note that the product of two spherical harmonics can be written in terms of these coefficients. By expanding such a product over a spherical harmonic basis with the same order, one may then multiply by Y* and integrate, using the conjugate property and being careful with phases and normalisations. These coefficients obey a number of identities. This quantum chemistry-related article is a stub. You can help Wikipedia by expanding it.
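Integrals of three spherical harmonics of this kind can be evaluated symbolically with SymPy, which, as noted above, expresses them through products of Wigner 3j symbols. A minimal sketch, assuming SymPy's sympy.physics.wigner module is available (the quantum numbers below are arbitrary examples):

```python
from sympy import sqrt, pi
from sympy.physics.wigner import gaunt, wigner_3j

# Gaunt coefficient: the integral over the unit sphere of a product of three
# spherical harmonics Y_{l1 m1} Y_{l2 m2} Y_{l3 m3}.
print(gaunt(1, 1, 2, 0, 0, 0))        # exact symbolic value
print(gaunt(1, 1, 2, 0, 0, 0).n())    # numerical value

# The same quantity written as a product of two Wigner 3j symbols.
l1, l2, l3, m1, m2, m3 = 1, 1, 2, 0, 0, 0
via_3j = (sqrt((2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1) / (4*pi))
          * wigner_3j(l1, l2, l3, 0, 0, 0)
          * wigner_3j(l1, l2, l3, m1, m2, m3))
print(via_3j.simplify())              # agrees with gaunt(...)
```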
https://en.wikipedia.org/wiki/Slater_integrals
Within computational chemistry, the Slater–Condon rules express integrals of one- and two-body operators over wavefunctions constructed as Slater determinants of orthonormal orbitals in terms of the individual orbitals. In doing so, the original integrals involving N-electron wavefunctions are reduced to sums over integrals involving at most two molecular orbitals; in other words, the original 3N-dimensional integral is expressed in terms of many three- and six-dimensional integrals. The rules are used in deriving the working equations for all methods of approximately solving the Schrödinger equation that employ wavefunctions constructed from Slater determinants. These include Hartree–Fock theory, where the wavefunction is a single determinant, and all those methods which use Hartree–Fock theory as a reference, such as Møller–Plesset perturbation theory, and coupled cluster and configuration interaction theories. In 1929 John C. Slater derived expressions for diagonal matrix elements of an approximate Hamiltonian while investigating atomic spectra within a perturbative approach. [ 1 ] The following year Edward Condon extended the rules to non-diagonal matrix elements. [ 2 ] In 1955 Per-Olov Löwdin further generalized these results for wavefunctions constructed from non-orthonormal orbitals, leading to what are known as the Löwdin rules. [ 3 ] In terms of an antisymmetrization operator 𝒜 acting upon a product of N orthonormal spin-orbitals (with r and σ denoting spatial and spin variables), a determinantal wavefunction |Ψ⟩ can be constructed. A wavefunction differing from this by only a single orbital (the m'th orbital replaced by an orbital φ_p) will be denoted here as |Ψ_m^p⟩, and a wavefunction differing by two orbitals will be denoted as |Ψ_mn^pq⟩. For any particular one- or two-body operator, Ô, the Slater–Condon rules show how to simplify the following types of integrals: [ 4 ] matrix elements between identical determinants, between determinants differing by one spin-orbital, and between determinants differing by two spin-orbitals (a summary in standard form is given after this paragraph). Matrix elements for two wavefunctions differing by more than two orbitals vanish unless higher order interactions are introduced. One-body operators depend only upon the position or momentum of a single electron at any given instant. Examples are the kinetic energy, dipole moment, and total angular momentum operators. A one-body operator in an N-particle system is decomposed as F̂ = ∑_i f̂(i). Two-body operators couple two particles at any given instant. Examples are the electron-electron repulsion, magnetic dipolar coupling, and total angular momentum-squared operators. A two-body operator in an N-particle system is decomposed as Ĝ = (1/2) ∑_{i≠j} ĝ(i, j). The Slater–Condon rules for these operators are summarized in the block below. [ 4 ] [ 5 ] Any matrix elements of a two-body operator with wavefunctions that differ in three or more spin orbitals will vanish.
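The displayed forms of the rules appear to have been lost from the text above; the following block restates them in their usual textbook form (a standard-form reconstruction, not a verbatim restoration of the article's own equations). Notation: |Ψ⟩ is the reference determinant, |Ψ_m^p⟩ replaces spin-orbital φ_m by φ_p, |Ψ_mn^pq⟩ replaces two spin-orbitals, and ⟨ij|ĝ|kl⟩ is a two-electron integral in physicists' notation.

```latex
% One-body operator \hat{F} = \sum_i \hat{f}(i)
\langle \Psi | \hat{F} | \Psi \rangle = \sum_{i=1}^{N} \langle \varphi_i | \hat{f} | \varphi_i \rangle ,
\qquad
\langle \Psi | \hat{F} | \Psi_m^p \rangle = \langle \varphi_m | \hat{f} | \varphi_p \rangle ,
\qquad
\langle \Psi | \hat{F} | \Psi_{mn}^{pq} \rangle = 0 .

% Two-body operator \hat{G} = \tfrac{1}{2}\sum_{i \neq j} \hat{g}(i,j)
\langle \Psi | \hat{G} | \Psi \rangle
  = \tfrac{1}{2} \sum_{i,j=1}^{N}
    \left[ \langle \varphi_i \varphi_j | \hat{g} | \varphi_i \varphi_j \rangle
         - \langle \varphi_i \varphi_j | \hat{g} | \varphi_j \varphi_i \rangle \right] ,

\langle \Psi | \hat{G} | \Psi_m^p \rangle
  = \sum_{i=1}^{N}
    \left[ \langle \varphi_m \varphi_i | \hat{g} | \varphi_p \varphi_i \rangle
         - \langle \varphi_m \varphi_i | \hat{g} | \varphi_i \varphi_p \rangle \right] ,

\langle \Psi | \hat{G} | \Psi_{mn}^{pq} \rangle
  = \langle \varphi_m \varphi_n | \hat{g} | \varphi_p \varphi_q \rangle
  - \langle \varphi_m \varphi_n | \hat{g} | \varphi_q \varphi_p \rangle .
```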
https://en.wikipedia.org/wiki/Slater–Condon_rules
In condensed matter physics , the Slater–Pauling rule states that adding an element to a metal alloy will reduce the alloy's saturation magnetization by an amount proportional to the number of valence electrons outside of the added element's d shell . [ 1 ] Conversely, elements with a partially filled d shell will increase the magnetic moment by an amount proportional to the number of missing electrons. Investigated by the physicists John C. Slater [ 2 ] and Linus Pauling [ 3 ] in the 1930s, the rule is a useful approximation for the magnetic properties of many transition metals . The use of the rule depends on carefully defining what it means for an electron to lie outside of the d shell. The electrons outside a d shell are the electrons which have higher energy than the electrons within the d shell. The Madelung rule (incorrectly) suggests that the s shell is filled before the d shell. For example, it predicts that zinc has a configuration of [Ar] 4s 2 3d 10 . However, zinc's 4s electrons actually have more energy than the 3d electrons, putting them outside the d shell. Ordered in terms of energy, the electron configuration of zinc is [Ar] 3d 10 4s 2 . (see: the n+ℓ energy ordering rule ) The basic rule given above makes several approximations. One simplification is rounding to the nearest integer. Because the number of electrons in a band is described using an average value, the s and d shells can be filled to non-integer numbers of electrons, allowing the Slater–Pauling rule to give more accurate predictions. While the Slater–Pauling rule has many exceptions, it is often useful as an approximation to more accurate, but more complicated, physical models. Building on further theoretical developments by physicists such as Jacques Friedel , [ 4 ] a more widely applicable version of the rule, known as the generalized Slater–Pauling rule, was developed. [ 5 ] [ 6 ] This condensed matter physics -related article is a stub . You can help Wikipedia by expanding it .
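Read literally, the rule above suggests a simple linear estimate: each added solute atom lowers the average moment by a number of Bohr magnetons proportional to its count of valence electrons outside the d shell. The toy sketch below is not from the article; it assumes a proportionality constant of one Bohr magneton per electron, an illustrative host moment, and illustrative solute valence counts, so it should be read as a naive numerical reading of the rule rather than a fit to experimental data.

# Toy Slater-Pauling estimate: each solute atom lowers the average moment by
# (valence electrons outside its d shell) Bohr magnetons. All numbers below
# are illustrative assumptions, not tabulated experimental values.
HOST_MOMENT = 2.2            # assumed moment of the pure host, in Bohr magnetons per atom
SP_ELECTRONS = {"Al": 3,     # assumed counts of valence electrons outside the d shell
                "Si": 4,
                "Zn": 2}

def estimated_moment(solute, x):
    """Estimated average moment (Bohr magnetons per atom) at solute fraction x."""
    return HOST_MOMENT - x * SP_ELECTRONS[solute]

for solute in ("Al", "Si", "Zn"):
    print(solute, round(estimated_moment(solute, 0.10), 2))

In this toy picture a 10% admixture of a tetravalent solute costs 0.4 Bohr magnetons per atom, twice the cost of a divalent one, which is the qualitative content of the rule.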
https://en.wikipedia.org/wiki/Slater–Pauling_rule
The slave boson method is a technique for dealing with models of strongly correlated systems , providing a method to second-quantize valence fluctuations within a restrictive manifold of states. In the 1960s the physicist John Hubbard introduced an operator, now named the "Hubbard operator", [ 1 ] to describe the creation of an electron within a restrictive manifold of valence configurations. Consider, for example, a rare earth or actinide ion in which strong Coulomb interactions restrict the charge fluctuations to two valence states, such as the Ce 4+ (4f 0 ) and Ce 3+ (4f 1 ) configurations of a mixed-valence cerium compound. The corresponding quantum states of these two configurations are the singlet | f 0 ⟩ state and the magnetic | f 1 : σ ⟩ state, where σ = ↑ , ↓ is the spin. The fermionic Hubbard operators that link these states are then The algebra of operators is closed by introducing the two bosonic operators Together, these operators satisfy the graded Lie algebra where [ A , B ] ± = A B ± B A and the sign is chosen to be negative, unless both A and B are fermions, in which case it is positive. The Hubbard operators are the generators of the super group SU(2|1). This non-canonical algebra means that these operators do not satisfy a Wick's theorem , which prevents a conventional diagrammatic or field theoretic treatment. In 1983 Piers Coleman introduced the slave boson formulation of the Hubbard operators, [ 2 ] which enabled valence fluctuations to be treated within a field-theoretic approach. [ 3 ] In this approach, the spinless configuration of the ion is represented by a spinless "slave boson" | f 0 ⟩ = b † | 0 ⟩ , whereas the magnetic configuration | f 1 : σ ⟩ = f σ † | 0 ⟩ is represented by an Abrikosov slave fermion. From these considerations, it is seen that the Hubbard operators can be written as and This factorization of the Hubbard operators faithfully preserves the graded Lie algebra. Moreover, the Hubbard operators so written commute with the conserved quantity In Hubbard's original approach, Q = 1 , but by generalizing this quantity to larger values, higher irreducible representations of SU(2|1) are generated. The slave boson representation can be extended from two-component to N -component fermions, where the spin index α ∈ [ 1 , N ] runs over N values. By allowing N to become large, while maintaining the ratio Q / N , it is possible to develop a controlled large- N expansion. The slave boson approach has since been widely applied to strongly correlated electron systems, and has proven useful in developing the resonating valence bond theory (RVB) of high temperature superconductivity [ 4 ] [ 5 ] and the understanding of heavy fermion compounds. [ 6 ] This condensed matter physics -related article is a stub . You can help Wikipedia by expanding it .
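The explicit operator expressions referred to in the text above are not reproduced there. For orientation, in the standard slave-boson construction as it is conventionally written in the literature (a sketch of the usual textbook form, not a verbatim restoration of the article's own equations), the Hubbard operators are factorized as

\[
  X_{\sigma 0} = f_{\sigma}^{\dagger}\, b , \qquad
  X_{0\sigma} = b^{\dagger} f_{\sigma} , \qquad
  X_{\sigma\sigma'} = f_{\sigma}^{\dagger} f_{\sigma'} , \qquad
  X_{00} = b^{\dagger} b ,
\]

with the conserved charge

\[
  Q = b^{\dagger} b + \sum_{\sigma} f_{\sigma}^{\dagger} f_{\sigma} ,
\]

where b annihilates the slave boson, f_σ annihilates the Abrikosov slave fermion, and the physical subspace corresponds to a fixed value of Q (Q = 1 in Hubbard's original setting).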
https://en.wikipedia.org/wiki/Slave_boson
Slayton A. Evans Jr. (May 17, 1943 – March 24, 2001) was an American chemist and professor at the University of North Carolina, Chapel Hill . He was a leading researcher into organophosphorus chemistry. His research led to a greater understanding of the functions of organophosphate compounds and innovations in methods to produce chemical compounds for pharmaceutical drugs. Slayton Alvin Evans Jr. was born on May 17, 1943, in Chicago, Illinois, to Corine M. Thompson Evans and Slayton A. Evans, Sr. [ 1 ] [ 2 ] Months later, his father was called to serve in World War II . [ 3 ] When Slayton was three years old, the family moved to Meridian, Mississippi , [ 2 ] where they lived in a segregated public housing project and his father worked at a J. C. Penney store. Slayton's interest in chemistry began early, when he was given a chemistry set. In addition, a small microscope allowed him to study various plant specimens and insects. Evans and his two younger siblings enrolled at a segregated primary school run by the Roman Catholic Church , and later he attended St. Joseph's High School. In 1957, when Evans was in the ninth grade, news of the artificial satellite Sputnik inspired him to learn about rocketry and attempt to build his own. While he was given permission by the nuns at his school to buy chemicals to make rocket fuel , he had to make his own powdered charcoal. He built six rockets, two of them achieving liftoff. [ 3 ] Evans helped pay for his school tuition by mowing lawns and during eighth grade he was a junior assistant janitor at his elementary school. Later he worked in the high school cafeteria. In his third year of high school, he considered going into the Air Force , but was too tall for flight training. However, he took several competitive examinations and was the recipient of an academic scholarship to Tougaloo College where he also received an athletic scholarship for basketball. He enrolled at Tougaloo in 1961. [ 3 ] By the end of his first year, Evans had top marks in chemistry in his class. He got a summer job working for the pharmaceutical company Abbott Laboratories in Chicago where he was tasked first with creating chemical compounds from raw materials, and later with identifying the stages of chemical reactions. [ 3 ] Evans graduated from Tougaloo with a Bachelor of Science in chemistry in 1965. [ 2 ] Evans was encouraged to attend graduate school, though he didn't know how to pay for it. He briefly attended the Illinois Institute of Technology before transferring to Case Western Reserve University in Cleveland, Ohio, where he was offered a research assistant position in the chemistry department. In his first year, he received a draft notice to go to the Vietnam War . University officials contacted the draft board and explained that Evans' research was crucial to the war effort. He was researching a medicine to treat schistosomiasis , a disease caused by parasitic flatworms that are common in Southeast Asia. He completed his coursework in 1969 and received his Ph.D. in chemistry in early 1970. [ 3 ] Evans took a postdoctoral fellowship at the University of Texas at Arlington for the 1970–1971 academic year, followed by second fellowship at the University of Notre Dame in Indiana, [ 1 ] where he worked with the organic chemist Ernest L. Eliel studying stereochemistry . 
[ 3 ] Upon the completion of the fellowship, he was invited to be a research instructor at Dartmouth College in 1972, [ 1 ] [ 2 ] though the college did not have the laboratory equipment he required to continue his research. [ 3 ] Evans then joined the faculty of the University of North Carolina at Chapel Hill as an assistant professor of chemistry in 1974. [ 1 ] [ 2 ] He was the first African-American chemistry professor at the university. [ 4 ] After 10 years at Chapel Hill, Evans became a full professor, and in 1992 was honored with a Kenan Professor chair. [ 3 ] Evans was a leading researcher in the field of organophosphorus chemistry, [ 2 ] authoring more than 85 scientific articles on organosulfur and organophosphorus chemistry. [ 1 ] His research led to a deeper understanding of the functions of organophosphate compounds and innovations in methods to produce chemical compounds for pharmaceutical drugs. Evans was inspired by William Standish Knowles , who in 1968 developed a method of asymmetric hydrogenation , which Evans used to develop alternative asymmetric synthesis methods as a way to produce single stereoisomers . Evans started experimenting with organophosphorus chemistry in 1970, developing a process using phosphorus atoms of organophosphate compounds as agents to produce specific stereoisomers. He also devised a method of asymmetric synthesis to synthesize alpha-amino phosphonic acids by adding phosphorus to sulfimides . At the University of North Carolina, Evans assembled a research team of undergraduates, graduate students, and postdoctoral fellows from around the world. In the 1980s, a Ford Foundation Fellowship allowed him to create ties between his research team and a research group at the Paul Sabatier University in France, where he spent a full sabbatical year. Later, with the help of a Fulbright Fellowship , he built ties with groups in Mexico, Poland, Germany, Greece, and Russia. [ 3 ] Evans championed recruiting minority applicants to UNC-Chapel Hill, [ 4 ] while on the national front he served on committees of the American Chemical Society , the National Institutes of Health , and the National Science Foundation , and was chair of the U.S. National Committee of the International Union of Pure and Applied Chemistry . [ 3 ] [ 2 ] [ 4 ] He also served on a council that advised the National Institute of General Medical Sciences . [ 2 ] Evans married Tommie Johnson in 1967. They had two children. Evans died on March 24, 2001, in Chapel Hill . The Slayton A. Evans Jr. Memorial Lecture Fund [ 4 ] and the Slayton Evans Research Award were both named in his honor posthumously. [ 2 ]
https://en.wikipedia.org/wiki/Slayton_A._Evans_Jr.
The sleep cycle is an oscillation between the slow-wave and REM (paradoxical) phases of sleep . It is sometimes called the ultradian sleep cycle , sleep–dream cycle , or REM-NREM cycle , to distinguish it from the circadian alternation between sleep and wakefulness . In humans, this cycle takes 70 to 110 minutes (90 ± 20 minutes). [ 1 ] Within the sleep of adults and infants there are cyclic fluctuations between quiet and active sleep. These fluctuations may persist during wakefulness as rest-activity cycles but are less easily discerned. [ 2 ] Electroencephalography shows the timing of sleep cycles by virtue of the marked distinction in brainwaves manifested during REM and non-REM sleep. Delta wave activity, correlating with slow-wave (deep) sleep, in particular shows regular oscillations throughout a good night's sleep. Secretions of various hormones , including renin , growth hormone , and prolactin , correlate positively with delta-wave activity, while secretion of thyroid-stimulating hormone correlates inversely. [ 3 ] Heart rate variability , well known to increase during REM, predictably also correlates inversely with delta-wave oscillations over the ~90-minute cycle. [ 4 ] To determine which stage of sleep a sleeping subject is in, electroencephalography is combined with other measurements that aid this differentiation. EMG ( electromyography ) is a crucial method to distinguish between sleep phases: for example, a decrease of muscle tone is in general a characteristic of the transition from wake to sleep, [ 5 ] [ 6 ] and during REM sleep, there is a state of muscle atonia (paralysis), resulting in an absence of signals in the EMG. [ 5 ] EOG (electrooculography) , the measurement of eye movements, is the third method used in sleep architecture measurement; [ 7 ] for example, REM sleep, as the name indicates, is characterized by a rapid eye movement pattern, visible thanks to the EOG. [ 8 ] Moreover, methods based on cardiorespiratory parameters are also effective in the analysis of sleep architecture if they are combined with the other aforementioned measurements (electroencephalography, electrooculography and electromyography). [ 9 ] Homeostatic functions, especially thermoregulation , occur normally during non-REM sleep, but not during REM sleep. Thus, during REM sleep, body temperature tends to drift away from its mean level, and during non-REM sleep, to return to normal. Alternation between the stages therefore maintains body temperature within an acceptable range. [ 10 ] In humans, the transition between non-REM and REM is abrupt; in other animals, it is less so. [ 11 ] Researchers have proposed different models to elucidate the undoubtedly complex rhythm of electrochemical processes that result in the regular alternation of REM and NREM sleep. Monoamines are active during NREMS but not REMS, whereas acetylcholine is more active during REMS. The reciprocal interaction model proposed in the 1970s suggested a cyclic give-and-take between these two systems. More recent theories, such as the "flip-flop" model proposed in the 2000s, include the regulatory role of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA). [ 12 ] The standard figure given for the average length of the sleep cycle in an adult human is 90 minutes. N1 (NREM stage 1) is the stage in which the person is drowsy and transitioning from wakefulness to sleep. Brain waves and muscle activity start to decrease at this stage. N2 is when the person experiences a light sleep. Eye movement has stopped by this time.
Brain wave frequency and muscle tone decrease. Heart rate and body temperature also go down. N3, or even N4 in older classifications, is the stage from which it is most difficult to be awakened. Every part of the body is now relaxed, and breathing, blood pressure and body temperature are reduced. The National Sleep Foundation discusses the different stages of NREM sleep and their importance. They describe REM sleep as "A unique state, in which dreams usually occur. The brain is awake and body paralyzed." This unique stage usually occurs when the person dreams. [ 13 ] [ 11 ] The figure of 90 minutes for the average length of a sleep cycle was popularized by Nathaniel Kleitman around 1963. [ 14 ] Other sources give 90–110 minutes [ 3 ] or 80–120 minutes. [ 4 ] In infants , the sleep cycle lasts about 50–60 minutes; average length increases as the human grows into adulthood. In cats , the sleep cycle lasts about 30 minutes, though it is about 12 minutes in rats and up to 120 minutes in elephants (in this regard, the ontogeny of the sleep cycle appears proportionate with metabolic processes, which vary in proportion with organism size; however, shorter sleep cycles detected in some elephants complicate this theory). [ 11 ] [ 13 ] [ 15 ] The cycle can be defined as lasting from the end of one REM period to the end of the next, [ 13 ] or from the beginning of REM, or from the beginning of non-REM stage 2 (the decision of how to mark the periods makes a difference for research purposes, because of the unavoidable inclusion or exclusion of the night's first NREM or its final REM phase if directly preceding awakening ). [ 14 ] A 7–8-hour sleep probably includes five cycles, the middle two of which tend to be longer than the first and the fourth. [ 14 ] REM takes up more of the cycle as the night goes on. [ 11 ] [ 16 ] Unprovoked awakening occurs most commonly during or after a period of REM sleep, as body temperature is rising. [ 17 ] Ernest Hartmann discovered in 1968 that humans seem to continue a roughly 90-minute ultradian rhythm throughout a 24-hour day, whether they are asleep or awake. [ 13 ] According to this hypothesis, during the period of this cycle corresponding with REM, people tend to daydream more and show less muscle tone . [ 18 ] Kleitman and others following have referred to this rhythm as the basic rest–activity cycle , of which the "sleep cycle" would be a manifestation. [ 14 ] [ 19 ] A difficulty for this theory is the fact that a long non-REM phase almost always precedes REM, regardless of when in the cycle a person falls asleep. [ 14 ] The sleep cycle has proven resistant to systematic alteration by drugs . Although some drugs shorten REM periods, they do not abolish the underlying rhythm. Deliberate REM deprivation shortens the cycle temporarily, as the brain moves into REM sleep more readily (the " REM rebound ") in an apparent correction for the deprivation. [ 13 ] There are various methods to control the alterations of sleep cycles.
https://en.wikipedia.org/wiki/Sleep_cycle
Sleep is a biological requirement for all animals that have a brain, except for ones which have only a rudimentary brain. Therefore basal species do not sleep, since they do not have brains. It has been observed in mammals, birds, reptiles, amphibians, fish, and, in some form, in insects. The internal circadian clock promotes sleep at night for diurnal organisms (such as humans) and in the day for nocturnal organisms (such as rats ). Sleep patterns vary widely among species, with some foregoing sleep for extended periods and some engaging in unihemispheric sleep , in which one brain hemisphere sleeps while the other remains awake. Sleep can follow a physiological or behavioral definition. In the physiological sense, sleep is a state characterized by reversible unconsciousness, special brainwave patterns, sporadic eye movement, loss of muscle tone (possibly with some exceptions; see below regarding the sleep of birds and of aquatic mammals), and a compensatory increase following deprivation of the state, this last known as sleep homeostasis (i.e., the longer a waking state lasts, the greater the intensity and duration of the sleep state thereafter). [ 1 ] [ 2 ] In the behavioral sense, sleep is characterized by minimal movement, non-responsiveness to external stimuli (i.e. increased sensory threshold ), the adoption of a typical posture, and the occupation of a sheltered site, all of which is usually repeated on a 24-hour basis. [ 3 ] The physiological definition applies well to birds and mammals, but in other animals whose brains are not as complex, the behavioral definition is more often used. In very simple animals, behavioral definitions of sleep are the only ones possible, and even then the behavioral repertoire of the animal may not be extensive enough to allow distinction between sleep and wakefulness. [ 4 ] Sleep is quickly reversible, as opposed to hibernation or coma , and sleep deprivation is followed by longer or deeper rebound sleep. If sleep were not essential, one would expect to find These symptoms are not seen in complex animals, and sleep is thus considered necessary to them. Sleep helps the body and mind to feel rested. Findings show that if rats do not get sleep, they die in a few weeks. Despite having enough food, their appetite tends to decrease resulting in weight loss and eventually death. [ 5 ] Outside of a few basal animals that have no brain or a very simple one, no animals have been found to date that satisfy any of these criteria. [ 6 ] While some varieties of shark, such as great whites and hammerheads , must remain in motion at all times to move oxygenated water over their gills, it is possible they still sleep one cerebral hemisphere at a time as marine mammals do. However, it remains to be shown definitively whether any fish is capable of unihemispheric sleep . [ 7 ] Sleep as a phenomenon appears to have very old evolutionary roots. Unicellular organisms do not necessarily "sleep", although many of them have pronounced circadian rhythms . The fresh-water polyp Hydra vulgaris and the jellyfish Cassiopea are among the most primitive organisms in which sleep-like states have been observed. [ 8 ] [ 9 ] Observing sleep states in jellyfish provides evidence that sleep states do not require that an animal have a brain or central nervous system. [ 10 ] The nematode C. elegans is another primitive organism that appears to require sleep. 
Here, a lethargus phase occurs in short periods preceding each moult , which may indicate that sleep is primitively connected to developmental processes. The results of Raizen et al. [ 11 ] furthermore suggest that sleep is necessary for changes in the neural system. Bees have some of the most complex sleep states amongst insects. [ 12 ] Decades of research show that insects do sleep, and that this resembles mammalian and avian sleep. For a long time, however, sleep scientists did not accept these results, and there was wide agreement that insects did not experience sleep. It took the gene expression studies of Hendricks et al. (2000) and Shaw et al. (2000), [ 13 ] [ 14 ] which showed orthology between mammals and the fruit fly Drosophila melanogaster , for this to finally be accepted. The electrophysiological study of sleep in small invertebrates is complicated. Insects go through circadian rhythms of activity and passivity, but some do not seem to have a homeostatic sleep need. Insects do not seem to exhibit REM sleep. However, fruit flies appear to sleep, and systematic disturbance of that state leads to cognitive disabilities . [ 15 ] There are several methods of measuring cognitive functions in fruit flies. A common method is to let the flies choose whether they want to fly through a tunnel that leads to a light source, or through a dark tunnel. Normally, flies are attracted to light. But if sugar is placed at the end of the dark tunnel, and something the flies dislike is placed at the end of the light tunnel, the flies will eventually learn to fly towards darkness rather than light. Flies deprived of sleep require a longer time to learn this and also forget it more quickly. If an arthropod is experimentally kept awake longer than it is used to, then its coming rest period will be prolonged. In cockroaches , that rest period is characterized by the antennae being folded down and by a decreased sensitivity to external stimuli. [ 16 ] Sleep has been described in crayfish , too, characterized by passivity and increased thresholds for sensory stimuli as well as changes in the EEG pattern, markedly differing from the patterns found in crayfish when they are awake. [ 17 ] In honeybees, it has been shown that they use sleep to store long-term memories. [ 18 ] A sleep-like state has been described in jumping spiders too, as well as regularly occurring bouts of retinal movements that suggest an REM sleep–like state. [ 19 ] Sleeping cuttlefish and octopuses also show signs of REM sleep–like behaviors. [ 20 ] [ 21 ] It has been suggested that the octopus has a two-staged sleep similar to that of some vertebrates. The "quiet sleep" stage usually involves behaviors such as closed eyes, a flat body posture, and a white skin pattern, and usually lasts around 60 minutes. After the quiet stage, the octopus moves into an "active sleep" stage, which lasts about one minute. In the active sleep stage, the octopus shows more eye and body movements as well as an increased breathing rate. Although the most obvious color changes are reported during the active sleep stage, the octopus also shows very brief, fast "color flashes" during the quiet sleep stage. [ 22 ] Sleep in fish is the subject of ongoing scientific research. [ 23 ] [ 24 ] Typically fish exhibit periods of inactivity, but show no significant reactions to deprivation of this condition. [ inconsistent ] Some species that always live in shoals or that swim continuously (because of a need for ram ventilation of the gills, for example) are suspected never to sleep.
[ 25 ] There is also doubt about certain blind species that live in caves . [ 26 ] Other fish seem to sleep, however. For example, zebrafish , [ 27 ] [ 28 ] tilapia , [ 29 ] tench , [ 30 ] brown bullhead , [ 31 ] and swell shark [ 32 ] become motionless and unresponsive at night (or by day, in the case of the swell shark); Spanish hogfish and blue-headed wrasse can even be lifted by hand all the way to the surface without evoking a response. [ 33 ] Studies show that some fish (for example rays and sharks ) have unihemispheric sleep, which means they put half their brain to sleep while the other half still remains active and they swim while they are sleeping. [ 7 ] [ 34 ] A 1961 observational study of approximately 200 species in European public aquaria reported many cases of apparent sleep. [ 35 ] On the other hand, sleep patterns are easily disrupted and may even disappear during periods of migration, spawning, and parental care. [ 36 ] Mammals, birds and reptiles evolved from amniotic ancestors, the first vertebrates with life cycles independent of water. The fact that birds and mammals are the only known animals to exhibit REM and NREM sleep indicates a common trait before divergence. [ 37 ] However, recent evidence of REM-like sleep in fish suggests this divergence may have occurred much earlier than previously thought. [ 38 ] Up to this point, reptiles were considered the most logical group to investigate the origins of sleep. Daytime activity in reptiles alternates between basking and short bouts of active behavior, which has significant neurological and physiological similarities to sleep states in mammals. It is proposed that REM sleep evolved from short bouts of motor activity in reptiles, while slow-wave sleep (SWS) evolved from their basking state, which shows similar slow-wave EEG patterns. [ 39 ] Reptiles have quiescent periods similar to mammalian sleep, and a decrease in electrical activity in the brain has been registered when the animals have been asleep. However, the EEG pattern in reptilian sleep differs from what is seen in mammals and other animals. [ 4 ] In reptiles, sleep time increases following sleep deprivation , and stronger stimuli are needed to awaken the animals when they have been deprived of sleep as compared to when they have slept normally. This suggests that the sleep which follows deprivation is compensatorily deeper. [ 40 ] In 2016, a study [ 41 ] reported the existence of REM- and NREM-like sleep stages in the Australian dragon Pogona vitticeps . Amphibians have periods of inactivity but show high vigilance (receptivity to potentially threatening stimuli) in this state. Like some birds and aquatic mammals, crocodilians are also capable of unihemispheric sleep . [ 42 ] There are significant similarities between sleep in birds and sleep in mammals, [ 43 ] which is one of the reasons for the idea that sleep in higher animals with its division into REM and NREM sleep has evolved together with warm-bloodedness . [ 44 ] Birds compensate for sleep loss in a manner similar to mammals, by deeper or more intense slow-wave sleep (SWS). [ 45 ] Birds have both REM and NREM sleep, and the EEG patterns of both have similarities to those of mammals. Different birds sleep different amounts, but the associations seen in mammals between sleep and variables such as body mass, brain mass, relative brain mass, basal metabolism and other factors (see below) are not found in birds. 
The only clear explanatory factor for the variations in sleep amounts for birds of different species is that birds who sleep in environments where they are exposed to predators have less deep sleep than birds sleeping in more protected environments. [ 46 ] Birds do not necessarily exhibit sleep debt, but a peculiarity that birds share with aquatic mammals, and possibly also with certain species of lizards (opinions differ about that last point [ clarification needed ] ), is the phenomenon of unihemispheric slow-wave sleep ; that is, the ability to sleep with one cerebral hemisphere at a time, while keeping the other hemisphere awake. [ 47 ] When just one hemisphere is sleeping, only the contralateral eye will be shut; that is, when the right hemisphere is asleep, the left eye will be shut, and vice versa. [ 48 ] The distribution of sleep between the two hemispheres and the amount of unihemispheric sleep are determined both by which part of the brain has been the most active during the previous period of wake [ 49 ] —that part will sleep the deepest—and by the level of risk of attacks from predators. Ducks near the perimeter of the flock are likely to be the ones that first will detect predator attacks. These ducks have significantly more unihemispheric sleep than those who sleep in the middle of the flock, and they react to threatening stimuli seen by the open eye. [ 50 ] Opinions partly differ about sleep in migratory birds . [ citation needed ] The controversy is mainly about whether they can sleep while flying or not. [ citation needed ] Theoretically, certain types of sleep could be possible while flying, but technical difficulties preclude the recording of brain activity in birds while they are flying. Mammals have wide diversity in sleep phenomena. Generally, they go through periods of alternating non-REM and REM sleep, but these manifest differently. Horses and other herbivorous ungulates can sleep while standing, but must necessarily lie down for REM sleep (which causes muscular atony ) for short periods. Giraffes, for example, only need to lie down for REM sleep for a few minutes at a time. Bats sleep while hanging upside down. Male armadillos get erections during non-REM sleep, and the inverse is true in rats. [ 51 ] Early mammals engaged in polyphasic sleep, dividing sleep into multiple bouts per day. Higher daily sleep quotas and shorter sleep cycles in polyphasic species as compared to monophasic species, suggest that polyphasic sleep may be a less efficient means of attaining sleep's benefits. Small species with higher basal metabolic rate (BMR) may therefore have less efficient sleep patterns. It follows that the evolution of monophasic sleep may hitherto be an unknown advantage of evolving larger mammalian body sizes and therefore lower BMR. [ 52 ] Sleep is sometimes thought to help conserve energy, though this theory is not fully adequate as it only decreases metabolism by about 5–10%. [ 53 ] Additionally it is observed that mammals require sleep even during the hypometabolic state of hibernation, in which circumstance it is actually a net loss of energy as the animal returns from hypothermia to euthermia in order to sleep. [ 54 ] Nocturnal animals have higher body temperatures, greater activity, rising serotonin, and diminishing cortisol during the night—the inverse of diurnal animals. Nocturnal and diurnal animals both have increased electrical activity in the suprachiasmatic nucleus , and corresponding secretion of melatonin from the pineal gland, at night. 
[ 55 ] Nocturnal mammals, which tend to stay awake at night, have higher melatonin at night just like diurnal mammals do. [ 56 ] And, although removing the pineal gland in many animals abolishes melatonin rhythms, it does not stop circadian rhythms altogether—though it may alter them and weaken their responsiveness to light cues. [ 57 ] Cortisol levels in diurnal animals typically rise throughout the night, peak in the awakening hours , and diminish during the day. [ 58 ] [ 59 ] In diurnal animals, sleepiness increases during the night. Different mammals sleep different amounts. Some, such as bats , sleep 18–20 hours per day, while others, including giraffes , sleep only 3–4 hours per day. There can be big differences even between closely related species. There can also be big differences between laboratory and field studies: for example, researchers in 1983 reported that captive sloths slept nearly 16 hours a day, but in 2008, when miniature neurophysiological recorders were developed that could be affixed to wild animals, sloths in nature were found to sleep only 9.6 hours a day. [ 60 ] [ 61 ] As with birds, the main rule for mammals (with certain exceptions, see below) is that they have two essentially different stages of sleep: REM and NREM sleep (see above). Mammals' feeding habits are associated with their sleep length. The daily need for sleep is highest in carnivores , lower in omnivores and lowest in herbivores . Humans sleep less than many other omnivores but otherwise not unusually much or unusually little in comparison with other mammals. [ 62 ] Many herbivores, like Ruminantia (such as cattle), spend much of their wake time in a state of drowsiness, [ further explanation needed ] which perhaps could partly explain their relatively low need for sleep. In herbivores, an inverse correlation is apparent between body mass and sleep length; big mammals sleep less than smaller ones. This correlation is thought to explain about 25% of the difference in sleep amount between different mammals. [ 62 ] Also, the length of a particular sleep cycle is associated with the size of the animal; on average, bigger animals will have sleep cycles of longer durations than smaller animals. Sleep amount is also coupled to factors like basal metabolism , brain mass, and relative brain mass. [ citation needed ] The duration of sleep among species is also directly related to BMR. Rats, which have a high BMR, sleep for up to 14 hours a day, whereas elephants and giraffes, which have lower BMRs, sleep only 2–4 hours per day. [ 63 ] It has been suggested that mammalian species which invest in longer sleep times are investing in the immune system, as species with the longer sleep times have higher white blood cell counts. [ 64 ] Mammals born with well-developed regulatory systems, such as the horse and giraffe, tend to have less REM sleep than the species which are less developed at birth, such as cats and rats. [ 65 ] This appears to echo the greater need for REM sleep among newborns than among adults in most mammal species. Many mammals sleep for a large proportion of each 24-hour period when they are very young. [ 66 ] The giraffe only sleeps 2 hours a day in about 5–15 minute sessions. Koalas are the longest sleeping-mammals, about 20–22 hours a day. However, killer whales and some other dolphins do not sleep during the first month of life. [ 67 ] Instead, young dolphins and whales frequently take rests by pressing their body next to their mother's while she swims. 
As the mother swims she is keeping her offspring afloat to prevent them from drowning. This allows young dolphins and whales to rest, which will help keep their immune system healthy; in turn, protecting them from illnesses. [ 68 ] During this period, mothers often sacrifice sleep for the protection of their young from predators. However, unlike other mammals, adult dolphins and whales are able to go without sleep for a month. [ 68 ] [ 69 ] Reasons given for the wide variations include the fact that mammals "that nap in hiding, like bats or rodents tend to have longer, deeper snoozes than those on constant alert." Lions, which have little fear of predators also have relatively long sleep periods, while elephants have to eat most of the time to support their huge bodies. Little brown bats conserve their energy except for the few hours each night when their insect prey are available, and platypuses eat a high energy crustacean diet and, therefore, probably do not need to spend as much time awake as many other mammals. [ 72 ] A study conducted by Datta indirectly supports the idea that memory benefits from sleep. [ 73 ] A box was constructed wherein a single rat could move freely from one end to the other. The bottom of the box was made of a steel grate. A light would shine in the box accompanied by a sound. After a five-second delay, an electrical shock would be applied. Once the shock commenced, the rat could move to the other end of the box, ending the shock immediately. The rat could also use the five-second delay to move to the other end of the box and avoid the shock entirely. The length of the shock never exceeded five seconds. This was repeated 30 times for half the rats. The other half, the control group, was placed in the same trial, but the rats were shocked regardless of their reaction. After each of the training sessions, the rat would be placed in a recording cage for six hours of polygraphic recordings. This process was repeated for three consecutive days. During the posttrial sleep recording session, rats spent 25.47% more time in REM sleep after learning trials than after control trials. [ 73 ] An observation of the Datta study is that the learning group spent 180% more time in SWS than did the control group during the post-trial sleep-recording session. [ 74 ] This study shows that after spatial exploration activity, patterns of hippocampal place cells are reactivated during SWS following the experiment. Rats were run through a linear track using rewards on either end. The rats would then be placed in the track for 30 minutes to allow them to adjust (PRE), then they ran the track with reward-based training for 30 minutes (RUN), and then they were allowed to rest for 30 minutes. During each of these three periods, EEG data were collected for information on the rats' sleep stages. The mean firing rates of hippocampal place cells during prebehavior SWS (PRE) and three ten-minute intervals in postbehavior SWS (POST) were calculated by averaging across 22 track-running sessions from seven rats. The results showed that ten minutes after the trial RUN session, there was a 12% increase in the mean firing rate of hippocampal place cells from the PRE level. After 20 minutes, the mean firing rate returned rapidly toward the PRE level. The elevated firing of hippocampal place cells during SWS after spatial exploration could explain why there were elevated levels of slow-wave sleep in Datta's study, as it also dealt with a form of spatial exploration. 
In rats, sleep deprivation causes weight loss and reduced body temperature. Rats kept awake indefinitely develop skin lesions, hyperphagia , loss of body mass, hypothermia , and, eventually, fatal sepsis . [ 75 ] Sleep deprivation also hinders the healing of burns on rats. [ 76 ] When compared with a control group , sleep-deprived rats' blood tests indicated a 20% decrease in white blood cell count, a significant change in the immune system. [ 77 ] A 2014 study found that depriving mice of sleep increased cancer growth and dampened the immune system's ability to control cancers. The researchers found higher levels of M2 tumor-associated macrophages and TLR4 molecules in the sleep-deprived mice and proposed this as the mechanism for the increased susceptibility of the mice to cancer growth. M2 cells suppress the immune system and encourage tumour growth. TLR4 molecules are signalling molecules in the activation of the immune system. [ 78 ] Since monotremes (egg-laying mammals) are considered to represent one of the evolutionarily oldest groups of mammals, they have been subject to special interest in the study of mammalian sleep. As early studies of these animals could not find clear evidence for REM sleep, it was initially assumed that such sleep did not exist in monotremes, but developed after the monotremes branched off from the rest of the mammalian evolutionary line and became a separate, distinct group. However, EEG recordings of the brain stem in monotremes show a firing pattern that is quite similar to the patterns seen in REM sleep in higher mammals. [ 79 ] [ 80 ] In fact, the largest amount of REM sleep known in any animal is found in the platypus . [ 81 ] REM electrical activation does not extend at all to the forebrain in platypuses, suggesting that they do not dream. The average sleep time of the platypus in a 24-hour period is said to be as long as 14 hours, though this may be because of their high-calorie crustacean diet. [ 72 ] The consequences of falling into a deep sleep for marine mammalian species can be suffocation and drowning, or becoming easy prey for predators. Thus, dolphins, whales, and pinnipeds (seals) engage in unihemispheric sleep while swimming, which allows one brain hemisphere to remain fully functional while the other goes to sleep. The hemisphere that is asleep alternates, so that both hemispheres can be fully rested. [ 68 ] [ 82 ] Just like terrestrial mammals, pinnipeds that sleep on land fall into a deep sleep, and both hemispheres of their brain shut down and are in full sleep mode. [ 83 ] [ 84 ] Aquatic mammal infants do not have REM sleep; [ 85 ] REM sleep increases as they age. Seals and whales are among the aquatic mammals. Earless seals and eared seals have solved the problem of sleeping in water via two different methods. Eared seals, like whales, show unihemispheric sleep. The sleeping half of the brain does not awaken when they surface to breathe. When one half of a seal's brain shows slow-wave sleep, the flippers and whiskers on its opposite side are immobile. While in the water, these seals have almost no REM sleep and may go a week or two without it. As soon as they move onto land, they switch to bilateral REM sleep and NREM sleep comparable to land mammals, surprising researchers with their lack of "recovery sleep" after missing so much REM. Earless seals sleep bihemispherically like most mammals, under water, hanging at the water surface, or on land.
They hold their breath while sleeping under water, and wake up regularly to surface and breathe. They can also hang with their nostrils above water and in that position have REM sleep, but they do not have REM sleep underwater. REM sleep has been observed in the pilot whale , a species of dolphin. [ 86 ] Whales do not seem to have REM sleep, nor do they seem to have any problems because of this. One reason REM sleep might be difficult in marine settings is the fact that REM sleep causes muscular atony; that is to say, a functional paralysis of skeletal muscles that can be difficult to combine with the need to breathe regularly. [ 62 ] [ 87 ] Conscious breathing cetaceans sleep but cannot afford to be unconscious for long, because they may drown . While knowledge of sleep in wild cetaceans is limited, toothed cetaceans in captivity have been recorded to exhibit unihemispheric slow-wave sleep (USWS), which means they sleep with one side of their brain at a time, so that they may swim, breathe consciously and avoid both predators and social contact during their period of rest. [ 88 ] A 2008 study found that sperm whales sleep in vertical postures just under the surface in passive shallow 'drift-dives', generally during the day, during which whales do not respond to passing vessels unless they are in contact, leading to the suggestion that whales possibly sleep during such dives. [ 89 ] Unihemispheric sleep refers to sleeping with only a single cerebral hemisphere . The phenomenon has been observed in birds and aquatic mammals , [ 90 ] as well as in several reptilian species (the latter being disputed: many reptiles behave in a way which could be construed as unihemispheric sleeping, but EEG studies have given contradictory results). Reasons for the development of unihemispheric sleep are likely that it enables the sleeping animal to receive stimuli—threats, for instance—from its environment, and that it enables the animal to fly or periodically surface to breathe when immersed in water. Only NREM sleep exists unihemispherically, and there seems to exist a continuum in unihemispheric sleep regarding the differences in the hemispheres: in animals exhibiting unihemispheric sleep, conditions range from one hemisphere being in deep sleep with the other hemisphere being awake to one hemisphere sleeping lightly with the other hemisphere being awake. If one hemisphere is selectively deprived of sleep in an animal exhibiting unihemispheric sleep (one hemisphere is allowed to sleep freely but the other is awoken whenever it falls asleep), the amount of deep sleep will selectively increase in the hemisphere that was deprived of sleep when both hemispheres are allowed to sleep freely. The neurobiological background for unihemispheric sleep is still unclear. In experiments on cats in which the connection between the left and the right halves of the brain stem has been severed, the brain hemispheres show periods of a desynchronized EEG, during which the two hemispheres can sleep independently of each other. [ 91 ] In these cats, the state where one hemisphere slept NREM and the other was awake, as well as one hemisphere sleeping NREM with the other state sleeping REM were observed. The cats were never seen to sleep REM sleep with one hemisphere while the other hemisphere was awake. This is in accordance with the fact that REM sleep, as far as is currently known, does not occur unihemispherically. The fact that unihemispheric sleep exists has been used as an argument for the necessity of sleep. 
[ 92 ] It appears that no animal has developed an ability to go without sleep altogether. Animals that hibernate are in a state of torpor , which differs from sleep. Hibernation markedly reduces the need for sleep, but does not remove it. Some hibernating animals end their hibernation a couple of times during the winter so that they can sleep. [ 54 ] Animals waking up from hibernation often go into rebound sleep because of the lack of sleep during the hibernation period. Although they are well rested and conserve energy during hibernation, they still need sleep for something else. [ 54 ]
https://en.wikipedia.org/wiki/Sleep_in_animals
A sleeper wall may refer to the following types of walls: This architectural element –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sleeper_wall
Incapacitating agent is a chemical or biological agent which renders a person unable to harm themselves or others, regardless of consciousness . [ 1 ] Lethal agents are primarily intended to kill, but incapacitating agents can also kill if administered in a potent enough dose, or in certain scenarios. The term "incapacitation," when used in a general sense, is not equivalent to the term "disability" as used in occupational medicine and denotes the inability to perform a task because of a quantifiable physical or mental impairment. In this sense, any of the chemical warfare agents may incapacitate a victim; however, by the military definition of this type of agent, incapacitation refers to impairments that are temporary and nonlethal. Thus, riot-control agents are incapacitating because they cause temporary loss of vision due to blepharospasm , but they are not considered military incapacitants because the loss of vision does not last long. Although incapacitation may result from physiological changes such as mucous membrane irritation, diarrhea , or hyperthermia , the term "incapacitating agent" as militarily defined refers to a compound that produces temporary and nonlethal impairment of military performance by virtue of its psychobehavioral or CNS effects. In biological warfare , a distinction is also made between bio-agents as Lethal Agents (e.g., Bacillus anthracis , Francisella tularensis , Botulinum toxin ) or Incapacitating Agents (e.g., Brucella suis , Coxiella burnetii , Venezuelan equine encephalitis virus , Staphylococcal enterotoxin B ). [ 2 ] The use of chemicals to induce altered states of mind in an adversary dates back to antiquity and includes the use of plants of the nightshade family (Solanaceae) , such as the thornapple ( Datura stramonium ) , that contain various combinations of anticholinergic alkaloids . The use of nonlethal chemicals to render an enemy force incapable of fighting dates back to at least 600 B.C. when Solon's soldiers threw hellebore roots into streams supplying water to enemy troops, who then developed diarrhea. [ 3 ] In 184 B.C., Hannibal 's army used belladonna plants to induce disorientation, [ 4 ] [ 5 ] and the Bishop of Münster in A.D. 1672 attempted to use belladonna-containing grenades in an assault on the city of Groningen . [ 6 ] In 1881, members of a French railway surveying expedition crossing Tuareg territory in North Africa ate dried dates that tribesmen had apparently deliberately contaminated with Egyptian henbane ( Hyoscyamus muticus , or H. falezlez ), to devastating effect. [ 7 ] In 1908, 200 French soldiers in Hanoi became delirious and experienced hallucinations after being poisoned with a related plant . More recently, accusations of Soviet use of incapacitating agents internally and in Afghanistan were never substantiated. Following World War II , the United States military investigated a wide range of possible nonlethal, psychobehavioral, chemical incapacitating agents to include psychedelic indoles such as lysergic acid diethylamide (LSD-25) and the tetrahydrocannabinol derivative DMHP , certain tranquilizers, as well as several glycolate anticholinergics. One of the anticholinergic compounds, 3-quinuclidinyl benzilate , was assigned the NATO code "BZ" and was weaponized beginning in the 1960s for possible battlefield use. 
(Although BZ figured prominently in the plot of the 1990 movie, Jacob's Ladder , as the compound responsible for hallucinations and violent deaths in a fictitious American battalion in Vietnam , this agent never saw operational use.) Destruction of American stockpiles of BZ began in 1988 and is now complete. By 1958, a search of the tropics for venomous animal species, in order to isolate and synthesize their toxins, was prioritized. For example, snake venoms were studied, and the College of Medical Evangelists was under contract to isolate puffer fish poison. The New England Institute for Medical Research and Fort Detrick were studying the properties and biological activity of the Botulinum toxin molecule. The U.S. Army Chemical Warfare Laboratories were isolating shellfish toxin and trying to obtain its structure. [ 8 ] A Central Intelligence Agency Project Artichoke document reads: "Not all viruses have to be lethal ... the objective includes those that act as short-term and long-term incapacitants." [ 9 ] One of the most urgent Chemical Corps projects in the period 1960 to 1961 was the effort to achieve a standard chemical incapacitating agent. For several years attention had been fixed on the military potentialities of psychochemicals of various types. Research on new agents tended to concentrate on viral and rickettsial diseases. A whole range of exotic virus diseases prevalent in tropical areas came within the screening program in 1960–61, with major effort directed at increased first-hand knowledge of so-called arboviruses (i.e., arthropod-borne viruses). The importance of epidemiological studies in connection with this area of endeavor was being emphasized. [ citation needed ] Pine Bluff Arsenal was a rickettsiae and virus production center, and biological agents against wheat and rice fields were tested in several locations in the southern U.S. as well as in Okinawa. [ 10 ] The concept of "humane warfare" with widespread use of incapacitating or deliriant drugs such as LSD or Agent BZ to stun an enemy, capture them alive, or separate friend from foe had been available in locations such as Berlin since the 1950s; an initial focus of US CBW development was the offensive use of diseases, drugs, and substances that could completely incapacitate an enemy for several days, with some lesser possibility of death, using a variety of chemical, biological, radiological, or toxin agents. [ 11 ] The US Army Assistant Chief of Staff for Intelligence (ACSI) authorized operational field testing of LSD in interrogations in the early 1960s. The first field tests were conducted in Europe by an Army Special Purpose Team (SPT) during May to August 1961 in tests known as Project THIRD CHANCE . The second series of field tests, Project DERBY HAT , was conducted by an Army Special Purpose Team in the Far East during August to November 1962. [ 12 ] A study of possible uses of migratory birds in germ warfare was funded through Camp Detrick for years using the Smithsonian as a cover. Government documents have linked the Smithsonian to the CIA's mind control program known as MKULTRA . The CIA was interested in bird migration patterns for CBW research under MKULTRA, which included a Subproject 139, designated "Bird Disease Studies", at Pennsylvania State University . An agent's purchase of a copy of the book Birds of Britain, Europe, is recorded as part of what was described in a financial accounting of the MKULTRA program as a continuous project on bird survey in special areas.
[ 13 ] Sampling of native migratory organisms with a focus on birds provided to researchers the natural habitat of disease causing fungus, viruses, and bacteria as well as the established (or potential) vectors for them. The sampling also provided exotic tropical viruses and toxins from the various organisms collected on both land and sea. The studies, including the Pacific Ocean Biological Survey Program (POBSP) were conducted by the Smithsonian Institution and Project SHAD crews on Pacific islands and atolls. The "bird cruises" were subsequently found to be a U.S. Army cover for the prelude to chemical, biological, and entomological warfare experiments related to Deseret Test Center , Project 112 , and Project SHAD . [ 14 ] [ 15 ] [ 16 ] A U.S. War Departments report notes that "in addition to the results of human experimentation much data is available from the Japanese experiments on animals and food crops." [ 17 ] German researchers have found that records of the Entomology Institute at the Dachau concentration camp show that under orders of Schutzstaffel (SS) leader Heinrich Himmler , the Nazis began studying mosquitoes as an offensive biological warfare vector against humans in 1942. It was generally thought by historians that the Nazis only intended ever to use biological weapons defensively. [ 18 ] Project 112 included objectives such as “the feasibility of an offshore release of Aedes aegypti mosquitoes as a vector for infectious diseases,” and “the feasibility of a biological attack against an island complex.” [ 19 ] "The feasibility of area coverage with adedes aegypti mosquitoes was based on the Avon Park, Florida mosquito trials." [ 20 ] Several CIA documents, and a 1975 Congressional committee, revealed that several locations in Florida, as well as Avon Park, hosted experiments with mosquito-borne viruses and other biological substances. Formerly top-secret documents related to the CIA's Project MKNAOMI prove that the mosquitoes used in Avon Park were the Aedes aegypti type. "A 1978 Pentagon publication, entitled Biological Warfare: Secret Testing & Volunteers, reveals that the Army's Chemical Corps and Special Operations and Projects Divisions at Fort Detrick conducted 'tests' similar to the Avon Park experiments but the bulk of the documentation concerning this highly classified and covert work is still held secret by the Pentagon." [ 9 ] Sleeping gas is an oneirogenic general anaesthetic that is used to put subjects into a state in which they are not conscious of what is happening around them. Most sleeping gases have undesirable side effects, or are effective at doses that approach toxicity . It is primarily used for major surgeries and to render non-dangerous animals unconscious for research purposes. Examples of modern volatile anaesthetics that may be considered sleeping gases are BZ , [ 21 ] halothane vapour (Fluothane), [ 22 ] methyl propyl ether (Neothyl), methoxyflurane (Penthrane), [ 23 ] and the undisclosed fentanyl derivative delivery system used by the FSB in the Moscow theater hostage crisis . [ 24 ] Possible side effects might not prevent use of sleeping gas by criminals willing to murder , or carefully control the dose on a single already sleepy individual. There are reports of thieves spraying sleeping gases on campers, [ 25 ] or in train compartments in some parts of Europe. [ 26 ] Alarms are sold to detect such attacks and alert the victim. [ 25 ] There is one documented case of incapacitating agents being used in recent years. 
In 2002, Chechen terrorists took a large number of hostages in the Moscow theatre siege and threatened to blow up the entire theatre if any attempt was made to break the siege. An incapacitating agent was used to disable the terrorists while the theatre was stormed by special forces. However, the agent, whose identity was unknown at the time, also caused many of the hostages to die. The terrorists were rendered unconscious, but roughly 15% of the 800 people exposed were killed by the gas. [ 27 ] The situation was not helped by the fact that the authorities kept the nature of the incapacitating agent secret from the doctors trying to treat its victims. At the time, the gas was reported to be an unknown incapacitating agent called "Kolokol-1". The Russian Health Minister Yuri Shevchenko later stated that the agent used was a fentanyl derivative. Scientists at Britain's chemical and biological defense labs at Porton Down analyzed residue from the clothing of three hostages and the urine of one hostage rescued during the Moscow theater hostage crisis and found two chemical derivatives of fentanyl: remifentanil and carfentanil. [ 28 ] In a Mennonite community in Bolivia, eight men were convicted of raping 130 women in Manitoba Colony over a four-year period from 2005 to 2009 by spraying "a chemical used to anesthetize cows" through the victims' open bedroom windows. The perpetrators would then wait for the women to be incapacitated, whereupon they entered the residences to commit the crimes. Later, the women would awaken to a pounding headache, find blood, semen or dirt on their sheets, and would sometimes discover that their extremities had also been bound. Most did not remember the attacks, although a few had vague, fleeting memories of men on top of them. Several men and boys were also suspected of having been raped. While additional perpetrators were thought to have participated, they were never identified or prosecuted; in fact, the rapes did not stop with the incarceration of the original eight men. [ 29 ] When two of these men were caught in the act of entering one of the women's homes, they implicated friends in the rapes to local authorities. Eventually nine Manitoba men, aged 19 to 43, were charged with using a spray, adapted from an anesthetic by a veterinarian from a neighboring Mennonite colony, to subdue their victims, and then raping them. Eight of the accused were found guilty of rape, one escaped from the local jail before the end of the trial, and the veterinarian was found guilty of being an accomplice to the rapes. According to at least three residents of the colony, a local prosecutor, and a local journalist, these "ghost rapes" continue despite the imprisonment of the men convicted in the 130 original rapes. [ 29 ] A date rape drug, also called a predator drug, is any drug that can be used as an incapacitating agent to assist in the execution of drug-facilitated sexual assault (DFSA). The most common types of DFSA are those in which a victim ingested drugs willingly for recreational purposes, or had them administered surreptitiously; [ 30 ] it is the latter type of assault that the term "date rape drug" most often refers to. "The findings by Du Mont and colleagues support the view that alcohol plays a major role in drug-facilitated sexual assault.
Previously, Weir noted that cases of drug-facilitated sexual assault were frequently found to involve alcohol, marijuana or cocaine, and were less likely to involve drugs, such as flunitrazepam (Rohypnol) and gamma-hydroxybutyrate, that are commonly described as being used in this context. Similar findings have been reported by others, including Hall and colleagues, in a recent retrospective study from Northern Ireland." A fictional form of incapacitating agent, sometimes known as "knockout gas", has been a staple of pulp detective and science fiction novels, movies and television shows. It is presented in various forms, but generally is supposed to be a gas or aerosol that affords a harmless method of rendering characters quickly and temporarily unconscious without physical contact. This is in contrast to chloroform, a liquid anesthetic (itself a common element in genre fiction) that requires a victim to be physically subdued before it can be applied. A number of notable fictional characters created in the early 20th century, both villains and heroes, were associated with the use of knockout gas: Fu Manchu, Dr. Mabuse, Doc Savage, Batman, and The Avenger. A military knockout gas called the "Gas of Peace" is an important plot device in H. G. Wells's 1936 movie Things to Come. It had become a familiar trope by the 1960s, when it was used in the X-Men comics. A famous example recurs in every opening sequence of the British TV series The Prisoner (1967–68). The U.S. Army psychiatrist James S. Ketchum, who worked for almost a decade on the U.S. military's top secret psychochemical warfare program, relates a story relevant to the concept of a "knockout gas" in his 2006 memoir, Chemical Warfare Secrets Almost Forgotten. In 1970, Ketchum and his boss were visited by CIA agents for a brainstorming session at his Maryland laboratory. The agents wanted to know if an incapacitating agent (his specialty) could be used to intervene in the ongoing hijacking of a Tel Aviv aircraft by Palestinian terrorists without injuring the hostages. Ketchum recalled: "We considered the pros and cons of using incapacitating agents and various other options. As it turned out, we could not imagine a scenario in which any available agent could be pumped into the airliner without the hijackers possibly reacting violently and killing passengers." Ultimately, the standoff was resolved by other means. [ 32 ] Arguably, the use of fentanyl derivatives by Russian authorities in the 2002 Moscow hostage crisis [ 28 ] (see above) is a real-life instance of the deployment of a "knockout gas". Of course, the criterion that the gas reliably render subjects temporarily and harmlessly unconscious was not fulfilled in this case, as the procedure killed about fifteen percent of those subjected to it. [ 27 ]
https://en.wikipedia.org/wiki/Sleeping_gas
In construction, a sleeve is used by both the electrical and mechanical trades to create a penetration in a solid wall, ceiling or floor. Sleeves can be made of a variety of materials.
https://en.wikipedia.org/wiki/Sleeve_(construction)
In fluid dynamics and electrostatics, slender-body theory is a methodology that exploits the slenderness of a body to obtain an approximation to the field surrounding it and/or to the net effect of the field on the body. Principal applications are to Stokes flow, at very low Reynolds numbers, and to electrostatics.

Consider a slender body of length $\ell$ and typical diameter $2a$, with $\ell \gg a$, surrounded by a fluid of viscosity $\mu$ whose motion is governed by the Stokes equations. Note that Stokes' paradox implies that the limit of infinite aspect ratio $\ell/a \to \infty$ is singular, as no Stokes flow can exist around an infinite cylinder. Slender-body theory allows us to derive an approximate relationship between the velocity of the body at each point along its length and the force per unit length experienced by the body at that point.

Let the axis of the body be described by $\boldsymbol{X}(s,t)$, where $s$ is an arc-length coordinate and $t$ is time. By virtue of the slenderness of the body, the force exerted on the fluid at the surface of the body may be approximated by a distribution of Stokeslets along the axis with force density $\boldsymbol{f}(s)$ per unit length. $\boldsymbol{f}$ is assumed to vary only over lengths much greater than $a$, and the fluid velocity at the surface adjacent to $\boldsymbol{X}(s,t)$ is well approximated by $\partial\boldsymbol{X}/\partial t$. The fluid velocity $\boldsymbol{u}(\boldsymbol{x})$ at a general point $\boldsymbol{x}$ due to such a distribution can be written in terms of an integral of the Oseen tensor (named after Carl Wilhelm Oseen), which acts as a Green's function for a single Stokeslet. We have

$$\boldsymbol{u}(\boldsymbol{x}) = \int_0^{\ell} \frac{1}{8\pi\mu}\left(\frac{\mathbf{I}}{|\boldsymbol{x}-\boldsymbol{X}(s)|} + \frac{(\boldsymbol{x}-\boldsymbol{X}(s))(\boldsymbol{x}-\boldsymbol{X}(s))}{|\boldsymbol{x}-\boldsymbol{X}(s)|^{3}}\right)\cdot\boldsymbol{f}(s)\,\mathrm{d}s,$$

where $\mathbf{I}$ is the identity tensor. Asymptotic analysis can then be used to show that the leading-order contribution to the integral for a point $\boldsymbol{x}$ on the surface of the body adjacent to position $s_0$ comes from the force distribution at $|s-s_0| = O(a)$. Since $a \ll \ell$, we approximate $\boldsymbol{f}(s) \approx \boldsymbol{f}(s_0)$. We then obtain

$$\frac{\partial\boldsymbol{X}}{\partial t} \approx \frac{\ln(\ell/a)}{4\pi\mu}\left(\mathbf{I} + \boldsymbol{X}'\boldsymbol{X}'\right)\cdot\boldsymbol{f}(s_0),$$

where $\boldsymbol{X}' = \partial\boldsymbol{X}/\partial s$. The expression may be inverted to give the force density in terms of the motion of the body:

$$\boldsymbol{f}(s_0) \approx \frac{4\pi\mu}{\ln(\ell/a)}\left(\mathbf{I} - \tfrac{1}{2}\boldsymbol{X}'\boldsymbol{X}'\right)\cdot\frac{\partial\boldsymbol{X}}{\partial t}.$$

Two canonical results that follow immediately are the drag forces $F$ on a rigid cylinder (length $\ell$, radius $a$) moving at velocity $u$ either parallel or perpendicular to its axis. The parallel case gives

$$F \approx \frac{2\pi\mu\ell u}{\ln(\ell/a)},$$

while the perpendicular case gives

$$F \approx \frac{4\pi\mu\ell u}{\ln(\ell/a)},$$

with only a factor of two difference. Note that the dominant length scale in the above expressions is the longer length $\ell$; the shorter length has only a weak effect through the logarithm of the aspect ratio. In slender-body theory results there are $O(1)$ corrections to the logarithm, so even for relatively large values of $\ell/a$ the error terms will not be that small.
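As a concrete illustration of the two leading-order drag formulas above, here is a minimal Python sketch; the function name, parameter values, and the micron-scale example are illustrative assumptions and not part of the source:

```python
import math

def slender_body_drag(mu, length, radius, speed):
    """Leading-order slender-body drag on a rigid cylinder of the given length and
    radius moving at the given speed, parallel and perpendicular to its axis.
    O(1) corrections to the logarithm are ignored, so this is only a rough estimate."""
    if not length > radius > 0:
        raise ValueError("slender-body theory assumes length >> radius > 0")
    log_aspect = math.log(length / radius)
    f_parallel = 2.0 * math.pi * mu * length * speed / log_aspect       # F ~ 2*pi*mu*l*u / ln(l/a)
    f_perpendicular = 4.0 * math.pi * mu * length * speed / log_aspect  # F ~ 4*pi*mu*l*u / ln(l/a)
    return f_parallel, f_perpendicular

# Illustrative numbers: a 10 um long, 0.2 um diameter filament moving at 1 um/s in water.
f_par, f_perp = slender_body_drag(mu=1e-3, length=10e-6, radius=0.1e-6, speed=1e-6)
print(f"parallel drag      ~ {f_par:.3e} N")
print(f"perpendicular drag ~ {f_perp:.3e} N  (exactly twice the parallel value)")
```

The factor-of-two relationship between the perpendicular and parallel drag is what the closing sentence of the article refers to, and it is visible directly in the two coefficients of the sketch.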
https://en.wikipedia.org/wiki/Slender-body_theory
In architecture, the slenderness ratio, or simply slenderness, is an aspect ratio: the quotient between the height and the width of a building. In structural engineering, slenderness is used to assess the propensity of a column to buckle. It is defined as $l/k$, where $l$ is the effective length of the column and $k$ is the least radius of gyration, the latter defined by $k^{2} = I/A$, where $A$ is the area of the cross-section of the column and $I$ is the second moment of area of the cross-section. The effective length is calculated from the actual length of the member, taking into account the rotational and relative translational boundary conditions at the ends. Slenderness captures the influence on buckling of all the geometric aspects of the column, namely its length, area, and second moment of area. The influence of the material is represented separately by the material's modulus of elasticity $E$. Structural engineers generally consider a skyscraper slender if its height-to-width ratio exceeds 10:1 or 12:1. Slim towers require specific measures to counter the high wind forces acting on the vertical cantilever, such as additional structural elements that give the building greater rigidity, or various types of tuned mass dampers to avoid unwanted swaying. [ 1 ] Tall buildings with a high slenderness ratio are sometimes referred to as pencil towers. [ 2 ]
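The definition of slenderness as $l/k$ with $k = \sqrt{I/A}$ translates directly into code. The following is a minimal Python sketch; the rectangular cross-section, its dimensions, and the function name are illustrative assumptions rather than anything specified in the source:

```python
import math

def slenderness_ratio(effective_length, area, second_moment):
    """Column slenderness l/k, where k = sqrt(I/A) is the least radius of gyration."""
    k = math.sqrt(second_moment / area)
    return effective_length / k

# Illustrative example: a column of effective length 3.0 m with a 0.3 m x 0.2 m
# rectangular cross-section.  For a rectangle the least second moment of area is
# taken about the weak axis: I_min = b * h**3 / 12, with h the smaller dimension.
b, h = 0.3, 0.2                  # cross-section dimensions in metres
area = b * h                     # m^2
i_min = b * h ** 3 / 12          # m^4
print(slenderness_ratio(3.0, area, i_min))   # ~ 52
```

Using the least radius of gyration matters because a column buckles about its weakest axis; taking the larger second moment of area would understate the slenderness.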
https://en.wikipedia.org/wiki/Slenderness_ratio
In probability theory, Slepian's lemma (1962), named after David Slepian, is a Gaussian comparison inequality. It states that for Gaussian random variables $X = (X_1,\dots,X_n)$ and $Y = (Y_1,\dots,Y_n)$ in $\mathbb{R}^n$ satisfying $\operatorname{E}[X] = \operatorname{E}[Y] = 0$, $\operatorname{E}[X_i^2] = \operatorname{E}[Y_i^2]$ for $i = 1,\dots,n$, and $\operatorname{E}[X_i X_j] \le \operatorname{E}[Y_i Y_j]$ for $i \ne j$, the following inequality holds for all real numbers $u_1,\dots,u_n$:

$$\Pr\left[\bigcap_{i=1}^{n}\{X_i \le u_i\}\right] \le \Pr\left[\bigcap_{i=1}^{n}\{Y_i \le u_i\}\right],$$

or equivalently,

$$\Pr\left[\bigcup_{i=1}^{n}\{X_i > u_i\}\right] \ge \Pr\left[\bigcup_{i=1}^{n}\{Y_i > u_i\}\right].$$

While this intuitive-seeming result is true for Gaussian processes, it is not in general true for other random variables, not even those with expectation 0. As a corollary, if $(X_t)_{t \ge 0}$ is a centered stationary Gaussian process such that $\operatorname{E}[X_0 X_t] \ge 0$ for all $t$, then for any real number $c$ and any $t, s \ge 0$,

$$\Pr\left[\sup_{u \in [0,\,t+s]} X_u \le c\right] \ge \Pr\left[\sup_{u \in [0,\,t]} X_u \le c\right]\Pr\left[\sup_{u \in [0,\,s]} X_u \le c\right].$$

Slepian's lemma was first proven by Slepian in 1962, and has since been used in reliability theory, extreme value theory and areas of pure probability. It has also been re-proven in several different forms. This probability-related article is a stub. You can help Wikipedia by expanding it.
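The comparison can be checked numerically. Below is a minimal Monte Carlo sketch assuming NumPy is available; the particular covariance matrices, threshold, and sample size are illustrative choices, not taken from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_all_below(cov, u, n_samples=200_000):
    """Estimate P(X_1 <= u, ..., X_n <= u) for a centered Gaussian vector with covariance cov."""
    x = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=n_samples)
    return np.mean(np.all(x <= u, axis=1))

# Two centered Gaussian vectors with unit variances; Y has the larger correlations,
# so Slepian's lemma predicts P(all X_i <= u) <= P(all Y_i <= u).
cov_x = np.array([[1.0, 0.1, 0.1],
                  [0.1, 1.0, 0.1],
                  [0.1, 0.1, 1.0]])
cov_y = np.array([[1.0, 0.8, 0.8],
                  [0.8, 1.0, 0.8],
                  [0.8, 0.8, 1.0]])

u = 1.0
print(p_all_below(cov_x, u), "<=", p_all_below(cov_y, u))
```

Intuitively, the more strongly correlated vector tends to have all of its components fall below the threshold together, which is exactly the direction of the inequality stated above.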
https://en.wikipedia.org/wiki/Slepian's_lemma