Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 chars); text (string, 9 to 245k chars); source (string, 1 to 109 chars); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
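If one wanted to iterate over records shaped like those below, a minimal sketch using the Hugging Face datasets library could look as follows (the dataset path "user/wiki-dump" is a hypothetical placeholder; the column names are taken from the schema above):

```python
from datasets import load_dataset

# Hypothetical dataset path; the column schema matches the legend above:
# id, url, text, source, categories, token_count.
ds = load_dataset("user/wiki-dump", split="train", streaming=True)

# Stream a few records and print their metadata plus a preview of the text.
for record in ds.take(3):
    print(record["id"], record["source"], record["token_count"])
    print(record["url"])
    print(record["text"][:80], "...")
```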
3,145,562
https://en.wikipedia.org/wiki/-zilla
-zilla is an English slang suffix, a libfix back-formation derived from the English name of the Japanese movie monster Godzilla. It is popular for the names of software and websites, and is also found often in popular culture to imply some form of excess, denoting the monster-like qualities of Godzilla. This trend has been observed since the popularization of the Mozilla Project, which itself included the Internet Relay Chat client ChatZilla. The use of the suffix was contested by Toho, owner of the trademark Godzilla, in a lawsuit against the website Davezilla and also against Sears for its mark Bagzilla. Toho has since trademarked the word "Zilla" and retroactively used it as an official name for the "Godzilla In Name Only" creature from the 1998 Roland Emmerich film.

List of items ending in -zilla

Some uses of the suffix -zilla include:

Businesses and products
AmiZilla, an Amiga port of Mozilla Firefox
Chipzilla, a humorous epithet for the Intel Corporation
Clonezilla, open source disk cloning software
FileZilla, an FTP program
Go!Zilla, a download manager program
Hubzilla, an open source social network, part of the Fediverse
Mozilla, a group of Internet-related programs created by the Mozilla Foundation, also the name of the group's widely known web browser
Bugzilla, open source bug tracking software with a web-based interface
ChatZilla, an Internet Relay Chat program
Mozilla Application Suite
Classilla, a rebranded Mozilla Application Suite, an internet suite for the classic Mac OS
Ghostzilla, a web browser
GNUzilla, GNU's fork of the Mozilla Application Suite
Warpzilla, the Mozilla Application Suite for OS/2
Newszilla, the Usenet server of the Dutch internet provider XS4ALL
Podzilla, an open source user interface for the iPodLinux project, which allows for alternative functionality of Apple Computer's iPod
Quizilla, an online personality quiz website, which contains its own "Zillapedia"
RevZilla.com, an online motorcycle gear retailer
Shopzilla, a comparison-shopping search engine, formerly BizRate.com
Godzilla (Nissan GT-R), a grand touring/sports car produced by Nissan

Entertainment
Bongzilla, a rock band from Madison, Wisconsin
"Bootzilla", a song recorded by Bootsy's Rubber Band
Bridezilla (band), an Australian indie rock band
Bridezilla (EP), a recording by the band Bridezilla
Bridezillas, a reality show which airs on the WE: Women's Entertainment network
Broadzilla, a rock band from Detroit, Michigan
Davezilla, a humor website
Godzilla, the franchise from which the -zilla suffix originates
Godzilla (1954 film), the first film in the franchise
Zilla, a fictional character originally known as Godzilla, from the 1998 American Godzilla film
Illzilla, an Australian hip hop group featuring live instruments
Popzilla, an animated TV series in production for MTV
Rapzilla, a Christian hip hop online magazine
Tekzilla, a weekly video podcast on the Revision3 network
"Throatzillaaa", a song recorded by Slayyyter

Miscellaneous
axizilla, a heavy form of axion in an extension of the Standard Model
bitchzilla, a severely disagreeable or aggressive woman
couplezilla, a couple who, in the course of planning their wedding, display difficult, selfish, narcissistic behavior relating to the event
cuntzilla, a term of abuse for a severely disagreeable or aggressive woman
Fedzilla, the federal government regarded as a rapacious monster with an appetite for political power, money, etc.
groomzilla, a demanding and perfectionist groom (man who is to be married)
Hogzilla, a large male wild hog hybrid that was stabbed and killed in 2004 in Georgia, United States
momzilla, a controlling or over-involved mother
Pigzilla, another large feral pig, or possible hoax, shot in 2007 in Alabama, United States
promzilla, a teenage girl who is obsessed with preparing for her prom and ensuring it turns out the way she envisions
Snowzilla (disambiguation)
Squawkzilla, a nickname for a prehistoric parrot species
weddingzilla, a person overly concerned with ensuring that a wedding goes exactly as they envision it
wimpzilla, a theoretical superheavy dark matter particle, trillions of times more massive than other proposed types of dark matter

For derived words
Words ending with -zilla (list of examples in English Wiktionary)
-zilla (explaining the suffix in English Wiktionary)

Categories: English suffixes, Computing terminology, Godzilla (franchise), Internet slang, Slang, Neologisms, Japanese words and phrases
-zilla
Technology
1,013
66,951,273
https://en.wikipedia.org/wiki/Toreforant
Toreforant (JNJ-38518168) is an orally dosed selective antagonist of the histamine H4 receptor that has been studied for various health conditions. It is the successor of a number of H4-selective compounds developed by Johnson & Johnson. Phase IIa clinical trials completed as recently as November 2018 continue to suggest that toreforant is safe. As of the end of 2020, there is no regulator-approved H4 antagonist. In U.S. Phase II clinical trials, toreforant by itself did not show efficacy against eosinophilic asthma. The drug did show at least partial efficacy against rheumatoid arthritis in patients who were nonresponsive to methotrexate. As the H4 receptor is widely implicated in the regulation of inflammatory states, the potential uses for an H4 antagonist remain significant.

See also: JNJ-7777120

Categories: Benzimidazoles, Piperidines, Anti-inflammatory agents, Drugs developed by Johnson & Johnson, H4 receptor antagonists, Carboxamides
Toreforant
Chemistry
222
4,694,833
https://en.wikipedia.org/wiki/GATA1
GATA-binding factor 1 or GATA-1 (also termed erythroid transcription factor) is the founding member of the GATA family of transcription factors. This protein is widely expressed throughout vertebrate species. In humans and mice, it is encoded by the GATA1 and Gata1 genes, respectively. These genes are located on the X chromosome in both species. GATA1 regulates the expression (i.e. formation of the genes' products) of an ensemble of genes that mediate the development of red blood cells and platelets. Its critical roles in red blood cell formation include promoting the maturation of precursor cells, e.g. erythroblasts, to red blood cells and stimulating these cells to assemble their cytoskeleton and biosynthesize their oxygen-carrying components, viz. hemoglobin and heme. GATA1 plays a similarly critical role in the maturation of blood platelets from megakaryoblasts, promegakaryocytes, and megakaryocytes; the latter cells then shed membrane-enclosed fragments of their cytoplasm, i.e. platelets, into the blood. As a consequence of the vital role that GATA1 has in the proper maturation of red blood cells and platelets, inactivating mutations in the GATA1 gene (i.e. mutations that result in the production of no, reduced levels of, or less active GATA1) cause X chromosome-linked anemic and/or bleeding diseases due to the reduced formation and functionality of red blood cells and/or platelets, respectively, or, under certain circumstances, the pathological proliferation of megakaryoblasts. These diseases include transient myeloproliferative disorder occurring in Down syndrome, acute megakaryoblastic leukemia occurring in Down syndrome, Diamond–Blackfan anemia, and various combined anemia-thrombocytopenia syndromes, including a gray platelet syndrome-type disorder. Reduced levels of GATA1 due to reductions in the translation of GATA1 mRNA into its transcription factor product are associated with promoting the progression of myelofibrosis, a malignant disease that involves the replacement of bone marrow cells by fibrous tissue and extramedullary hematopoiesis, i.e. the extension of blood cell-forming cells to sites outside of the bone marrow.

Gene

The human GATA1 gene is located on the short (i.e. "p") arm of the X chromosome at position 11.23. It is 7.74 kilobases in length, consists of 6 exons, and codes for a full-length protein, GATA1, of 414 amino acids as well as a shorter one, GATA1-S. GATA1-S lacks the first 83 amino acids of GATA1 and therefore consists of only 331 amino acids. GATA1 codes for two zinc finger structural motifs, C-ZnF and N-ZnF, that are present in both the GATA1 and GATA1-S proteins. These motifs are critical for both transcription factors' gene-regulating actions. N-ZnF is a frequent site of disease-causing mutations. Lacking the first 83 amino acids, and therefore one of the two activation domains of GATA1, GATA1-S has significantly less gene-regulating activity than GATA1. Studies in Gata1-knockout mice, i.e. mice lacking the Gata1 gene, indicate that this gene is essential for the development and maintenance of blood-based and/or tissue-based hematological cells, particularly red blood cells and platelets but also eosinophils, basophils, mast cells, and dendritic cells. The knockout mice die by day 11.5 of their embryonic development due to severe anemia that is associated with absence of cells of the red blood cell lineage, excessive numbers of malformed platelet-precursor cells, and an absence of platelets.
These defects reflect the essential role of Gata1 in stimulating the development, self-renewal, and/or maturation of red blood cell and platelet precursor cells. Studies using mice depleted of their Gata1 gene during adulthood show that: 1) Gata1 is required for the stimulation of erythropoiesis (i.e. increase in red blood cell formation) in response to stress, and 2) Gata1-deficient adult mice invariably develop a form of myelofibrosis.

GATA1 proteins

In both GATA1 and GATA1-S, C-ZnF (i.e. C-terminus zinc finger) binds to specific DNA sequences, viz. (T/A)GATA(A/G), at the expression-regulating sites of its target genes and in doing so either stimulates or suppresses the expression of these target genes. Their N-ZnF (i.e. N-terminus zinc finger) interacts with an essential transcription factor-regulating nuclear protein, FOG1. FOG1 powerfully promotes or suppresses the actions that the two transcription factors have on most of their target genes. Similar to the knockout of Gata1, knockout of the mouse gene for FOG1, Zfpm1, causes total failure of red blood cell development and embryonic lethality by day 11.5. Based primarily on mouse studies, it is proposed that the GATA1-FOG1 complex promotes human erythropoiesis by recruiting and binding with at least two gene expression-regulating complexes, the Mi-2/NuRD complex (a chromatin remodeler) and CTBP1 (a histone deacetylase), and three gene expression-regulating proteins, SET8 (a GATA1-inhibiting histone methyltransferase), BRG1 (a transcription activator), and Mediator (a transcription co-activator). Other interactions include those with: BRD3 (remodels DNA nucleosomes), BRD4 (binds acetylated lysine residues in DNA-associated histones to regulate gene accessibility), FLI1 (a transcription factor that blocks erythroid differentiation), HDAC1 (a histone deacetylase), LMO2 (a regulator of erythrocyte development), ZBTB16 (a transcription factor regulating cell cycle progression), TAL1 (a transcription factor), FOG2 (a transcription factor regulator), and GATA2 (displacement of GATA2 by GATA1, i.e. the "GATA switch", at certain gene-regulating sites is critical for red blood cell development in mice and, presumably, humans). GATA1-FOG1 and GATA2-FOG1 interactions are critical for platelet formation in mice and may similarly be critical for this in humans. Other types of GATA2 mutations cause over-expression of the GATA2 transcription factor; this over-expression is associated with the development of non-familial AML. Apparently, the GATA2 gene's expression level must be delicately balanced between deficiency and excess in order to avoid life-threatening disease.

Physiology and pathology

GATA1 was first described as a transcription factor that activates the hemoglobin B gene in the red blood cell precursors of chickens. Subsequent studies in mice and isolated human cells found that GATA1 stimulates the expression of genes that promote the maturation of precursor cells (e.g. erythroblasts) to red blood cells while silencing genes that cause these precursors to proliferate and thereby self-renew. GATA1 stimulates this maturation by, for example, inducing the expression of genes in erythroid cells that contribute to the formation of their cytoskeleton and that make enzymes necessary for the biosynthesis of hemoglobins and heme, the oxygen-carrying components of red blood cells. GATA1-inactivating mutations may thereby result in a failure to produce sufficient numbers of, and/or fully functional, red blood cells.
Also based on mouse and isolated human cell studies, GATA1 appears to play a similarly critical role in the maturation of platelets from their precursor cells. This maturation involves the stimulation of megakaryoblasts to mature ultimately into megakaryocytes, cells which shed membrane-enclosed fragments of their cytoplasm, i.e. platelets, into the blood. GATA1-inactivating mutations may thereby result in reduced levels of and/or dysfunctional blood platelets. Reduced levels of GATA1 due to defective translation of GATA1 mRNA in human megakaryocytes are associated with myelofibrosis, i.e. the replacement of bone marrow cells by fibrous tissue. Based primarily on mouse and isolated human cell studies, this myelofibrosis is thought to result from the accumulation of platelet precursor cells in the bone marrow and their release of excessive amounts of cytokines that stimulate bone marrow stromal cells to become fiber-secreting fibroblasts and osteoblasts. Based on mouse studies, low GATA1 levels are also thought to promote the development of splenic enlargement and extramedullary hematopoiesis in human myelofibrosis disease. These effects appear to result directly from the over-proliferation of abnormal platelet precursor cells. The clinical features associated with inactivating GATA1 mutations or other causes of reduced GATA1 levels vary greatly with respect not only to the types of disease exhibited but also to disease severity. This variation depends on at least four factors. First, inactivating mutations in GATA1 cause X-linked recessive diseases. Males, with only one GATA1 gene, experience the diseases caused by these mutations, while females, with two GATA1 genes, experience no or extremely mild evidence of these diseases unless they have inactivating mutations in both genes or their mutation is dominant negative, i.e. inhibiting the good gene's function. Second, the extent to which a mutation reduces the cellular levels of fully functional GATA1 correlates with disease severity. Third, inactivating GATA1 mutations can cause different disease manifestations. For example, mutations in GATA1's N-ZnF that interfere with its interaction with FOG1 result in reduced red blood cell and platelet levels, whereas mutations in N-ZnF that reduce its binding affinity to target genes cause a reduction in red blood cells plus thalassemia-type and porphyria-type symptoms. Fourth, the genetic background of individuals can impact the type and severity of symptoms. For example, individuals with GATA1-inactivating mutations and the extra chromosome 21 of Down syndrome exhibit a proliferation of megakaryoblasts that infiltrate and consequently directly damage the liver, heart, marrow, pancreas, and skin, plus secondarily cause life-threatening damage to the lungs and kidneys. These same individuals can develop secondary mutations in other genes that result in acute megakaryoblastic leukemia.

Genetic disorders

GATA1 gene mutations are associated with the development of various genetic disorders, which may be familial (i.e. inherited) or newly acquired. In consequence of its X chromosome location, GATA1 mutations generally have a far greater physiological and clinical impact in men, who have only one X chromosome along with its GATA1 gene, than in women, who have two of these chromosomes and genes: GATA1 mutations lead to X-linked diseases occurring predominantly in males.
Mutations in the activation domain of GATA1 (GATA1-S lacks this domain) are associated with the transient myeloproliferative disorder and acute megakaryoblastic leukemia of Down syndrome, while mutations in the N-ZnF motif of GATA1 and GATA1-S are associated with diseases similar to congenital dyserythropoietic anemia, congenital thrombocytopenia, and certain features that occur in thalassemia, gray platelet syndrome, congenital erythropoietic porphyria, and myelofibrosis.

Down syndrome-related disorders

Transient myeloproliferative disorder

Acquired inactivating mutations in the activation domain of GATA1 are the apparent cause of the transient myeloproliferative disorder that occurs in individuals with Down syndrome. These mutations are frameshifts in exon 2 that result in the failure to make GATA1 protein, continued formation of GATA1-S, and therefore a greatly reduced ability to regulate GATA1-targeted genes. The presence of these mutations is restricted to cells bearing the trisomy 21 karyotype (i.e. extra chromosome 21) of Down syndrome: GATA1 inactivating mutations and trisomy 21 are necessary and sufficient for development of the disorder. Transient myeloproliferative disorder consists of a relatively mild but pathological proliferation of platelet-precursor cells, primarily megakaryoblasts, which often show an abnormal morphology that resembles immature myeloblasts (i.e. unipotent stem cells which differentiate into granulocytes and are the malignant proliferating cell in acute myeloid leukemia). Phenotype analyses indicate that these blasts belong to the megakaryoblast series. Abnormal findings include the frequent presence of excessive blast cell numbers, reduced platelet and red blood cell levels, increased circulating white blood cell levels, and infiltration of platelet-precursor cells into the bone marrow, liver, heart, pancreas, and skin. The disorder is thought to develop in utero and is detected at birth in about 10% of individuals with Down syndrome. It resolves totally within ~3 months but in the following 1–3 years progresses to acute megakaryoblastic leukemia in 20% to 30% of these individuals: transient myeloproliferative disorder is a clonal (abnormal cells derived from single parent cells), pre-leukemic condition and is classified as a myelodysplastic syndrome disease.

Acute megakaryoblastic leukemia

Acute megakaryoblastic leukemia is a subtype of acute myeloid leukemia that is extremely rare in adults and, although still rare, more common in children. The childhood disease is classified into two major subgroups based on its occurrence in individuals with or without Down syndrome. The disease in Down syndrome occurs in 20% to 30% of individuals who previously had transient myeloproliferative disorder. Their GATA1 mutations are frameshifts in exon 2 that result in the failure to make GATA1 protein, continued formation of GATA1-S, and thus a greatly reduced ability to regulate GATA1-targeted genes. Transient myeloproliferative disorder is detected at or soon after birth and generally resolves during the next months but is followed within 1–3 years by acute megakaryoblastic leukemia. During this 1–3 year interval, individuals accumulate multiple somatic mutations in cells bearing inactivating GATA1 mutations plus trisomy 21. These mutations are thought to result from the uncontrolled proliferation of blast cells caused by the GATA1 mutation in the presence of the extra chromosome 21, and to be responsible for progression of the transient disorder to leukemia.
The mutations occur in one or, more commonly, multiple genes including: TP53, RUNX1, FLT3, ERG, DYRK1A, CHAF1B, HLCS, CTCF, STAG2, RAD21, SMC3, SMC1A, NIPBL, SUZ12, PRC2, JAK1, JAK2, JAK3, MPL, KRAS, NRAS, SH2B3, and MIR125B2, which is the gene for the microRNA MiR125B2.

Diamond–Blackfan anemia

Diamond–Blackfan anemia is a familial (i.e. inherited) (45% of cases) or acquired (55% of cases) genetic disease that presents in infancy or, less commonly, later childhood as aplastic anemia and the circulation of abnormally enlarged red blood cells. Other types of blood cells and platelets circulate at normal levels and appear normal in structure. About half of affected individuals have various birth defects. The disease is regarded as a uniformly genetic disease, although the genes causing it have not been identified in ~30% of cases. In virtually all the remaining cases, autosomal recessive inactivating mutations occur in any one of 20 of the 80 genes encoding ribosomal proteins. About 90% of the latter mutations occur in 6 ribosomal protein genes, viz. RPS19, RPL5, RPS26, RPL11, RPL35A, and RPS24. However, several cases of familial Diamond–Blackfan anemia have been associated with GATA1 gene mutations in the apparent absence of a mutation in ribosomal protein genes. These GATA1 mutations occur in an exon 2 splice site or the start codon of GATA1, cause the production of GATA1-S in the absence of the GATA1 transcription factor, and therefore are gene-inactivating in nature. It is proposed that these GATA1 mutations are a cause of Diamond–Blackfan anemia.

Combined anemia-thrombocytopenia syndromes

Certain GATA1-inactivating mutations are associated with familial or, less commonly, sporadic X-linked disorders that consist of anemia and thrombocytopenia due to a failure in the maturation of red blood cell and platelet precursors, plus other hematological abnormalities. These GATA1 mutations are identified by an initial letter identifying the normal amino acid, followed by a number giving the position of this amino acid in GATA1, followed by a final letter identifying the amino acid substituted for the normal one. The amino acids are identified as: V=valine; M=methionine; G=glycine; S=serine; D=aspartic acid; Y=tyrosine; R=arginine; W=tryptophan; Q=glutamine. These mutations and some key abnormalities they cause are:

V205M: familial disease characterized by severe anemia in fetuses and newborns; bone marrow has increased numbers of malformed platelet and red blood cell precursors.
G208S and D218G: familial disease characterized by severe bleeding, reduced numbers of circulating platelets which are malformed (i.e. enlarged), and mild anemia.
D218Y: familial disease similar to, but more severe than, the disease caused by G208S and D218G mutations.
R216W: characterized by a beta thalassemia-type disease, i.e. microcytic anemia, absence of hemoglobin B, and hereditary persistence of fetal hemoglobin; symptoms of congenital erythropoietic porphyria; mild to moderately severe thrombocytopenia with features of the gray platelet syndrome.
R216Q: familial disease characterized by mild anemia with features of heterozygous rather than homozygous (i.e. overt) beta thalassemia; mild thrombocytopenia with features of the gray platelet syndrome.
G208R: disease characterized by mild anemia and severe thrombocytopenia with malformed erythroblasts and megakaryoblasts in the bone marrow. Structural features of these cells were similar to those observed in congenital dyserythropoietic anemia.
-183G>A: a rare single-nucleotide polymorphism (rs113966884) in which the nucleotide adenine replaces guanine at the position 183 nucleotides upstream of the start of GATA1; disorder characterized as mild anemia with structural features in bone marrow red cell precursors similar to those observed in congenital dyserythropoietic anemia.

Gray platelet syndrome is a rare congenital bleeding disorder caused by reductions in or absence of alpha-granules in platelets. Alpha-granules contain various factors which contribute to blood clotting and other functions. In their absence, platelets are defective. The syndrome is commonly considered to result solely from mutations in the NBEAL2 gene located on human chromosome 3 at position p21. In these cases, the syndrome follows autosomal recessive inheritance, causes a mild to moderate bleeding tendency, and may be accompanied by a defect in the secretion of granule contents in neutrophils. There are other causes of a congenital platelet alpha-granule-deficient bleeding disorder, viz. the autosomal recessive disease of Arc syndrome caused by mutations in either the VPS33B gene (on human chromosome 15 at q26) or the VIPAS39 gene (on chromosome 14 at q34); the autosomal dominant disease of GFI1B-related syndrome caused by mutations in GFI1B (located on human chromosome 9 at q34); and the disease caused by R216W and R216Q mutations in GATA1. The GATA1 mutation-related disease resembles the one caused by NBEAL2 mutations in that it is associated with the circulation of a reduced number (i.e. thrombocytopenia) of abnormally enlarged (i.e. macrothrombocytes), alpha-granule-deficient platelets. It differs from the NBEAL2-induced disease in that it is X chromosome-linked, accompanied by a moderately severe bleeding tendency, and associated with abnormalities in red blood cells (e.g. anemia, a thalassemia-like disorder due to unbalanced hemoglobin production, and/or a porphyria-like disorder). A recent study found that GATA1 is a strong enhancer of NBEAL2 expression and that the R216W and R216Q inactivating mutations in GATA1 may cause the development of alpha-granule-deficient platelets by failing to stimulate the expression of the NBEAL2 protein. Given these differences, the GATA1 mutation-related disorder appears better classified as clinically and pathologically distinct from the gray platelet syndrome.

GATA1 in myelofibrosis

Myelofibrosis is a rare hematological malignancy characterized by progressive fibrosis of the bone marrow, extramedullary hematopoiesis (i.e. formation of blood cells outside of their normal site in the bone marrow), variable reductions in the levels of circulating blood cells, increases in the circulating levels of the precursors to the latter cells, abnormalities in platelet precursor cell maturation, and the clustering of grossly malformed megakaryocytes in the bone marrow. Ultimately, the disease may progress to leukemia. Recent studies indicate that the megakaryocytes, but not other cell types, in rare cases of myelofibrosis have greatly reduced levels of GATA1 as a result of a ribosomal deficiency in translating GATA1 mRNA into GATA1 transcription factor. The studies suggest that these reduced levels of GATA1 contribute to the progression of myelofibrosis by leading to an impairment in platelet precursor cell maturation, by promoting extramedullary hematopoiesis, and, possibly, by contributing to its leukemic transformation.
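As a practical aside, the one-letter substitution nomenclature described in the combined anemia-thrombocytopenia section above (e.g. V205M) is regular enough to parse mechanically. The following Python sketch is illustrative only: the function name is my own, the amino-acid table is limited to the letters used in this article, and promoter variants written in nucleotide notation (such as -183G>A) are deliberately out of scope.

```python
import re

# One-letter amino-acid codes; this subset covers the letters used in the text above.
AMINO_ACIDS = {
    "V": "valine", "M": "methionine", "G": "glycine", "S": "serine",
    "D": "aspartic acid", "Y": "tyrosine", "R": "arginine",
    "W": "tryptophan", "Q": "glutamine",
}

def parse_substitution(mutation):
    """Split a substitution such as 'V205M' into (normal, position, substituted)."""
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", mutation)
    if m is None:
        raise ValueError(f"not a simple amino-acid substitution: {mutation!r}")
    ref, pos, alt = m.group(1), int(m.group(2)), m.group(3)
    return AMINO_ACIDS.get(ref, ref), pos, AMINO_ACIDS.get(alt, alt)

print(parse_substitution("V205M"))  # ('valine', 205, 'methionine')
print(parse_substitution("R216W"))  # ('arginine', 216, 'tryptophan')
```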
External links: GeneCards; GeneReviews/NCBI/NIH/UW entry on GATA1-Related X-Linked Cytopenia; GeneAtlas; Infobiogen; NextBio

Categories: Transcription factors
GATA1
Chemistry,Biology
4,958
6,434,245
https://en.wikipedia.org/wiki/Traumatin
Traumatin is a plant hormone produced in response to wounding. It is a precursor to the related hormone traumatic acid.

Categories: Fatty aldehydes, Fatty acids, Plant hormones, Aldehydic acids, Carboxylic acids
Traumatin
Chemistry
46
47,352,090
https://en.wikipedia.org/wiki/Education%20for%20Nature%20Vietnam
Education for Nature Vietnam (ENV) was set up in 2000 and, according to its website, is Vietnam's "first local non-governmental organization to focus on wildlife protection." It has offices in Hanoi. There are three main planks to the work of ENV:

Persuading the Vietnamese public of the need to protect nature and wildlife
Convincing Vietnamese society that using animal products is hastening the extinction of endangered species
Working with the Vietnamese authorities to strengthen wildlife protection laws and enforce the current legislation to its full extent to stop illegal wildlife trade

Wildlife trade in Vietnam

As well as being a source of endangered wildlife and a consumer of protected animal products, Vietnam is also a major international transit hub for illegal wildlife goods from other countries in South East Asia and as far afield as South Africa. The main destination for much of the illegal wildlife trade is China. Due to the burgeoning economies of both China and Vietnam in recent years, the expanding middle class, with larger disposable incomes, created a surge in demand for wildlife trade products such as rhino horn and tiger bone paste. Two species of pangolin, believed to be the most trafficked mammals in the world and native to Vietnamese forests, are particularly under threat. Many thousands are caught and traded between China and Vietnam every year. Pangolin scales are in demand for use in traditional medicine, and the meat is served in restaurants as a high-end delicacy. Rhinos, extirpated in Vietnam in 2010, are also threatened with extinction as a result of the wildlife trade: in 2007, 13 rhinos were killed for their horns in South Africa; by 2014 the annual toll there had risen to 1,215.

Approach

ENV aims to reduce illegal wildlife trade in three ways:

Mobilizing the public to support wildlife protection
Strengthening wildlife crime law enforcement by direct support and assistance, and mobilizing public opinion and participation
Working with high-level decision-makers and government agencies to formulate and improve wildlife-related policy and legislation

Reducing demand

ENV strives to curb demand for wildlife products in Vietnam via Public Service Announcements (PSAs) on TV and radio, concentrating its efforts on tigers, rhinos, bears and pangolins. The emphasis of these short infomercials is that wildlife trade products, such as pangolin scales, rhino horn and tiger bones, lack medicinal value. A 2016 campaign, for instance, pointed out that rhino horn is made of keratin, the same substance as human hair and nails; one might therefore just as well eat one's own fingernails and hair. It is well documented that there is now a growing social stigma attached to the use of wildlife products as status symbols. Just as important is the fact that being a consumer of illegal wildlife products helps to line the pockets of organized crime and encourages cruelty to animals. The PSAs are aired across Vietnam on up to 80 TV channels, reaching millions of viewers. ENV also produces radio adverts that are regularly broadcast on Voice of Vietnam radio. Since 2008, ENV has partnered with Voice of Vietnam to produce a monthly radio show about wildlife protection topics. ENV also works with well-known Vietnamese celebrities to spread the wildlife protection message to their fans and the general public.
Further efforts to cut consumer demand have included the establishment of Wildlife Safety Zones in conjunction with ministries, government offices, corporate partners and markets across Vietnam. Among the partners signed up are the US Embassy, Mercedes-Benz, BMW, and the Daewoo Hotel. Outreach events are also run at universities, parks and shopping malls to promote public awareness and involvement in ENV's campaigns. For example, ENV, in partnership with the South African organization Endangered Wildlife Trust, launched a targeted campaign to convince shoppers to 'Say no to rhino horn' through awareness events and viral media activities.

Strengthening enforcement

In addition to reducing demand, ENV is heavily involved in ensuring that law enforcement agencies prosecute wildlife crime offenders to the full extent of the law. In recent years there has been a perception that the authorities are not taking wildlife crime as seriously as they should and are reluctant to prosecute anyone other than low-level players involved in wildlife trafficking. In 2005 ENV established a Wildlife Crime Unit (WCU) to encourage the Vietnamese public to report wildlife offences. The WCU operates the Wildlife Crime Hotline 1-800-1522, a national toll-free hotline that the public can use to report wildlife crimes throughout the country. Crimes can also be reported via an email hotline and a smartphone application. Since its establishment, ENV's national crime database has recorded over 10,000 reports of wildlife crimes. Getting the public involved is a major concern for ENV. It has established wildlife protection volunteer clubs in over 15 major cities across Vietnam. These clubs carry out awareness events, monitor businesses and report wildlife crimes. They have also promoted greater public involvement in wildlife protection. Since 2013 ENV has cracked down on consumer demand by targeting major cities including Hanoi, Huế, Dong Ha, Ho Chi Minh City, Vinh and Da Nang. Its campaign involves surveying restaurants, hotels, bars, traditional medicine shops, pet shops and markets across selected districts. Any violations discovered are reported to the district People's Committee, along with a request that it work with local authorities to tackle each violation. Two months later, a follow-up team from ENV inspects the establishments that had previously been reported and tracks any changes. Report cards are then sent to the People's Committees, summarizing their effectiveness compared with committees in other districts. In areas where the process has been completed, wildlife crime has fallen by between 39% and 77%. ENV also tackles illegal online wildlife trade through its internet crime campaign. Thousands of links reported as selling wildlife have been removed, and numerous websites and forums have joined ENV's wildlife safety zone by banning all wildlife advertising. Accompanying this outreach, the wildlife protection organization undertakes wildlife crime investigations. Current investigations focus on crime syndicates that support the illegal transnational trade of endangered species, such as a major criminal network that smuggled frozen tigers from Laos into Vietnam. An investigation into the marine turtle trade in Vietnam resulted in a seizure of 10 tons of marine turtles and an ongoing criminal investigation by the police. ENV produced a film about the case and sent it to hundreds of legislators to encourage them to prosecute the kingpin of the marine turtle trade.
In May 2020, ENV and Four Paws rescued a bear cub from the illegal wildlife trade and brought it to Bear Sanctuary Ninh Binh.

Improving policy and legislation

The third plank of ENV's strategy is to work with the highest levels of government to bring about change on a national level by improving legislation and ensuring sound policy in support of wildlife protection. Ultimately its goal is a legal framework in Vietnam that effectively protects endangered species, but much of its day-to-day work revolves around helping to address conflicts and loopholes in existing legislation. ENV's most notable recent cases include:

Successfully campaigning to stop authorities from auctioning off tiger products seized from poachers, arguing that such auctions would increase demand and undermine wildlife protection efforts.
Working with authorities to close down bear bile tourism in Halong Bay. From 2007, hundreds of Korean tourists arrived each day to watch a live bear bile extraction and to buy the bile to take back to their country. After an intensive joint enforcement campaign by the Quang Ninh People's Committee, relevant government agencies, and ENV, bear bile tourism in Ha Long was shut down in May 2014, putting the bear farms out of business. Since 2005, there has been a 72% decline in the number of bears caged on farms and exploited for their bile in Vietnam. ENV also fought for any new bears being kept illegally to be confiscated by the government, not left in their owners' hands. Since September 2011, there has not been a single case where an illegal bear was discovered and not confiscated.
Opposing current proposals to legalize the farming of endangered species in Vietnam, as they risk increasing demand for wildlife trade products as well as complicating enforcement efforts.

ENV also focuses on investigating, prosecuting and punishing major wildlife crime figures. By working to improve the penal code and fighting for the implementation of Decree 160 at the provincial level, ENV aims to make it easier to tackle the kingpins of the illegal wildlife trade. ENV also works to improve awareness of current wildlife protection law in all provinces of Vietnam, enabling more effective law enforcement throughout the country. ENV encourages local authorities to comply with Decree 160 by not auctioning off endangered wildlife such as pangolins after confiscation.

Categories: Environmental organizations based in Vietnam, Wildlife conservation
Education for Nature Vietnam
Biology
1,793
17,578,696
https://en.wikipedia.org/wiki/Marine%20spatial%20planning
Marine spatial planning (MSP), also known interchangeably as maritime spatial planning, is an ocean management instrument which aids policy-makers and stakeholders in compartmentalizing sea basins within state jurisdiction according to social, ecological and economic objectives, in order to make informed and coordinated decisions about how to use marine resources sustainably. MSP generally uses maps to create a more comprehensive picture of a marine area, identifying where and how an ocean area is being used and what natural resources and habitat exist. It is similar to land-use planning, but for marine waters. Through the planning and mapping process of a marine ecosystem, planners can consider the cumulative effect of maritime industries on our seas, seek to make industries more sustainable, and proactively minimize conflicts between industries seeking to utilise the same sea area. The intended result of MSP is a more coordinated and sustainable approach to how our oceans are used, ensuring that marine resources and services are utilized within clear environmental limits so that marine ecosystems remain healthy and biodiversity is conserved.

Definition and concept

The most commonly used definition of marine spatial planning was developed by the Intergovernmental Oceanographic Commission (IOC) of UNESCO. The main elements of marine spatial planning include an interlinked system of plans, policies and regulations; the components of environmental management systems (e.g. setting objectives, initial assessment, implementation, monitoring, audit and review); and some of the many tools that are already used for land use planning. Whatever the building blocks, the essential consideration is that they need to work across sectors and give a geographic context in which to make decisions about the use of resources, development, conservation and the management of activities in the marine environment.

Effective marine spatial planning has several essential attributes:

Multi-objective. Marine spatial planning should balance ecological, social, economic, and governance objectives, but the overriding objective should be increased sustainability.
Spatially focused. The ocean area to be managed must be clearly defined, ideally at the ecosystem level, and certainly large enough to incorporate relevant ecosystem processes.
Integrated. The planning process should address the interrelationships and interdependence of each component within the defined management area, including natural processes, activities, and authorities.

The IOC-UNESCO Marine Spatial Planning Programme helps countries implement ecosystem-based management by finding space for biodiversity, conservation and sustainable economic development in marine areas. IOC-UNESCO has developed several guides, including a 10-step guide on how to get a marine spatial plan started: "Step-by-step Approach for Marine Spatial Planning toward Ecosystem-based Management". IOC-UNESCO has also developed a world-wide inventory of MSP activities. For an MSP programme to be successful, there is a crucial need to secure inter- and intra-sectoral cooperation, i.e. cooperation between sectors with diverging objectives (social, ecological and economic), in order to ensure equal fulfillment of all objectives sought to be achieved at sea.

Evaluation of spatially managed marine areas

To evaluate how well a marine spatial plan performs, the EU FP7 project MESMA (2009–2013) has developed a step-wise evaluation approach.
This framework provides guidance on the selection, mapping, and assessment of ecosystem components and human pressures. It also addresses the evaluation of management effectiveness and potential adaptations to management. Moreover, it provides advice on the use of spatially explicit tools for practical tasks like the assessment of cumulative impacts of human pressures or pressure-state relationships. Governance is directly linked to the framework through a governance analysis that can be performed in parallel and feeds into the different steps of the framework. To help managers, MESMA has developed a tools portal.

Tools

There are a number of useful and innovative tools that can help managers implement marine spatial planning. Some include:

USA MarineCadastre.gov
Australia's Marxan Software
SeaSketch, a collaborative geodesign tool for MSP
UCSB's Global Map of Human Impacts to Marine Ecosystems
Duke University's Marine Geospatial Ecology Tools
Center for Ocean Solutions' Collaborative Geospatial Information and Tools
MESMA Tools for monitoring and evaluation of marine spatial planning
Scotland's National Marine Plan Interactive and Marine Scotland Information Portal
Mid-Atlantic Ocean Data Portal
New England's Northeast Ocean Data Portal

Marine spatial planning in the European Union

Marine spatial planning within the context of the European Union is most often addressed as maritime spatial planning, and that name is therefore used in this section. The first mention of the concept appears in the European Commission's 2002 communication to the European Parliament, "Towards a Strategy to Protect and Conserve the Marine Environment". The report urged the need to plan sectoral activities within sea basins in order to measure environmental impacts and integrate protective measures. These EU-wide statements may have drawn inspiration from the 4th conference of Baltic Sea Ministers for Spatial Planning and Development, which sought to establish a transnational spatial planning cooperation including the management of marine and coastal areas. Maritime spatial planning officially became a central pillar of the European Commission's maritime policies with the publishing of the Integrated Maritime Policy (IMP) in October 2007. The following year, 2008, the European Commission introduced another marine-focused document, a Roadmap for Maritime Spatial Planning: Achieving Common Principles, and in 2012 further development took place when the Commission adopted a Communication on Blue Growth: Opportunities for marine and maritime sustainable growth, aiming to unlock the potential of the blue economy. After more than a decade of MSP programs within the EU, it was decided to pass EU-wide legislation on the matter in 2014, introducing the Maritime Spatial Planning Directive (2014/89/EU). MSP is presented as a vital environmental protection instrument of the IMP, as one of its central aims is to secure Good Environmental Status (GES), which involves the conservation of clean, healthy and productive seas.

The Maritime Spatial Planning Directive

According to the European Commission, the MSP Directive serves as a framework to organize the many sectoral activities and industries taking place in the sea basins surrounding the European Union, such as fishing, aquaculture, nature conservation, shipping and renewable energy installations.
The main objectives of the Directive are to reduce conflicts and increase cross-border cooperation among member states with regard to the efficient utilization of the sea basins, to encourage investment, and to ensure proper protection of the marine environment. The MSP Directive introduced requirements for Member States to establish their own maritime spatial planning strategies and implement these by 2021, especially targeting the 22 coastal Member States. In the context of the European Union, four central policy drivers for the implementation of maritime spatial planning can be identified: environmental legislation, legislation for renewable energy, fisheries regulation, and frameworks for cross-sectoral and integrated management.

MSP in the EU's marine renewable energy sector

As MSP is merely an instrument aiming to organize sectoral activities in sea basins, a look at the renewable energy sector can give an understanding of the workings of MSP. The European Commission highlights that the MSP Directive, alongside the aims outlined in the Biodiversity Strategy, is the primary legal framework for the achievement of the new marine renewable energy objectives within the European Union. As the EU is pioneering offshore wind energy, is the world's leading actor in the development of marine renewable energy, and possesses the world's largest installed base of renewable energy sources, MSP holds a promising role for the development of marine renewable energy, as it can streamline licensing and installations, reduce conflicts among maritime users, and increase legal security for stakeholders.

Challenges of MSP in the case of the EU's marine renewable energy

While the MSP Directive has the potential to simplify the balancing of renewable energy and the protection of nature, as it allows actors to divide sea basins into spaces with different uses, the Directive itself has weaknesses and thus cannot stand alone. It requires all coastal states to work out maritime spatial policies by 2021, but much discretion is left to the member states. The current EU legislation on the protection of nature, species and habitats, such as the Habitats and Birds Directives and the Water Framework Directive, contains derogation clauses; however, there is no obligation for member states to actually apply these and thus balance the creation of marine energy sources against the protection of nature (van Hees 2021: 28–31). It is widely recommended that spatial choices feed into the nature-preserving directives such as the Habitats and Birds Directives and the Water Framework Directive. The cumulative effects of offshore renewable energy are uncertain, as Cumulative Impact Assessments (CIA), Strategic Environmental Assessments (SEA), and Environmental Impact Assessments (EIA) are often implemented independently, which is why they largely fail to paint a full picture of the negative impacts. The EU's MSP is criticized in this regard because, when the results of these assessments prove inconclusive, the MSP Directive invokes the precautionary principle without specifying anything beyond that member states are to take "preventative measures". The use of EIA is further criticized for its inability to produce assessments for tidal and wave energy installations.
The ability to tackle the potential impacts of marine renewable energy sources such as tidal stream, wave energy and salinity gradient energy is a great concern due to uncertainty and the potential risks of sandbank erosion, underwater noise pollution from construction, sediment starvation, industrial heat waste, and physical damage to travelling species as well as aquatic environments. However, evidence shows that small-scale energy projects have fewer prospects of grave environmental implications; the main concerns lie with large-scale projects. A central challenge to MSP within the European Union is the lack of standardized data collection and integration across databases; improving this area could help paint a fuller picture of the environmental impacts and statuses across Member States' shared sea basins. To further develop and strengthen MSP within the European Union, it is vital to improve informed policy-making in the area of ocean management. Developing quantitative and comprehensive environmental sustainability assessment (QCESA) tools under the Sustainable Marine Ecosystem Services (SUMES) project allows for an integration of Life Cycle Assessments (LCA) and Ecosystem Service Assessments (ESA), potentially simplifying the decision-making process for policy-makers, as QCESA tools highlight the trade-offs between various sectoral activities.

MSP and marine renewable energy in EU Member States

Most research on MSP activities consists of case studies of individual Member States; because much discretion is left to the Member States, MSP varies a great deal across the EU. Belgium and the Netherlands can be seen as frontrunners in MSP, as both implemented their own MSP strategies before any EU-wide obligation to do so. From 2002 to 2005 the Belgian GAUFRE project sought to deal with the use of the North Sea, defining spatial scenarios that clearly visualized possibilities for the sea territory. Similarly, in 2005 the Dutch Ministry of Housing, Spatial Planning and the Environment published its first marine chapter in the national Spatial Planning Document, promoting efficient uses of marine spaces and drawing areas for shipping, military uses and ecologically valued areas. In 2013 Dutch ministries initiated a discussion on marine visions for the North Sea by 2050, involving cultivating the ecosystems whilst utilizing waves and currents to generate more renewable energy; to pursue this vision, the North Sea 2030 Strategy was initiated. The Netherlands also presents an interesting challenge that smaller coastal Member States can face: a shorter coastline simply means less maritime space to allocate, which results in stricter MSP policies that risk deprioritizing conservation of the marine environment in order to achieve economic goals. In 2009 France developed its National Strategy for the Oceans, welcoming the emerging focus on the blue economy, and further elaborated on this approach in 2016 with an alteration of the national Environmental Code, officially implementing the concept of MSP. The tools for implementing MSP are given in the national Sea Basin Strategies. In France, MSP practices currently focus on strengthening the concept of participation as listed in articles 9 and 10 of the MSP Directive (2014/89/EU).
Unlike the above-mentioned Member States, Spain has no overarching national maritime policy; however, the country has set down sector-specific legislation clearly governing the management of marine spaces. Currently there are 72 areas which have undergone Strategic Environmental Assessment (SEA) and been classified as suitable for wind farm installations, with the ability to be restricted based on the environmental impacts of the installations (Garcia et al. 2021: 2–3). Strengthening marine spatial planning in Spain could elevate the importance of maritime zones on the national agenda, as demonstrated by countries like the Netherlands and Belgium, where a unified, long-term vision has supported effective marine area planning.

Marine spatial planning in the United Kingdom

The Marine and Coastal Access Act 2009 defined arrangements for a new system of marine management, including the introduction of marine spatial planning, across the UK. Although the new system incorporates the principles of marine spatial planning as articulated by the European Commission, it is commonly referred to in the UK simply as 'marine planning'. Among the government's stated aims for the new marine planning system is to ensure that coastal areas, the activities within them and the problems they face are managed in an integrated and holistic way. This will require close interaction with town and country planning regimes and, in England and Wales, the new regime for nationally significant infrastructure projects (NSIPs) in key sectors, such as energy and transport.

The Marine Policy Statement

The cornerstone of the UK marine planning system is the Marine Policy Statement (MPS). It sets out the sectoral/activity-specific policy objectives that the UK Government, Scottish Government, Welsh Assembly Government and Northern Ireland Executive are seeking to achieve in the marine area in securing the UK vision of 'clean, healthy, safe, productive and biologically diverse oceans and seas.' The MPS is the framework for preparing Marine Plans and taking decisions that affect the marine environment in England, Scotland, Wales and Northern Ireland. It will also set the direction for new marine licensing and other authorisation systems in each administration. It is proposed that the draft MPS, which was subject to consultation in 2010, will be formally adopted as Government policy in 2011.

The Marine Management Organisation

In England, the new arrangements provide for the creation of the Marine Management Organisation (MMO), which started work in April 2010. The MMO will deliver UK marine policy objectives for English waters through a series of statutory Marine Plans and other measures. The first Marine Plans will start to be prepared by the MMO on adoption of the MPS in 2011. The UK Government's Consultation on a marine planning system for England document provides, for the benefit of the MMO and other interested parties, more detail on the scope, structure, content and process envisaged for each Marine Plan.

Marine Scotland (Scottish Government)

Marine Scotland is the government authority which will implement marine planning in Scottish waters under the Marine (Scotland) Act. A pre-consultation National Marine Plan was prepared in 2011, and the final Plan was released in March 2015.

Marine spatial planning in the United States

On June 12, 2009, President Obama created an Interagency Ocean Policy Task Force to provide recommendations on ocean policy, including MSP.
Some individual states have already undertaken MSP initiatives:

Massachusetts

The Massachusetts Ocean Act, enacted in May 2008, requires the secretary of the Massachusetts Office of Energy and Environmental Affairs to develop a comprehensive ocean management plan. The plan will be submitted to NOAA for incorporation into the existing coastal zone management plan and enforced through the state's regulatory and permitting processes, including the Massachusetts Environmental Policy Act (MEPA) and Chapter 91, the state's waterways law. The goal is to institute a comprehensive approach to ocean resource management that supports ecosystem health and economic vitality, balances current ocean uses, and considers future needs. This will be accomplished by determining where specific ocean uses will be permitted and which ocean uses are compatible.

Rhode Island

The Rhode Island Ocean Special Area Management Plan, or Ocean SAMP, serves as a federally recognized coastal management and regulatory tool. It was adopted by the Coastal Resources Management Council (CRMC), the state's coastal management agency, on October 19, 2010. The Ocean SAMP was then adopted by the National Oceanic and Atmospheric Administration (NOAA) on May 11, 2011. Using the best available science, the Ocean SAMP provides a balanced approach to the development and protection of Rhode Island's ocean-based resources. Research projects undertaken by University of Rhode Island (URI) scientists provide the essential scientific basis for Ocean SAMP policy development. The Ocean SAMP document underwent an extensive public review process prior to adoption.

California

In 1999, the California state legislature adopted the Marine Life Protection Act. This action required the state to evaluate and possibly redesign all existing state marine protected areas and to potentially create new protected areas that could, to the greatest degree possible, act as a networked system. (Marine protected area designations in California include state marine reserves, marine parks, and marine conservation areas.) This effort does not meet the full definition of marine spatial planning, since its goal was to site only protected areas rather than all potential ocean uses, but many of its elements (such as stakeholder involvement and mapping approaches) will be of interest to marine spatial planners.

Oregon

Two controversial ocean issues led to a marine spatial planning effort: concern by fishermen over the designation of marine reserves off the Oregon coast, and proposals by industry to site wave energy facilities in Oregon ocean waters. An executive order directed the Oregon Department of Land Conservation and Development to work with stakeholders and scientists to prepare a plan for ocean energy development (also known as wave energy). This plan was then to be adopted as part of the Oregon Territorial Sea Plan. The state has appointed an advisory committee and expects to adopt the plan in early 2010. It will include mandatory policies for state and federal agency decisions with regard to locating ocean energy facilities in the Oregon Territorial Sea.

Washington

In March 2010, the Washington State Legislature enacted the Marine Waters Planning and Management Act to address resource use conflicts. A report to the legislature providing guidance and recommendations for moving forward was produced in 2011, and based on the 2012 report, the legislature authorized funds to begin the MSP process off Washington's coast.
A state law required an interagency team to provide recommendations to the Washington State Legislature about how to effectively use Marine Spatial Planning and integrate MSP into existing state management plans and authorities. The team is chaired by the Governor's office and coordinated by the Department of Ecology. Other members include the Washington Department of Natural Resources, Washington Sea Grant, the Washington Department of Fish and Wildlife, and the Washington State Parks and Recreation Commission. See also Land use planning Marine Park Marine Protected Area Zoning References Further reading ABPmer (2005). Marine Spatial Planning Pilot Literature Review. Peterborough. Online: http://www.abpmer.net/mspp ABPmer (2006). Marine Spatial Planning Pilot Final Report. Peterborough. Online: http://www.abpmer.net/mspp Joint Marine Programme Marine Update 55 (2007). Marine Spatial Planning: A down to earth view of managing activities in the marine environment for the benefit of humans and wildlife Long R. (2007). Marine Resource Law. Dublin: Thompson Round Hall. Gubbay S. (2004). Marine protected areas in the context of marine spatial planning—discussing the links. A report for WWF-UK. Online: https://web.archive.org/web/20070106114002/http://www.wwf.org.uk/filelibrary/pdf/MPAs-marinespacialplanning.pdf External links and references UNESCO International Ocean Council MSP Guide NOAA's MSP Information Site Marine Spatial Planning from Plymouth Marine Institute White House Memorandum creating Interagency Ocean Policy Task Force MESMA Toolbox for monitoring and evaluation of marine spatial planning Oceanography
Marine spatial planning
Physics,Environmental_science
4,044
14,841,405
https://en.wikipedia.org/wiki/Pseudo-zero%20set
In complex analysis (a branch of mathematical analysis), the pseudo-zero set or root neighborhood of a degree-m polynomial p(z) is the set of all complex numbers that are roots of polynomials whose coefficients differ from those of p by a small amount. Namely, given a norm on the space of polynomial coefficients, the pseudo-zero set is the set of all zeros of all degree-m polynomials q such that the distance ||q − p|| (with p and q regarded as vectors of coefficients) is less than a given ε. See also List of complex analysis topics Timeline of calculus and mathematical analysis References Complex analysis
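Because the definition quantifies over all small perturbations, the pseudo-zero set can be approximated numerically by sampling perturbed coefficient vectors and collecting their roots. The following Python sketch is a minimal illustration of that idea, not part of the article; the example polynomial z^3 − 1, the choice of the Euclidean norm, and the sample count are all arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
p = np.array([1.0, 0.0, 0.0, -1.0])  # coefficients of p(z) = z^3 - 1, degree m = 3
eps = 0.05                           # perturbation radius in the 2-norm

pseudo_zeros = []
for _ in range(5000):
    d = rng.normal(size=p.size)
    d *= eps * rng.random() / np.linalg.norm(d)  # random coefficient vector with ||d|| < eps
    pseudo_zeros.extend(np.roots(p + d))         # roots of a nearby polynomial q

# pseudo_zeros now samples the root neighborhood: three small clusters
# around the cube roots of unity, whose size grows with eps.

Plotting the collected points (for example with matplotlib) makes the shape of the root neighborhood visible and shows how it swells as ε increases.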
Pseudo-zero set
Mathematics
117
25,940,561
https://en.wikipedia.org/wiki/The%20Power%20of%20Half
The Power of Half: One Family's Decision to Stop Taking and Start Giving Back is a book written by Kevin Salwen and his teenage daughter Hannah in 2010. The book describes how the Salwen family decided to sell their home so that they could donate half the proceeds to charity. It discusses the initial decision-making, the process of selling the home, making the donation, downgrading to a smaller home, and what they learned in the process. The book details the Salwens' process in choosing a charity partner that would fit their values and effect a lasting change, and how their actions supporting and empowering a village in Ghana differed from their original idea of "direct involvement". Synopsis The book details why and how the Salwen family decided to sell their home in 2006. The home was a luxurious, 6,500-square-foot (600-square-meter), 1912 historic dream-house in Ansley Park, in midtown Atlanta, Georgia. It had Corinthian columns, five bedrooms, eight fireplaces, four ornate bathrooms, and a private elevator to Hannah's bedroom. The family downgraded by replacing their home with a house that was half as expensive and less than half the size. The Salwens donated half the proceeds ($850,000) of the sale of their original home to The Hunger Project, a charity that works to lessen the hunger of 30,000 rural villagers in over 30 villages in Ghana, and helps them gain self-reliance. The book describes the consensus-driven process that the parents and their two children used, over a period of time, to reach the decision to give away half the value of their home, and how they chose the charity from a number of non-profit organizations that they considered. It describes the challenges that the family faced while turning their family project into a reality, from economic ones to keeping the project a secret for a period of time so that they would not appear to be "freaks" to their friends. Before they embarked on the project, the family members had little contact with one another, other than at meals. Hannah notes that The Power of Half "is a relationships book, not really a giving book." She feels that the project helped her family grow closer to one another. The New York Times Book Review notes how the family "became happier with less—and urges others to do likewise." "We know that selling a house is goofy, and we recognize that most people can't do it", admitted Kevin Salwen. "We never encourage anybody to sell their house. That was just the thing that we had more than enough of. For others it may be time, or lattes, or iTunes downloads, or clothes in their closet. But everyone has more than enough of something." He clarified: We want our kids to be idealistic, but we also say, 'Let's not go too nuts here'. We're not Mother Teresa. We're not taking a vow of poverty, or giving away half of everything we own. We gave away half of one thing, which happened to be our house. Everybody can give away half of one thing, and put it to use. You'll do a little bit of good for the world–and amazing things for your relationships. Popular reception Archbishop Desmond Tutu praised Hannah and the Salwens for the project, remarking: "We often say that young people must not let themselves be infected by the cynicism of their elders. Hannah inoculated her family with the vision to dream a different world, and the courage to help create it." Skeptics criticized them for "self-promotion" or for the amount of money they donated to charity.
Some critics questioned their choice of charity—finding fault with the family for having donated their money to help needy people in Africa rather than in the United States. Not everyone understood the parents' egalitarian approach with their children, or the family's underlying philosophy. Commented one viewer of a television interview: "What kind of ass clown works his tail off, and busts his hump getting a decent education, only to listen to his kid suggest they give away the house?" As Kevin Salwen noted: "Most people are supportive. And a few are very uncomfortable." Asked facetiously whether Hannah, then still in high school, had "concocted the world's greatest college-admissions ploy", Kevin laughed and replied: "No. Anyway, wouldn't it be the world's most expensive?" Critical reception Reviewing it for The Washington Post, Lisa Bonos wrote that the book, "soaring in idealism, and yet grounded in realism, can show Americans of any means how best to give back." Nicholas D. Kristof, writing in The New York Times, said he found the project "crazy, impetuous, and utterly inspiring", and that "It's a book that, frankly, I'd be nervous about leaving around where my own teenage kids might find it. An impressionable child reads this, and the next thing you know your whole family is out on the street." In the Los Angeles Times, Susan Salter Reynolds wrote: "You feel lighter reading this book, as if the heavy weight of house and car and appliances, the need to collect these things to feel safe as a family, are lifted and replaced by something that makes much more sense." Lili Rosboch wrote for Bloomberg that it "is an inspiring book about the decision to trade objects for togetherness and the chance to help others." Writing in Grist, Jen Harper said that while she was somewhat skeptical before she started the book, the "compelling and well-written narrative left me both impressed and inspired," and that she found the book "endearing, funny, and uplifting". Courtney E. Martin wrote in The Daily Beast that the book "is highly accessible, sure to be devoured by Oprah devotees and disaffected finance guys hoping for a jolt of optimism." Bill Williams of The Boston Globe called it "spirited". Also writing for The Boston Globe, Joseph P. Kahn said "they're my new role models" – after admitting: "I confess to being fixated on the opposite life formula. Call it the Power of Twice. As in, twice the leisure time, twice the income, twice the sleep. A man can dream, can't he?" Subsequent projects The project inspired others to commit to donating half their money, or half of a possession or income, to charity. In an interview in Natural Home Magazine, Hannah noted that "A number of my friends at Atlanta Girls School have started their own Half projects, including a couple who are donating half of their babysitting money to environmental causes. That's pretty flattering." Rev. Tess Baumberger, the Minister at Unity Church of North Easton, Massachusetts, read the book and announced that in December 2010 the Church would give away half of its Sunday collections to a local charity. Baumberger remarked: "What will we learn by practicing the power of half? What will this program teach our children and youth? I cannot wait to find out." Melinda Gates, ex-wife of Microsoft founder Bill Gates, said she was inspired by the Salwens' philanthropic efforts. 
On December 9, 2010, Bill Gates, Mark Zuckerberg (Facebook's CEO), and investor Warren Buffett signed a promise they called the "Gates-Buffett Giving Pledge", in which they promised to donate to charity at least half of their wealth. After launching the Giving Pledge, the Gateses invited the Salwens to Seattle for a photo shoot and conversation about The Power of Half. References External links The Power of Half: One Family's Decision to Stop Taking and Start Giving Back, by Hannah Salwen and Kevin Salwen Book excerpt in Parade, January 17, 2010 Video by the Salwen children, describing the Power of Half project thepowerofhalf.com, website and blog The Power of Half Facebook page Video; April 27, 2010 Simple living 2010 non-fiction books Sustainability books American non-fiction books Altruism Philanthropy Giving
The Power of Half
Biology
1,682
38,618,650
https://en.wikipedia.org/wiki/HD%2028527
HD 28527 is a star in the constellation Taurus, and a member of the Hyades open cluster. It is faintly visible to the naked eye with an apparent visual magnitude of 4.78. Its distance has been determined from its parallax shift, and its heliocentric radial velocity shows that it is moving away from the Earth. Based upon a stellar classification of A6 IV by Cowley et al. (1969), this is an A-type subgiant star that has exhausted the hydrogen at its core and is evolving away from the main sequence. Older studies classed it as an A-type main-sequence star of class A7 V. At the age of 307 million years, it has a high rate of spin, rotating on its axis once every 1.278 days. It is a Delta Scuti variable with 1.75 times the mass of the Sun and 2.2 times the Sun's radius. The star is radiating 19 times the Sun's luminosity from its photosphere at an effective temperature of 8,274 K. Due to its location near the ecliptic, this star is subject to lunar occultations. These events have provided occasional, but not definitive, evidence of a close secondary companion. Eggleton and Tokovinin (2008) catalogue this as a possible triple star system, with the inner pair being similar stars at a small angular separation, and the outer component a magnitude 6.7 star of class F2 at a much wider separation. See also Taurus (Chinese astronomy) List of stars in Taurus References A-type subgiants Delta Scuti variables Triple stars Taurus (constellation) Durchmusterung objects Gliese and GJ objects 028527 021029 1427 Suspected variables
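The quoted physical parameters are mutually consistent: by the Stefan–Boltzmann relation, L/L☉ = (R/R☉)² (T/T☉)⁴. The short Python check below is illustrative only; the radius, temperature, and luminosity are taken from the article, while the nominal solar effective temperature of 5,772 K is an assumed standard value, not from the article.

# Stefan-Boltzmann consistency check for HD 28527
R = 2.2         # radius in solar radii (from the article)
T = 8274.0      # effective temperature in kelvin (from the article)
T_SUN = 5772.0  # nominal solar effective temperature (assumed IAU reference value)

L = R**2 * (T / T_SUN)**4
print(f"L = {L:.1f} Lsun")  # about 20 Lsun, close to the article's figure of 19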
HD 28527
Astronomy
372
5,547,312
https://en.wikipedia.org/wiki/Vacuum%20deposition
Vacuum deposition is a group of processes used to deposit layers of material atom-by-atom or molecule-by-molecule on a solid surface. These processes operate at pressures well below atmospheric pressure (i.e., vacuum). The deposited layers can range from a thickness of one atom up to millimeters, forming freestanding structures. Multiple layers of different materials can be used, for example to form optical coatings. The process can be qualified based on the vapor source; physical vapor deposition uses a liquid or solid source and chemical vapor deposition uses a chemical vapor. Description The vacuum environment may serve one or more purposes: reducing the particle density so that the mean free path for collisions is long; reducing the particle density of undesirable atoms and molecules (contaminants); providing a low-pressure plasma environment; providing a means for controlling gas and vapor composition; and providing a means for mass flow control into the processing chamber. Condensing particles can be generated in various ways: thermal evaporation; sputtering; cathodic arc vaporization; laser ablation; and decomposition of a chemical vapor precursor (chemical vapor deposition). In reactive deposition, the depositing material reacts either with a component of the gaseous environment (Ti + N → TiN) or with a co-depositing species (Ti + C → TiC). A plasma environment aids in activating gaseous species (N2 → 2N) and in decomposition of chemical vapor precursors (SiH4 → Si + 4H). The plasma may also be used to provide ions for vaporization by sputtering or for bombardment of the substrate for sputter cleaning and for bombardment of the depositing material to densify the structure and tailor properties (ion plating). Types When the vapor source is a liquid or solid, the process is called physical vapor deposition (PVD), which is used in semiconductor devices, thin-film solar panels, and glass coatings. When the source is a chemical vapor precursor, the process is called chemical vapor deposition (CVD). The latter has several variants: low-pressure chemical vapor deposition (LPCVD), plasma-enhanced chemical vapor deposition (PECVD), and plasma-assisted CVD (PACVD). Often a combination of PVD and CVD processes is used in the same or connected processing chambers. Applications
Electrical conduction: metallic films, resistors, transparent conductive oxides (TCOs), superconducting films & coatings
Semiconductor devices: semiconductor films, electrically insulating films
Solar cells
Optical films: anti-reflective coatings, optical filters
Reflective coatings: mirrors, hot mirrors
Tribological coatings: hard coatings, erosion-resistant coatings, solid film lubricants
Energy conservation & generation: low-emissivity glass coatings, solar absorbing coatings, mirrors, solar thin film photovoltaic cells, smart films
Magnetic films: magnetic recording
Diffusion barriers: gas permeation barriers, vapor permeation barriers, solid state diffusion barriers
Corrosion protection
Automotive applications: lamp reflectors and trim applications
Vinyl record pressing, manufacture of gold and platinum records
A thickness of less than one micrometre is generally called a thin film, while a thickness greater than one micrometre is called a coating.
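The value of a long mean free path can be made concrete with the kinetic-theory estimate λ = k_B·T / (√2·π·d²·P), where d is the molecular diameter and P the pressure. The Python sketch below is illustrative only; the nitrogen kinetic diameter and the chosen pressures are assumed textbook values, not figures from the article.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # gas temperature, K
d = 3.7e-10         # approximate kinetic diameter of N2, m (assumed)

def mean_free_path(pressure_pa):
    """Mean free path lambda = kT / (sqrt(2) * pi * d^2 * P), in meters."""
    return K_B * T / (math.sqrt(2) * math.pi * d**2 * pressure_pa)

for p in (101325.0, 1.0, 1e-3):  # atmosphere, rough vacuum, high vacuum (Pa)
    print(f"P = {p:9.3e} Pa  ->  lambda = {mean_free_path(p):.2e} m")

At around 10^-3 Pa the mean free path comes out near 7 m, larger than a typical chamber, so vapor atoms travel from source to substrate essentially without gas-phase collisions.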
See also Ion plating Sputter deposition Cathodic arc deposition Spin coating Metallised film Molecular vapor deposition References Bibliography SVC, "51st Annual Technical Conference Proceedings" (2008) SVC Publications (previous proceedings available on CD) Anders, Andre (editor) "Handbook of Plasma Immersion Ion Implantation and Deposition" (2000) Wiley-Interscience Bach, Hans and Dieter Krause (editors) "Thin Films on Glass" (2003) Springer-Verlag Bunshah, Rointan F. (editor) "Handbook of Deposition Technologies for Films and Coatings", second edition (1994) Glaser, Hans Joachim "Large Area Glass Coating" (2000) Von Ardenne Anlagentechnik GmbH Glocker and I. Shah (editors), "Handbook of Thin Film Process Technology", Vol. 1 & 2 (2002) Institute of Physics (2 vol. set) Mahan, John E. "Physical Vapor Deposition of Thin Films" (2000) John Wiley & Sons Mattox, Donald M. "Handbook of Physical Vapor Deposition (PVD) Processing", 2nd edition (2010) Elsevier Mattox, Donald M. "The Foundations of Vacuum Coating Technology" (2003) Noyes Publications Mattox, Donald M. and Vivienne Harwood Mattox (editors) "50 Years of Vacuum Coating Technology and the Growth of the Society of Vacuum Coaters" (2007) Society of Vacuum Coaters Westwood, William D. "Sputter Deposition", AVS Education Committee Book Series, Vol. 2 (2003) AVS Willey, Ronald R. "Practical Monitoring and Control of Optical Thin Films" (2007) Willey Optical, Consultants Willey, Ronald R. "Practical Equipment, Materials, and Processes for Optical Thin Films" (2007) Willey Optical, Consultants Thin film deposition Vacuum Industrial processes
Vacuum deposition
Physics,Chemistry,Materials_science,Mathematics
1,037
41,983,973
https://en.wikipedia.org/wiki/Rees%20matrix%20semigroup
In mathematics, the Rees matrix semigroups are a special class of semigroups introduced by David Rees in 1940. They are of fundamental importance in semigroup theory because they are used to classify certain classes of simple semigroups. Definition Let S be a semigroup, I and Λ non-empty sets and P a matrix indexed by I and Λ with entries pλ,i taken from S. Then the Rees matrix semigroup M(S; I, Λ; P) is the set I×S×Λ together with the product formula (i, s, λ)(j, t, μ) = (i, s pλ,j t, μ). Rees matrix semigroups are an important technique for building new semigroups out of old ones. Rees' theorem In his 1940 paper Rees proved the following theorem characterising completely simple semigroups: a semigroup is completely simple if and only if it is isomorphic to a Rees matrix semigroup over a group. That is, every completely simple semigroup is isomorphic to a semigroup of the form M(G; I, Λ; P) for some group G. Moreover, Rees proved that if G is a group and G0 is the semigroup obtained from G by attaching a zero element, then M(G0; I, Λ; P) is a regular semigroup if and only if every row and column of the matrix P contains an element that is not 0. If such an M(G0; I, Λ; P) is regular, then it is also completely 0-simple. See also Semigroup Completely simple semigroup David Rees (mathematician) Footnotes References Semigroup theory
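The product formula is easy to experiment with computationally. The Python sketch below is an illustration, not from the article: it builds M(G; I, Λ; P) for the small group G = Z4 under addition modulo 4, with arbitrarily chosen index sets and sandwich matrix, and spot-checks associativity by brute force.

# Rees matrix semigroup M(G; I, Lam; P) over G = Z_4 (addition mod n)
n = 4                          # the group Z_n, a stand-in for a general group G
I = [0, 1]                     # index set I
Lam = [0, 1, 2]                # index set Lambda
P = [[1, 0], [2, 3], [0, 1]]   # sandwich matrix with entries P[lam][i] in Z_n

def mul(x, y):
    """(i, s, lam)(j, t, mu) = (i, s + P[lam][j] + t, mu), writing Z_n additively."""
    (i, s, lam), (j, t, mu) = x, y
    return (i, (s + P[lam][j] + t) % n, mu)

# brute-force associativity check over all triples of elements
S = [(i, s, lam) for i in I for s in range(n) for lam in Lam]
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in S for b in S for c in S)
print(f"M(Z_{n}; I, Lam; P) has {len(S)} elements and an associative product")

Because every entry of P here is a group element and hence invertible, the construction yields a completely simple semigroup, in line with Rees' theorem.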
Rees matrix semigroup
Mathematics
323
62,586,161
https://en.wikipedia.org/wiki/Thangjing%20Temple%2C%20Moirang
Thangjing Temple, also known as Ibudhou Thangjing Temple, is an ancient temple dedicated to the god Thangjing, the ancient national deity of Keke Moirang (in modern-day Moirang). The best time to visit the temple is from May to July, during the traditional music and dance religious festival of Lai Haraoba. The temple attracts many tourists every year, including historians and archaeologists. According to legend, it is the place where the dance was first performed. See also Hiyangthang Lairembi Temple Sanamahi Temple Sanamahi Kiyong Temple References Ancient archaeological sites Ancient culture Archaeological monuments in India Archaeological sites in India Landmarks in India Meitei architecture Meitei pilgrimage sites Monuments and memorials in Manipur Monuments and memorials to Meitei people Religious places Tourist attractions in Manipur Temples in Manipur Bishnupur district
Thangjing Temple, Moirang
Engineering
182
17,066,948
https://en.wikipedia.org/wiki/Xylogics
Xylogics, Inc., formerly Xylogic Systems, Inc., was an American computer company independently active from 1971 to 1995. Originally headquartered in Needham, Massachusetts, the company produced a variety of hardware products for minicomputers, initially focusing on typesetting and word processing workstations based on DEC computers. The company also specialized in the design of disk controllers and networking hardware and software. The company was acquired by Bay Networks in 1995. History Xylogics was founded in 1971 by three former NASA employees: Laurence Liebson, Robert Bushkoff and Stephen Rotman. The company was originally named Xynetic Systems, but this name was already in use by a California company, so the group in Needham, Massachusetts, changed their name in late 1971 to Xylogic Systems. Their original business was the design and development of computerized newspaper typesetting and editing systems. The first system was developed for the Daytona Beach News Journal, with Farmington, New Mexico's Farmington Daily Times getting the second system. By 1972, Xylogics had grown to more than 15 people, and moved to Natick, Massachusetts. The company used the GRI minicomputer, and custom designed many circuit boards to support disk drives, paper tape punches and readers, and automatic capture of newswire service feeds. By 1974, the company had developed a CRT editing station, and offered systems of up to 4 computers and more than 50 terminals for newspaper or in-plant publishing to perform editing and typesetting. About this time, a second division was created to design and build disk controllers for DEC computers, derived in part from the successful designs and manufacturing capability developed for the newspaper business. In 1976, a major customer of Xylogics, Dymo Graphic Systems, purchased the newspaper product line and hired most of the original developers. Dymo Graphic Systems, of Wilmington, MA, was the first company to develop laser technology for typesetting applications. Dymo Graphic Systems combined their typesetting equipment business with the Xylogics editing systems, and by 1978 had over 100 turnkey typesetting systems in use worldwide. The Xylogic Systems typesetting capability was the first with WYSIWYG (What You See Is What You Get) printing capability for tabloid-size page layout, and later full-page layout. Capability included on-line classified ad capture with automated pricing, in addition to full page markup and typesetting. In 1977, Dymo was purchased by Esselte Corporation, which wanted control of the highly successful "Dymo Label Maker" consumer product. Esselte sold the newspaper and typesetting business to ITEK Corporation in 1978, which wanted the laser technology IP. ITEK declined to support the newspaper and typesetting business, and over the course of 1979 the newspaper product development and manufacturing staff of 150 engineers, technicians, assemblers, and field support personnel dwindled to 2 by January 1980. Xylogics continued building disk and other controllers for DEC hardware. They also built serial terminal servers from 4-port to 72-port units under the product name Annex. Xylogics was acquired by Bay Networks in December 1995, which in turn was acquired by Nortel in June 1998. After Nortel's bankruptcy in 2009, support for the remaining Annex products ended up with Avaya. Xylogics was later located in Burlington, Massachusetts.
The Multibus-based Xylogics 450 SMD and Xylogics 451 ESMD disk controllers, along with Interphase Multibus SMD disk controllers, were significant to the workstation and minicomputer industry during the 1980s as a low-cost interface to the low-cost, high-performance, high-capacity SMD and ESMD disk drives available at the time. Sun-1, Sun-2, and Sun-3 servers, as well as Silicon Graphics IRIS and HP/Apollo workstations, all used Xylogics 450 and 451 disk controllers. References 1971 establishments in Massachusetts 1995 disestablishments in Massachusetts American companies established in 1971 American companies disestablished in 1995 Computer companies established in 1971 Computer companies disestablished in 1995 Computer buses Computer storage devices Defunct computer companies of the United States Defunct computer hardware companies Defunct networking companies Defunct software companies of the United States Minicomputers Networking hardware companies Technology companies based in Massachusetts
Xylogics
Technology
884
46,416,037
https://en.wikipedia.org/wiki/Celltrion
Celltrion, Inc. is a biopharmaceutical company headquartered in Incheon, South Korea. Celltrion Healthcare conducts worldwide marketing, sales, and distribution of biological medicines developed by Celltrion. Celltrion's founder, Seo Jung-jin, is the richest person in South Korea and was awarded the 2021 EY World Entrepreneur Of The Year. History In 1999, Nexol, Inc. (now Celltrion Healthcare Co., Ltd.) was founded as a global business management consulting firm. In 2002, Celltrion, Inc. was founded as a biopharmaceutical company. In 2008, Nexol and Celltrion established a global distribution agreement. In 2009, distribution channels were established in America, Oceania, and Europe (Hospira), and Nexol, Inc. was renamed Celltrion Healthcare Co., Ltd. In 2010, distribution channels were established in Japan (Nippon Kayaku), the Commonwealth of Independent States (CIS), Eastern Europe, and the Middle East (Egis). In 2013, distribution channels were added in Europe (Mundipharma, Biogaran, and Kern). On September 30, 2024, Celltrion completed the establishment of its overseas corporation in Vietnam. On October 22, 2024, Celltrion secured a 100.4 billion won ($72.8 million) contract development and manufacturing (CDMO) deal with TEVA Pharmaceuticals International for the migraine treatment Ajovy. In November 2024, Celltrion acquired iQone Healthcare Switzerland, a pharmaceutical company, for around 30 billion won ($21 million). This move will help its expansion in Europe. Products The company's products are manufactured at mammalian cell culture facilities designed and built to comply with the United States FDA's cGMP and the European Medicines Agency's GMP standards. Inline product Remsima (infliximab) is a biosimilar monoclonal antibody against tumor necrosis factor alpha (TNF-α), approved by the European Medicines Agency (EMA) for treatment of: rheumatoid arthritis, adult Crohn's disease, pediatric Crohn's disease, ulcerative colitis, pediatric ulcerative colitis, ankylosing spondylitis, psoriatic arthritis, and psoriasis. In 2012 Remsima was approved by the Republic of Korea's Ministry of Food and Drug Safety (MFDS), previously known as the Korea Food and Drug Administration, and in 2013 it became the world's first biosimilar monoclonal antibody (mAb) approved by the EMA. Herzuma is a biosimilar trastuzumab approved by the MFDS for treatment of early and advanced (metastatic) HER2+ breast cancer as well as advanced (metastatic) stomach cancer. Herzuma is a HER2+ breast cancer therapy designed to treat aggressive HER2-positive metastatic and adjuvant breast cancer, as well as HER2-positive adenocarcinoma of the stomach that has spread (metastatic or advanced gastric cancer). Truxima (previously known as CT-P10) is the first biosimilar of the reference monoclonal antibody rituximab, which targets the CD20 molecule primarily found on the surface of B-cells. Its target indications are rheumatoid arthritis, non-Hodgkin lymphoma and chronic lymphocytic leukemia. It was approved by the EMA in February 2017. See also List of pharmaceutical companies References External links Pharmaceutical companies of South Korea Life sciences industry Companies based in Incheon Biotechnology companies established in 1999 Biotechnology companies of South Korea South Korean brands South Korean companies established in 1999 Companies listed on the Korea Exchange Companies in the KOSPI 200 Companies in the S&P Asia 50
Celltrion
Biology
808
571,341
https://en.wikipedia.org/wiki/DOT%20%28graph%20description%20language%29
DOT is a graph description language, developed as a part of the Graphviz project. DOT graphs are typically stored as files with the .gv or .dot filename extension — .gv is preferred, to avoid confusion with the .dot extension used by versions of Microsoft Word before 2007. dot is also the name of the main program to process DOT files in the Graphviz package. Various programs can process DOT files. Some, such as dot, neato, twopi, circo, fdp, and sfdp, can read a DOT file and render it in graphical form. Others, such as gvpr, gc, acyclic, ccomps, sccmap, and tred, read DOT files and perform calculations on the represented graph. Finally, others, such as lefty, dotty, and grappa, provide an interactive interface. The GVedit tool combines a text editor and a non-interactive viewer. Most programs are part of the Graphviz package or use it internally. DOT is historically an acronym for "DAG of tomorrow", as the successor to a DAG format and a dag program which handled only directed acyclic graphs. Syntax Graph types Undirected graphs At its simplest, DOT can be used to describe an undirected graph. An undirected graph shows simple relations between objects, such as reciprocal friendship between people. The graph keyword is used to begin a new graph, and nodes are described within curly braces. A double-hyphen (--) is used to show relations between the nodes.

// The graph name and the semicolons are optional
graph graphname {
    a -- b -- c;
    b -- d;
}

Directed graphs Similar to undirected graphs, DOT can describe directed graphs, such as flowcharts and dependency trees. The syntax is the same as for undirected graphs, except the digraph keyword is used to begin the graph, and an arrow (->) is used to show relationships between nodes.

digraph graphname {
    a -> b -> c;
    b -> d;
}

Attributes Various attributes can be applied to graphs, nodes and edges in DOT files. These attributes can control aspects such as color, shape, and line styles. For nodes and edges, one or more attribute–value pairs are placed in square brackets [] after a statement and before the semicolon (which is optional). Graph attributes are specified as direct attribute–value pairs under the graph element, where multiple attributes are separated by a comma or using multiple sets of square brackets, while node attributes are placed after a statement containing only the name of the node, but not the relations between nodes.

graph graphname {
    // This attribute applies to the graph itself
    size="1,1";
    // The label attribute can be used to change the label of a node
    a [label="Foo"];
    // Here, the node shape is changed.
    b [shape=box];
    // These edges both have different line properties
    a -- b -- c [color=blue];
    b -- d [style=dotted];
    // [style=invis] hides a node.
}

HTML-like labels are supported, although initially Graphviz did not handle them. Comments DOT supports C and C++ style single line and multiple line comments. In addition, it ignores lines with a number sign symbol # as their first character, like many interpreted languages. Layout programs The DOT language defines a graph, but does not provide facilities for rendering the graph. There are several programs that can be used to render, view, and manipulate graphs in the DOT language: General
Graphviz – a collection of CLI utilities and libraries to manipulate and render graphs into different formats like SVG, PDF, PNG etc.
dot – CLI tool for conversion between DOT and other formats
JavaScript
Canviz – a JavaScript library for rendering DOT files
d3-graphviz – a JavaScript library based on Viz.js and D3.js that renders DOT graphs and supports animated transitions between graphs and interactive graph manipulation
Vis.js – a JavaScript library that accepts DOT as input for network graphs
Viz.js – a JavaScript port of Graphviz that provides a simple wrapper for using it in the browser
hpcc-js/wasm Graphviz – a fast WASM library for Graphviz, similar to Viz.js
Java
Gephi – an interactive visualization and exploration platform for all kinds of networks and complex systems, dynamic and hierarchical graphs
Grappa – a partial port of Graphviz to Java
graphviz-java – an open source partial port of Graphviz to Java, available from github.com
ZGRViewer – a DOT viewer
Other
Beluging – a Python- & Google Cloud Platform-based viewer of DOT and Beluga extensions
Delineate – a Rust application for Linux that can edit fully-featured DOT graphs with interactive preview, and export them as PNG, SVG, or JPEG
dot2tex – a program to convert files from DOT to PGF/TikZ or PSTricks, both of which are rendered in LaTeX
OmniGraffle – a digital illustration application for macOS that can import a subset of DOT, producing an editable document (but the result cannot be exported back to DOT)
Tulip – a software framework in C++ that can import DOT files for analysis
VizierFX – an Apache Flex graph rendering library in ActionScript
Notes See also External links DOT tutorial and specification Drawing graphs with dot Node, Edge and Graph Attributes Node Shapes Gallery of examples Graphviz Online: instant conversion and visualization of DOT descriptions Boost Graph Library lisp2dot or tree2dot: convert Lisp programming language-like program trees to DOT language (designed for use with genetic programming) Mathematical software Graph description languages Graph drawing
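As a practical complement to the DOT article above, the following Python sketch writes a small DOT file and invokes the dot layout program to render it. It assumes Graphviz is installed so that the dot executable is on the PATH; the file names are arbitrary.

import subprocess
from pathlib import Path

source = """\
digraph graphname {
    a -> b -> c;
    b -> d [style=dotted];
}
"""

Path("graphname.gv").write_text(source)  # .gv is the preferred extension
subprocess.run(["dot", "-Tsvg", "graphname.gv", "-o", "graphname.svg"], check=True)
print("rendered graphname.svg")

Swapping -Tsvg for -Tpng or -Tpdf selects a different output format, mirroring the formats listed for Graphviz above.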
DOT (graph description language)
Mathematics
1,210
66,277,135
https://en.wikipedia.org/wiki/Pachygonosaurus
Pachygonosaurus (meaning "wide angled [vertebrae] lizard") is a genus of ichthyosaur from Upper Silesia, Poland (then part of the German Empire). It was described in 1916 by Friedrich von Huene and has a single species, Pachygonosaurus robustus, based solely on the holotype, which is composed of two vertebral centra discovered in 1910; a further three vertebrae may also belong to the genus. Nowadays, Pachygonosaurus is considered a nomen dubium. See also Timeline of ichthyosaur research References Cited bibliography Ichthyosaurs Nomina dubia Fossils of Poland Fossil taxa described in 1916 Ichthyosauromorph genera
Pachygonosaurus
Biology
146
11,817,082
https://en.wikipedia.org/wiki/Cercospora%20hayi
Cercospora hayi is a fungal plant pathogen. It can cause brown spot disease in bananas. References hayi Fungal plant pathogens and diseases Fungus species
Cercospora hayi
Biology
35
65,885,589
https://en.wikipedia.org/wiki/Cistercian%20numerals
The medieval Cistercian numerals, or "ciphers" in nineteenth-century parlance, were developed by the Cistercian monastic order in the early thirteenth century at about the time that Arabic numerals were introduced to northwestern Europe. They are more compact than Arabic or Roman numerals, with a single glyph able to indicate any integer from 1 to 9,999. Digits are based on a horizontal or vertical stave, with the position of the digit on the stave indicating its place value (units, tens, hundreds or thousands). These digits are compounded on a single stave to indicate more complex numbers. The Cistercians eventually abandoned the system in favor of the Arabic numerals, but marginal use outside the order continued until the early twentieth century. History The digits and the idea of forming them into ligatures were apparently based on a two-place (1–99) numeral system introduced into the Cistercian Order by John of Basingstoke, archdeacon of Leicester, who, it seems, based them on a twelfth-century English shorthand (ars notaria). In its earliest attestations, in the monasteries of the County of Hainaut, the Cistercian system was not used for numbers greater than 99, but it was soon expanded to four places, enabling numbers up to 9,999. The two dozen or so surviving Cistercian manuscripts that use the system date from the thirteenth to the fifteenth century, and cover an area from England to Italy, Normandy to Sweden. The numbers were not used for arithmetic, fractions or accounting, but indicated years, foliation (numbering pages), divisions of texts, the numbering of notes and other lists, indexes and concordances, arguments in Easter tables, and the lines of a staff in musical notation. Although mostly confined to the Cistercian order, there was some usage outside it. A late-fifteenth-century Norman treatise on arithmetic used both Cistercian and Indo-Arabic numerals. In one known case, Cistercian numerals were inscribed on a physical object, indicating the calendrical, angular and other numbers on the fourteenth-century astrolabe of Berselius, which was made in French Picardy. After the Cistercians had abandoned the system, marginal use continued outside the order. In 1533, Heinrich Cornelius Agrippa von Nettesheim included a description of these ciphers in his Three Books of Occult Philosophy. The numerals were used by wine-gaugers in the Bruges area at least until the early eighteenth century. In the late eighteenth century, the Chevaliers de la Rose-Croix of Paris briefly adopted the numerals for mystical use, and in the early twentieth century the Nazis considered using the numerals as Aryan symbolism. The modern definitive expert on Cistercian numerals is the mathematician and historian of astronomy David A. King. Form A horizontal stave was most common while the numerals were in use among the Cistercians. A vertical stave was attested only in Northern France in the fourteenth and fifteenth centuries. However, eighteenth- and twentieth-century revivals of the system in France and Germany used a vertical stave. There is also some historical variation as to which corner of the number represented which place value. The place-values shown here were the most common among the Cistercians and the only ones used later. Using graphic substitutes with a vertical stave, the first five digits are 1, 2, 3, 4, 5. Reversing them forms the tens, 10, 20, 30, 40, 50. Inverting them forms the hundreds, 100, 200, 300, 400, 500, and doing both forms the thousands, 1,000, 2,000, 3,000, 4,000, 5,000.
Thus a glyph with a digit 1 at each corner is the number 1,111. (The exact forms varied by date and by monastery. For example, the digits shown here for 3 and 4 were in some manuscripts swapped with those for 7 and 8, and the 5's may be written with a lower dot, with a short vertical stroke in place of the dot, or even with a triangle joining to the stave, which in other manuscripts indicated a 9.) Horizontal numbers were formed the same way, but rotated 90 degrees counter-clockwise. Omitting a digit from a corner meant a value of zero for that power of ten, but there was no digit zero. (That is, an empty stave was not defined.) Higher numbers When the system spread outside the order in the fifteenth and sixteenth centuries, numbers into the millions were enabled by compounding with the digit for "thousand". For example, a late-fifteenth-century Norman treatise on arithmetic indicated 10,000 as a ligature of "1,000" wrapped under and around "10" (and similarly for higher numbers), and Noviomagus in 1539 wrote "million" by subscripting "1,000" under another "1,000". A late-thirteenth-century Cistercian doodle had differentiated horizontal digits for lower powers of ten from vertical digits for higher powers of ten, but that potentially productive convention is not known to have been exploited at the time; it could have covered numbers into the tens of millions (horizontal 10⁰ to 10³, vertical 10⁴ to 10⁷). A sixteenth-century mathematician used vertical digits for the traditional values, horizontal digits for millions, and rotated them a further 45° counter-clockwise for billions and another 90° for trillions, but it is not clear how the intermediate powers of ten were to be indicated, and this convention was not adopted by others. The Ciphers of the Monks The Ciphers of the Monks: A Forgotten Number-notation of the Middle Ages, by David A. King and published in 2001, describes the Cistercian numeral system. The book received mixed reviews. Historian Ann Moyer lauded King for re-introducing the numerical system to a larger audience, since many had forgotten about it. Mathematician Detlef Spalt claimed that King exaggerated the system's importance and made mistakes in applying the system in the book devoted to it. Moritz Wedell, however, called the book a "lucid description" and a "comprehensive review of the history of research" concerning the monks' ciphers. Notes References External links FRB Cistercian font (OTF) at GitHub. Uses the Private Use Area, since Unicode has declined to assign character codes. Font characters are segments, to be combined into the complete numerals. Cistercian number generator at dCode. Uses digit shapes similar to the astrolabe (vertical stave, triangular 5). L2/20-290 Background for Unicode consideration of Cistercian numerals Cistercian Web Component for use on web pages. Includes a live updating Cistercian numeral clock. Numeral systems 13th-century introductions Writing systems introduced in the 2nd millennium
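The place-value scheme lends itself to a compact computational description. The Python helper below is hypothetical, not drawn from any cited source: it splits an integer from 1 to 9,999 into the digit/corner pairs that a single Cistercian glyph encodes.

def cistercian_parts(n):
    """Split 1..9999 into (digit, place) pairs; a zero digit leaves its corner empty."""
    if not 1 <= n <= 9999:
        raise ValueError("Cistercian numerals cover 1 to 9,999")
    places = ("units", "tens", "hundreds", "thousands")
    return [(n // 10**i % 10, place)
            for i, place in enumerate(places)
            if n // 10**i % 10 != 0]

print(cistercian_parts(1111))  # [(1, 'units'), (1, 'tens'), (1, 'hundreds'), (1, 'thousands')]
print(cistercian_parts(5090))  # [(9, 'tens'), (5, 'thousands')]

Rendering the glyph itself is then a matter of drawing each digit's stroke at its corner of the stave, which is what the fonts listed above do.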
Cistercian numerals
Mathematics
1,465
1,029,697
https://en.wikipedia.org/wiki/Brood%20%28comics%29
The Brood are a fictional race of insectoid, parasitic, extraterrestrial beings appearing in American comic books published by Marvel Comics, especially Uncanny X-Men. Created by writer Chris Claremont and artist Dave Cockrum, they first appeared in The Uncanny X-Men #155 (March 1982). Concept and creation According to Dave Cockrum, the Brood were originally conceived to serve as generic subordinates for the main villain of The Uncanny X-Men #155: "We had Deathbird in this particular story and Chris [Claremont] had written into the plot 'miscellaneous alien henchmen.' So I had drawn Deathbird standing in this building under construction and I just drew the most horrible looking thing I could think of next to her." Biology Physical characteristics The Brood are an alien race of insectoid beings. They are a specialized race, one that has evolved to reproduce and consume any available resource. They are sadistic creatures that enjoy the suffering they cause others, especially the terror their infection causes their hosts. Despite their resemblance to insects, the Brood have endoskeletons as well as exoskeletons. Also unlike insects, they have fanged jaws instead of mandibles. Their skulls are triangular and flat, with a birthmark between their eyes. Their two front legs are tentacles they can use to manipulate objects. Due to their natural body armor and fangs, the Brood are very dangerous in combat. In addition, they have stingers that can deliver either a paralyzing or a killing poison. Reproduction The Brood reproduce asexually and have no clear gender. They reproduce by forcibly implanting their eggs into other sentient organisms. Each host can only support one egg. Upon hatching, the host dies as the Brood egg releases mutagenic enzymes into the bloodstream. At the same time, the Broodling mentally attacks and assimilates its host. The Brood share a hive mind that passes memory between members and hosts: an individual's knowledge, given to a broodling, passes to the hive and back to the queen, meaning newborn Brood know what any member of the race knows. Until the embryo takes over the host's body, it can only gain temporary control of the host, often without the host noticing, as the host is unaware when it loses control. If the host possesses any powers, the resultant Brood will inherit them. The persona of the host once the Brood is "born" appears to be extinguished, but in some cases the host's will may be strong enough to survive and coexist with the Brood's. However, it is implied that hosts with advanced healing abilities are unable to turn; for example, when an egg was implanted in Deadpool, instead of Deadpool turning into a Brood, a small Brood burst out of his body. Civilization The Brood have a civilization based on typical communal insect societies, such as those of bees and ants. The Empress is the absolute ruler, while the Queens lead individual Brood colonies and the "sleazoids" do all the work; despite their evil, they never rebel against their Queens, perhaps due to the latter's telepathic abilities. However, the Queens have no allegiance to each other. Some roles have proven to be flexible. The Empress is the ruler of the Brood and contains the species' hive mind. She exercises almost total control over her progeny, including determining which Brood become Queens and which remain Warriors-Prime. The Empress is larger than other Brood and has horns, whiskers, and telepathic powers. The Firstborn are the children and servants of the Empress.
Because they are not born from hosts, they do not possess the Warrior-Prime ability to conceal their appearance by shifting into their host-forms. The Firstborn are larger than other Brood and possess biological armor and teleportation, but lack wings. The Brood Queens fulfill the mental commands of the Empress and can communicate with their spawn via telepathy. Additionally, they lead Brood colonies and have venomous stingers. The Broodlings are Brood workers and warriors who are organized into several different roles, among them Weaponeers, Clan-Masters, Hunt-Masters, Huntsmen, Tech Handlers, and Scholars. Elite Broodlings are known as Warriors-Prime. The Brood King is a mutant Brood created when a King egg is implanted in its host. Unlike those infected with Queen or Drone eggs, the Brood King cannot infect others. It is later revealed that the King-type egg was created through Kree experimentation. Understanding the Brood's volatile nature, the Kree created an egg-like device that can disrupt the species' matriarchy, control them, and use them as weapons to disrupt rival advanced civilizations. Broo, a Brood drone who developed sentience, later eats the device, temporarily giving him its properties. Technology The Brood, like most alien species, possess advanced technology; however, due to their nature it is unknown whether they developed it themselves or assimilated it for their own benefit. These include: Interstellar warships: despite using the Acanti, the Brood also use actual ships, though with a mixture of organic and inorganic material. Energy-based weapons Psi-scream weapons: gun-like devices that attack the minds of targets with subconscious fears and hatreds. Inhibitor fields that block telepathy. Nanotechnology Teleportation Fictional species biography The Brood are the Main Universe's first natural predators, spawned in a dark galaxy prior to the emergence of Galactus from his incubator. Their planet of origin is unknown, but it is rumored that the Brood originated from another dimension. They were eventually found and captured by the Kree Empire, along with other hive species, so that the Kree could weaponize them and use them against rival empires. The Supreme Intelligence approved of the idea, stating that they could be used against the Shi'ar Empire, although it foresaw that it would take millions of years to create a large enough army to be fully unleashed as a weapon against their enemies. After eight million years of experimentation, the Black Judges deemed the Brood a major success, and the Brood were unleashed on the Shi'ar Galaxy, where they found certain large space-dwelling creatures that they decided to prey upon and use as living starships to infest neighboring star systems, initiating an intergalactic campaign to build a fearsome empire. These space-dwelling creatures included the whale-like Acanti and the shark-like Starsharks. Years later, the Kree warrior Mar-Vell is ordered to make contact with the stranded Grand Admiral Devros on a planet in the Absolom Sector, a region known to be infested with Brood. Mar-Vell's team, which includes the medic Una and Colonel Yon-Rogg, is ambushed by Brood warriors after landing on the planet and taken prisoner by the Brood-infected Devros. The colony's Brood Queen impregnates each captive with Brood embryos, but Mar-Vell and Una manage to escape, destroy both leaders of the Brood colony, and rid themselves of their infections using Una's modified omni-wave projector, which had been designed to eliminate Brood embryos.
After rescuing Colonel Yon-Rogg, the trio escape the planet and are rescued by the Shi'ar royal Deathbird. Deathbird later allies with the Brood to gain their help in deposing her sister Lilandra Neramani as ruler of their empire. As a reward for their help, Deathbird gives the Brood Lilandra, the X-Men, and Carol Danvers, along with Fang of the Imperial Guard, to use as hosts. The Brood infect the entire party except Danvers, on whom they perform experiments because of her half-human/half-Kree genes. Wolverine's adamantium skeleton allows his healing ability to purge him of the embryo, and he helps the others escape. He is unable to save Fang, who becomes a Brood warrior before they leave. The Brood Queen orders her forces to find them, until she is contacted by the Queen embryo implanted in Cyclops. It explains that the X-Men are returning to Broodworld. Resigned to their dooms, the heroes help the Acanti recover the racial Soul, a supernatural force that must be passed from one Acanti leader ("The Prophet-Singer") to the next. The Soul is located in a crystalline part of the dead Acanti Prophet-Singer's brain. Afterwards, the Prophet-Singer leads the Acanti to safety in deep space. Returning to Earth with the Starjammers, the X-Men defeat and detain the Brood Queen infecting Charles Xavier. The advanced medical facilities at the Starjammers' disposal are able to transfer Xavier's consciousness from the Brood Queen's body to a new cloned body, enabling Xavier to walk again. A Brood-filled starshark later crashes on Earth, leading to the infection of several nearby humans by the Brood. One of the victims is allowed to live as a human assistant, but when he leads the aliens to some mutants, the Brood infect him and the mutants as well. It is revealed that the Brood can morph into the host's form or a hybrid of the two forms. In the course of the battle, an Earth woman named Hannah Connover is infected with a queen, though this problem would not develop until later. Another branch of the Brood manages to land on Earth and infect more mutants, along with the Louisiana Assassins Guild, of which the X-Man Gambit is a member. The X-Men kill most of the infected people. They and Ghost Rider manage to rescue many of the Brood's other uninfected prisoners, only to have the "Spirit of Vengeance" become infected himself. Psylocke manages to separate Ghost Rider from the Brood host before it can kill Danny Ketch, the current host of the Ghost Rider, and he and the X-Men save New Orleans. Hannah Connover, previously infected with a queen, soon begins to demonstrate Brood attributes. She uses her new-found "healing" powers to become a faith healer and cure many people alongside her reverend husband, but secretly her Brood nature causes her to infect many people with embryos. Across the galaxy, on the "true" Brood Homeworld, the Brood Empress sends her "firstborn" Imperial Assassins to kill Hannah for going against the Empress' wishes. Unable to stop future waves of Assassins from coming, the X-Man Iceman freezes Connover, putting her in suspended animation and causing the current Firstborn to kill themselves, as in their minds the mission was accomplished. Connover, still carrying her Brood Queen, is assumed to remain in suspended animation in the custody of the X-Men. In Contest of Champions II, the Brood and the Badoon abduct several heroes and pose as a benevolent species willing to give the heroes access to advanced technology after they compete against each other in a series of contests.
However, in reality the Brood intend to use Rogue, infested with a Brood Queen, to absorb the powers of the contest winners and become unstoppable. Fortunately Iron Man, relying on his armor's own life-support systems to keep from succumbing to the "infection", realizes that the Brood are drugging the food to amplify aggression and is able to uncover the plot. Although the Queen had already absorbed the powers and skills of the various contest winners (in the form of Captain America, Thor, the Hulk, Spider-Man, Jean Grey and the Scarlet Witch), the remaining heroes managed to defeat her. The Brood Queen was extracted from Rogue with the aid of Carol Danvers, who forced the Brood Queen to flee by threatening to kill Rogue. After confirming that Rogue was cured, the heroes returned home. A mixed team of X-Men and Fantastic Four members was formed to investigate what had happened to the NASA space station Simulacra, only to discover that it had been taken over by a Brood scouting party leading the way to Earth for the Brood armies. After battling them, the team left the station, leaving the infected crew members alive owing to the interference of the Invisible Woman, despite the desire of Wolverine and Emma Frost to kill them. Soon a Brood invasion arrived at New York City. The X-Men and Fantastic Four defended the city from the Brood despite facing overwhelming odds. Using an enhanced Cerebro, Emma Frost projected a telepathic hallucination of the Phoenix and Galactus appearing in the city, which caused the Brood to panic and recall their forces to the dozens of Acanti ships, after which they fled Earth. It was also revealed that at the dawn of civilization, in the year 2610 BC, a spaceship filled with Brood crash-landed in Egypt, marking the end of the second great dynasty. The Brood went as far as turning a Pharaoh into one of their own, and it would have been the end of days if not for Imhotep and a group of soldiers, among them En Sabah Nur, who were able to successfully fend off the invasion. Imhotep himself killed the Queen. The Brood return to Earth in the Ms. Marvel series and battle Carol Danvers, who as Binary had played a key role in their earlier defeat. Strangely enough, none of the Brood present recognize who she is, possibly because of her inability to fully access her cosmic powers, which has also changed her physical appearance. The Brood are also stalked and summarily exterminated by the alien hunter called Cru, with whom Ms. Marvel also comes into violent contact. It later turns out that there had been escape pods from the Acanti, and the other one carried the Brood Queen, who landed on Monster Island. Cru itself was back on Earth, having regenerated, and was searching for the Brood Queen. Ms. Marvel, seeing her as a threat, fought Cru again and in the process merged part of their minds, temporarily leaving both unable to use their powers and therefore vulnerable to the Brood. The Brood Queen had established a nest on the island and infected the Moloids with Brood eggs. Ms. Marvel then discovered that the Brood Queen who ruled over the Brood of Sleazeworld had survived and now had a crystalline form. Upon arriving on the island, the Operation: Lightning Storm strike team and Wonder Man battled the Brood, while Cru and Ms. Marvel, having regained the ability to use their powers, fought the Brood Queen. In the process Cru was killed, after which the Brood Queen was taken into space by Ms. Marvel, who destroyed her with a nuclear weapon.
During the invasion of Annihilus and his Annihilation Wave, the Brood were decimated, and the species is now on the brink of extinction. Some Brood appear in the arena of the planet Sakaar in the Planet Hulk storyline of The Incredible Hulk, one of them even becoming a main character. A Brood referred to as "No-Name", who is made a genetic queen because their race is becoming rarer, becomes the lover of the insect king Miek and also appears in World War Hulk. When it is discovered that Miek was the one who let the Hulk's shuttle explode, No-Name and Hulk attack Miek. Near the end of the war, the "Earth Hive", the shared consciousness of every insect on Earth, uses Humbug as a Trojan horse to deal a crippling blow to No-Name, rendering her infertile and poisoning the last generation of hivelings growing in Humbug's body. No-Name is a rarity among the Brood, as she learned to feel compassion for other living beings. The Brood reappeared once again in the pages of Astonishing X-Men; however, these Brood are revealed to be genetically grown hybrids created by a geneticist known only as Kaga, who started growing and redesigning them using data about post-M-Day work found on Henry McCoy's research computers. In the 2011 "Meanwhile" storyline in Astonishing X-Men, S.W.O.R.D. scientists successfully find a way to remove a Brood embryo from a human host, but not before the Brood they are studying escape and attack, prompting a botched rescue mission led by Abigail Brand and another rescue mission led by the X-Men. Given the chance to lower the Brood's numbers further, the X-Men discover that the Annihilation event has destabilized the interstellar ecosystem, since the Brood, dangerous as they are, served as natural predators for even worse species. These remaining species are now breeding out of control and present a greater threat than the Brood ever did. With no other choice, the X-Men act to prevent the Brood's extinction. According to Bishop, there would be a race of benevolent Brood in the future, prompting the X-Men to willingly serve as Brood hosts so that they could instill them with the same compassion felt by No-Name. After being connected to the hive-mind, the X-Men learn of a nearby Brood who was born with the ability to feel compassion, making him the Brood equivalent of a mutant. While such Brood are typically destroyed upon hatching by their kind, this one was permitted to live due to the Brood's dwindling numbers. After rescuing the Brood mutant, defeating the Brood in battle, and allowing them to escape, the X-Men have their Brood embryos removed, to be raised aboard the Peak with the Brood mutant acting as their mentor. The 2012 X-Men subseries Wolverine and the X-Men featured a Broodling as a student at Wolverine's Jean Grey School for Higher Learning. Nicknamed "Broo" by Oya, the Broodling was a mutant, both intelligent and non-violent, who wore clothing and glasses (which he felt made him look less frightening). Broo expressed a desire to join the Nova Corps. In a possible future timeline seen by Deathlok, Broo joins the X-Men. During the Age of Ultron storyline, it is revealed that while in a hidden S.H.I.E.L.D. substation decades in the past, the future Wolverine released and was infected by a less menacing Brood. When he cut the embryo out of his body, the Brood Collective responded to the attack by altering the physical structure of all future Brood to the form they are now known for.
During the Infinity storyline, a Brood Queen appeared as a member of the Galactic Council, where she represents the Brood race, indicating that the Brood Empress was apparently among the casualties of the Annihilation Wave. She later made a deal with J'son, the former emperor of the Spartoi Empire, under which J'son surrendered the planet Spartax to the Brood and in exchange would acquire one planet for every ten worlds they conquered thereafter. In Spider-Man and the X-Men, the Brood made a pact with the symbiotes but ended up being betrayed and possessed until Spider-Man, with the help of the X-Men and S.W.O.R.D., managed to defeat them. During The Black Vortex storyline, the deal between the Brood Queen and J'son is discovered as the Brood begin their takeover of Spartax, intending to use the entire planet as hosts. The plot is foiled once Kitty Pryde, cosmically empowered by the Black Vortex, banishes the Brood from the planet. Later, the Galactic Council manipulated Thanos into attacking the Earth to make way for them to raze the planet; the Queen of the Brood was killed by Angela before she and the other leaders of the Galactic Council could begin their attack. Dario Agger and the Roxxon Energy Corporation managed to obtain some Brood. Using parasites implanted in wolves, Agger and one of his scientists sent them to track down Weapon H. When Weapon H slew them, Dario Agger sent Brood drones, Brood-infected space sharks, and a Brood-infected human riding an Acanti to attack Weapon H. After the Brood drones and space sharks were slain and the Acanti knocked out, the Brood-infected human told Weapon H that Roxxon wanted to hire him. Weapon H replied that those who claim to help people kill them anyway, and had the Brood-infected human carry a message to Roxxon to leave him alone. Later, a Brood Queen came to Earth to find the perfect host so that she could begin the process of becoming the new Brood Empress. Her attempts at targeting astronauts were thwarted by the discovery that J. Jonah Jameson was the perfect host, but Frank Castle was able to interfere. The New Mutants later returned from a space adventure with a mysterious egg that proved extremely valuable to the rival alien nation-states of the Kree and the Shi'ar, as well as to the Brood: the Kree and Shi'ar fought over its location, and the Brood invaded Earth to get it back. The X-Men beat them back while Broo and his colleagues studied the egg and learned that it was the King Egg, a superweapon developed by Kree scientists on the Kree capital world of Hala thousands of years before the modern Marvel Universe. It was designed to foster the Brood race and instill a patriarchal element that, when activated, could give one member a supercharged version of the Empress' pheromonal control, turning the entire Brood species into a controllable army that the Kree could set against rival intergalactic powers. This led every Brood Queen to send her swarms in pursuit of the egg, to prevent any loss of her power over the Brood hive-mind. The Brood initially attacked Earth, but a small team of X-Men including Cyclops, Jean Grey, Havok, Vulcan and Broo was able to get the King Egg off of Earth and lead the aliens off-planet. The Brood followed, and the X-Men eventually ended up alongside the Starjammers and members of the Shi'ar Imperial Guard. 
After crash-landing on an abandoned planet, it initially looked as though the assembled heroes would be overwhelmed and wiped out by the sheer number of Brood attacking them. But as the various factions of the Brood descended upon the X-Men in a massive battle, the Brood suddenly halted their attack: Broo had eaten the King Egg. Eating the egg enhanced Broo's biology, increasing his pheromone output to the point where even the Brood Queens became subservient to him; for all intents and purposes, it turned Broo into the Brood King. Some time later, the X-Men received a distress call from deep space and found that the galaxy's Brood problem was not as solved as they had thought. Although becoming the Brood King had given Broo the ability to control the savage alien race he was both a part of and so different from, rogue Brood factions had begun running wild, killing his friends while he was unable to stop them. It was soon revealed that Nightmare was the force behind the recent Brood expansion, using his abilities to usurp control of the race whenever Broo was asleep, specifically to take revenge on the X-Men for Jean Grey's earlier victory over him; meanwhile the new Brood Empress, unhappy that Broo had control of the entire Brood, used this opportunity to break free of him. The Empress also blamed the Kree for creating the King Egg that gave Broo control over her race, and pursued a convoluted scheme to convert superheroes into Brood and use them as an army against the Kree; the part-Kree Captain Marvel was particularly key to this plot. However, the plan backfired when the Empress killed Binary, enraging Carol Danvers to the point that she created a black hole which killed not only the Empress but her loyal Brood as well. Known Brood The following characters are either Brood or were turned into Brood: Assassin – A Brood that was spawned from an Assassin's Guild member. Blake – A servant of the Roxxon Energy Corporation who was infected by the Brood parasite to help apprehend Weapon H. Blindside – A Brood Mutant who can teleport. He was killed by Storm. Brickbat – A Brood Mutant with super-strength. He was killed when Havok collapsed a building in which a support beam impaled him. Broo – A Brood born a mutant while held on the Pandora's Box space station. Broodskrulls – A group of Brood–Skrull hybrids. Buchanan Mitty – A former entomologist turned Brood. Deadpal – A small Brood born from Deadpool's body after a failed transformation. Devros – A former Kree turned Brood. Dive-Bomber – A Brood Mutant who can fly using the wings on its back. He was killed by Havok. Dzilòs – A Brood killed by Wolverine. Empress Brood – Fang – An Imperial Guard member turned Brood. Haeg'Rill – One of the Brood who allied with Deathbird. Hannah Conover – A known Brood Queen who is married to William Conover. Harry Palmer – A human paramedic turned Brood, and the leader of the Brood Mutants, having infected various mutants. He was killed by Wolverine. Josey Thomas – A human paramedic turned Brood who was Harry Palmer's partner among the Brood Mutants. She was later killed by the Empress Brood. Kam'N'Ehar – One of the Brood who allied with Deathbird. Karl Lykos Brood clone – A clone with a mixture of both Sauron and Brood DNA, created by Kaga to join his army to annihilate the X-Men. 
Khasekhemwy (Khasekhemui) – A Pharaoh who ruled Egypt during the Second Dynasty and was infected by the Brood. He and the Brood with him were killed by a coalition led by Imhotep. Krakoa Brood clone – A clone with a mixture of both Krakoa and Brood DNA, created by Kaga to join his army to annihilate the X-Men. Lockup – A Brood Mutant with a paralyzing touch. He was killed when Havok collapsed a stage on him and Spitball. Nassis – A former Shi'ar student turned Brood. No-Name – A Brood Queen who is a member of the Warbound. Queen of the Brood – An unnamed Brood Queen who is a member of the Galactic Council. Skur'kll – One of the Brood who allied with Deathbird. Spitball – Robert Delgado, a lawyer from Denver whose mutant power allows him to spit plasma. He was among the mutants turned into Brood by Harry Palmer and, during the fight with the X-Men, was killed when Havok collapsed a stage on him and Lockup. T'Crilēē – A hunt-master who contacted a Shi'ar vessel. Temptress – A Brood Mutant whose pheromones enable her to enslave anyone to her control. After ensnaring Psylocke and Rogue, Temptress was killed by Wolverine. Tension – A Brood Mutant who can extend his arms to constrict anyone. After attacking Reverend William Conover, Tension was killed by Havok. Tuurgid – A former Frost Giant turned Brood. Whiphand – A Brood Mutant who can transform his arms into long bands of energy that disrupt the neuro-functions of anyone they touch. He was killed by Colossus, who snapped his neck. Xzax – A Brood mercenary who was a member of Dracula's New Frightful Four. He was killed when Deadpool slammed him into a moving truck. Zen-Pram – A former Kree turned Brood. Other versions Age of Apocalypse In the Age of Apocalypse timeline, without the X-Men to aid them, part of the Shi'ar Imperium was consumed by the Brood, who infected its populace with Brood implants, including the still-captive Christopher Summers. Escaping to Earth, Summers fought to control his Brood implant, but was captured by Mister Sinister. Sinister turned him over to the Dark Beast, who then proceeded to experiment on him for years. Summers eventually escaped and began infecting other humans (including the Age of Apocalypse version of Joseph "Robbie" Robertson, as well as friends of Misty Knight and Colleen Wing). Ultimately, Corsair transformed into a Brood Queen and attempted to kill Alex, but was killed by his son Cyclops. The Summers brothers cremated their father, indirectly depriving Sinister of the chance to carry out further tests on Brood DNA. Amalgam Comics In Amalgam Comics, the Brood are combined with Brother Blood to form Brother Brood, and with the Cult of Blood to form the Cult of Brood. The Brood appear alongside Brother Brood, but are presented as supernatural rather than extraterrestrial. Bishop's timeline According to the time-traveling X-Man Bishop, there are benign factions of Brood in the future. It is speculated that these "good" Brood originated from Hannah Conover. JLA/Avengers In JLA/Avengers, the Brood have a brief cameo, where they are seen attacking Mongul and apparently invading Warworld as the two universes begin to come together. WildC.A.T.s/X-Men In WildC.A.T.s/X-Men: The Silver Age, alien hybrids of the Brood and Daemonites are created by Mister Sinister. Ultimate Marvel In the Ultimate Marvel universe, the Brood appeared as a Danger Room training exercise during the Tempest arc of Ultimate X-Men. 
The Brood are later revealed to be creatures native to the mindscape, where the Shadow King dwells. X-Men: The End In X-Men: The End, which takes place in a possible future, the Brood hatch a plan with Lilandra (possessed by Cassandra Nova). Nova plans to solidify her rule over Shi'ar space by smuggling in an other-dimensional Brood queen from an alternate universe in which the X-Men never fought the Brood; these Brood are therefore described as 'pure'. This Brood Queen is implanted in Lilandra's sister, Deathbird. Marvel 2099 During the bout of insanity brought on by Psiclone, the androgynous harlot Cash imagined the Brood among the races of extraterrestrials who swarmed the streets of Transverse City. X-Men '92 In the comic book series X-Men '92, set in the universe of the X-Men animated series, a cadre of mutant Brood called the X-Brood (composed of Hardside, Fastskin, Phader, Sharpwing and Openmind) was tracked down by the Shi'ar until they were saved by the X-Men. Earth X In Earth X, while telling Isaac Christians about the Dire Wraiths exiled into Limbo by the Spaceknights, Kyle Richmond mentions the Brood, wondering why invasion attempts were always made by shapeshifting races such as the Skrulls, the Impossible Men or the Brood. Marvel Zombies: Resurrection In Marvel Zombies: Resurrection, the infection that has transformed most of Earth's heroes into zombie-like beings is revealed to be the result of a Brood infesting Galactus, which allowed the Brood to achieve a new state of being and expand their resources even further. Heroes Reborn (2021) In the 2021 "Heroes Reborn" comic, the Brood were responsible for infecting the Imperial Guard, who were allied with Hyperion. In other media Television A heavily altered version of the Brood called the Colony appears in X-Men: The Animated Series. These versions are reptilian and possess metallic armor. The actual Brood make a cameo in an episode featuring Mojo. The Brood make a cameo appearance in the Avengers Assemble episode "Mojoworld". A member of the Brood makes a cameo appearance in the M.O.D.O.K. episode "Beware What from Portal Comes!". Video games A Brood Queen appears as a boss in X-Men (1994). The Brood appear in X-Men: Mutant Apocalypse. A species based on the Brood, called the Cerci, appears in X-Men Legends II: Rise of Apocalypse; they are genetically engineered, insectoid creatures with animal-like intelligence. The Brood appear in Marvel Heroes. The Brood appear as a card in Marvel Snap. Collectibles One of the Marvel Milestone statues features Marc Silvestri's Brood-infected Wolverine cover for Uncanny X-Men #234. The Brood Queen is one of the "build a figure" toys in the Marvel Legends series. Broodling toys have been produced by Toy Biz (winged, for their X-Men line) and Marvel Select Toys (unwinged and based on Fang's transformation, in a two-pack with a Skrull warrior). References External links The Brood at Marvel.com The Brood at UncannyXmen.net Brood at Comic Vine Brood at Comic Book DB Fictional species and races Hive minds in fiction
Brood (comics)
Biology
6,812
349,755
https://en.wikipedia.org/wiki/Uniform%20norm
In mathematical analysis, the uniform norm (or sup norm) assigns, to real- or complex-valued bounded functions $f$ defined on a set $S$, the non-negative number $\|f\|_\infty = \sup\{\,|f(s)| : s \in S\,\}$. This norm is also called the supremum norm, the Chebyshev norm, the infinity norm, or, when the supremum is in fact the maximum, the max norm. The name "uniform norm" derives from the fact that a sequence of functions $f_n$ converges to $f$ under the metric derived from the uniform norm if and only if $f_n$ converges to $f$ uniformly. If $f$ is a continuous function on a closed and bounded interval, or more generally a compact set, then it is bounded and the supremum in the above definition is attained by the Weierstrass extreme value theorem, so we can replace the supremum by the maximum. In this case, the norm is also called the maximum norm. In particular, for a vector $x = (x_1, \dots, x_n)$ in finite-dimensional coordinate space, it takes the form $\|x\|_\infty = \max\{|x_1|, \dots, |x_n|\}$. This is called the $\ell^\infty$-norm. Definition Uniform norms are defined, in general, for bounded functions valued in a normed space. Let $X$ be a set and let $(Y, \|\cdot\|_Y)$ be a normed space. On the set $Y^X$ of functions from $X$ to $Y$, there is an extended norm defined by $\|f\| = \sup_{x \in X} \|f(x)\|_Y \in [0, \infty]$. This is in general an extended norm since the function $f$ may not be bounded. Restricting this extended norm to the bounded functions (i.e., the functions with finite extended norm) yields a (finite-valued) norm, called the uniform norm on $Y^X$. Note that the definition of the uniform norm does not rely on any additional structure on the set $X$, although in practice $X$ is often at least a topological space. The convergence on $Y^X$ in the topology induced by the uniform extended norm is uniform convergence, for sequences, and also for nets and filters on $Y^X$. We can define closed sets and closures of sets with respect to this metric topology; closed sets in the uniform norm are sometimes called uniformly closed and closures uniform closures. The uniform closure of a set of functions $A$ is the space of all functions that can be approximated by a sequence of uniformly converging functions in $A$. For instance, one restatement of the Stone–Weierstrass theorem is that the set of all continuous functions on $[a,b]$ is the uniform closure of the set of polynomials on $[a,b]$. For complex continuous functions over a compact space, this turns it into a C*-algebra (cf. Gelfand representation). Weaker structures inducing the topology of uniform convergence Uniform metric The uniform metric between two bounded functions $f, g$ from a set $X$ to a metric space $(Y, d_Y)$ is defined by $d(f, g) = \sup_{x \in X} d_Y(f(x), g(x))$. The uniform metric is also called the Chebyshev metric, after Pafnuty Chebyshev, who was first to systematically study it. In this case, $f$ is bounded precisely if $d(f, g)$ is finite for some constant function $g$. If we allow unbounded functions, this formula does not yield a norm or metric in a strict sense, although the obtained so-called extended metric still allows one to define a topology on the function space in question; the convergence is then still uniform convergence. In particular, a sequence $f_n$ converges uniformly to a function $f$ if and only if $\lim_{n \to \infty} d(f_n, f) = 0$. If $(Y, \|\cdot\|_Y)$ is a normed space, then it is a metric space in a natural way. The extended metric on $Y^X$ induced by the uniform extended norm is the same as the uniform extended metric $d(f, g) = \sup_{x \in X} \|f(x) - g(x)\|_Y$ on $Y^X$. Uniformity of uniform convergence Let $X$ be a set and let $(Y, \mathcal{U})$ be a uniform space. A sequence $f_n$ of functions from $X$ to $Y$ is said to converge uniformly to a function $f$ if for each entourage $V$ there is a natural number $n_0$ such that $(f_n(x), f(x))$ belongs to $V$ whenever $x \in X$ and $n \ge n_0$; similarly for a net. This is convergence in a topology on $Y^X$. In fact, the sets $\{(f, g) : (f(x), g(x)) \in V \text{ for all } x \in X\}$, where $V$ runs through entourages of $Y$, form a fundamental system of entourages of a uniformity on $Y^X$, called the uniformity of uniform convergence on $Y^X$. 
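As a numerical illustration of the uniform metric and of uniform convergence, here is a minimal Python sketch comparing $f_n(x) = x^n$ with its pointwise limit on two intervals. The sampling grid is an assumption of the sketch: a finite grid can only approximate a supremum, so the printed values underestimate the true sup distance.

```python
import numpy as np

def sup_metric(f, g, xs):
    """Approximate the uniform (Chebyshev) metric d(f, g) = sup_x |f(x) - g(x)|
    by taking the maximum over a finite sample grid xs."""
    return float(np.max(np.abs(f(xs) - g(xs))))

zero = lambda x: np.zeros_like(x)   # the pointwise limit of x**n on [0, 1)

for n in (10, 100, 1000):
    f_n = lambda x, n=n: x**n
    d_full = sup_metric(f_n, zero, np.linspace(0.0, 0.999, 10_000))  # grid in [0, 1)
    d_sub  = sup_metric(f_n, zero, np.linspace(0.0, 0.9, 10_000))    # grid in [0, 0.9]
    print(n, round(d_full, 3), d_sub)
# On [0, 1) the true sup distance is 1 for every n, so f_n does not converge
# uniformly there (the sampled value only underestimates this supremum).
# On [0, 0.9] the sup distance is 0.9**n -> 0, so convergence there is uniform.
```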
Uniform convergence is precisely convergence under the uniform topology. If $(Y, d_Y)$ is a metric space, then it is by default equipped with the metric uniformity. The metric uniformity on $Y^X$ with respect to the uniform extended metric is then the uniformity of uniform convergence on $Y^X$. Properties The set of vectors whose infinity norm is a given constant $c$ forms the surface of a hypercube with edge length $2c$. The reason for the subscript "$\infty$" is that whenever $f$ is continuous and $\|f\|_p < \infty$ for some $p$, then $\lim_{p \to \infty} \|f\|_p = \|f\|_\infty$, where $\|f\|_p = \left(\int_D |f|^p \, d\mu\right)^{1/p}$ with $D$ the domain of $f$; the integral amounts to a sum if $D$ is a discrete set (see p-norm). See also References Banach spaces Functional analysis Normed spaces Norms (mathematics)
Uniform norm
Mathematics
871
21,576,203
https://en.wikipedia.org/wiki/Stoneley%20wave
A Stoneley wave is a boundary wave (or interface wave) that typically propagates along a solid–solid interface. When found at a liquid–solid interface, this wave is also referred to as a Scholte wave. The wave is of maximum intensity at the interface and decreases exponentially away from it. It is named after the British seismologist Robert Stoneley (1894–1976), a lecturer at the University of Leeds, who discovered it on October 1, 1924. Occurrence and use Stoneley waves are most commonly generated during borehole sonic logging and vertical seismic profiling. They propagate along the walls of a fluid-filled borehole. They make up a large part of the low-frequency component of the signal from the seismic source, and their attenuation is sensitive to fractures and formation permeability. Recent studies have found that Stoneley wave processing in boreholes helps to distinguish fractured from non-fractured coal seams. Analysis of Stoneley waves can therefore make it possible to estimate these rock properties. The standard data processing of sonic logs to derive wave velocity and energy content is explained in the references. Comparison to other waves A number of wave modes have been predicted based on the fluidity of the medium. Effects of permeability Permeability can influence Stoneley wave propagation in three ways. First, Stoneley waves can be partly reflected at sharp impedance contrasts such as fractures, lithology changes, or borehole diameter changes. Second, as formation permeability increases, Stoneley wave velocity decreases, thereby inducing dispersion. The third effect is the attenuation of Stoneley waves. References Surface waves
Stoneley wave
Physics
335
19,039,923
https://en.wikipedia.org/wiki/HD%2016955
HD 16955, also known as HR 803, is a double or multiple star. With an apparent visual magnitude of 6.376, it lies at or below the nominal limit of visibility with the typical naked eye. The measured annual parallax shift is 9.59 milliarcseconds, which yields an estimated distance of around 340 light years. The star is moving closer to the Sun with a heliocentric radial velocity of around −10 km/s. This is an A-type main-sequence star with a stellar classification of A3 V. Hauck et al. (1995) identified it as a Lambda Boötis star with a circumstellar shell, but this now appears to be unlikely. It has 2.25 times the mass of the Sun and is spinning rapidly with a projected rotational velocity of 175 km/s. The star is radiating about 27 times the Sun's luminosity from its photosphere at an effective temperature of roughly 8,450 K. HD 16955 has a magnitude 10.36 companion, component B, which, as of 2015, is located at an angular separation of 3.0 arcseconds along a position angle of 19°. This companion is the likely source of the X-ray emission detected from these coordinates, since A-type stars are not expected to emit X-rays. Component C is a more distant, magnitude 12.94 companion located at a separation of 51.10 arcseconds along a position angle of 92°, as of 2015. References A-type main-sequence stars Lambda Boötis stars Double stars Aries (constellation) Durchmusterung objects 016955 012744 0803
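The distance figure quoted above follows from the standard parallax relation; a minimal Python sketch of the conversion (the light-year/parsec constant is a rounded standard value, not taken from the article):

```python
LY_PER_PC = 3.26156  # light-years per parsec (rounded standard value)

def parallax_to_light_years(parallax_mas):
    """Distance from an annual parallax: d [pc] = 1000 / parallax [mas]."""
    return (1000.0 / parallax_mas) * LY_PER_PC

print(round(parallax_to_light_years(9.59)))  # ~340, matching the article
```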
HD 16955
Astronomy
357
9,092,767
https://en.wikipedia.org/wiki/Alfred%20Kempe
Sir Alfred Bray Kempe FRS (6 July 1849 – 21 April 1922) was a mathematician best known for his work on linkages and the four colour theorem. Biography Kempe was the son of the Rector of St James's Church, Piccadilly, the Rev. John Edward Kempe. Among his brothers were Sir John Arrow Kempe and Harry Robert Kempe. He was educated at St Paul's School, London, and then studied at Trinity College, Cambridge, where Arthur Cayley was one of his teachers. He graduated BA (22nd wrangler) in 1872. Despite his interest in mathematics he became a barrister, specialising in ecclesiastical law. He was knighted in 1913, the same year he became Chancellor for the Diocese of London. He was also Chancellor of the dioceses of Newcastle, Southwell, St Albans, Peterborough, Chichester, and Chelmsford. He received the honorary degree DCL from the University of Durham, and he was elected a Bencher of the Inner Temple in 1909. In 1876 he published his article On a General Method of describing Plane Curves of the nth degree by Linkwork, which presented a procedure for constructing a linkage that traces an arbitrary algebraic plane curve. This was a remarkable generalization of his work on the design of linkages to trace straight lines. The direct connection between linkages and algebraic curves is now called Kempe's universality theorem. While Kempe's proposed proof was flawed, the first complete proof, based on his ideas, was provided in 2002. In 1877 Kempe discovered a new straight-line linkage called the quadruplanar inversor or Sylvester–Kempe inversor, and published his influential lectures on the subject. In 1879 Kempe wrote his famous "proof" of the four colour theorem, shown to be incorrect by Percy Heawood in 1890. Much later, his work led to fundamental concepts such as the Kempe chain and unavoidable sets. Kempe's paper of 1886 revealed a rather marked philosophical bent, and it much influenced Charles Sanders Peirce. Kempe also discovered what are now called multisets, although this fact was not noted until long after his death. Kempe was elected a fellow of the Royal Society in 1881. He was Treasurer and Vice-President of the Royal Society from 1899 to 1919, and president of the London Mathematical Society from 1892 to 1894. He was also a mountain climber, mostly in Switzerland. His first wife was Mary, daughter of Sir William Bowman, 1st Baronet; she died in 1893. In 1897 he married Ida, daughter of Judge Meadows White, QC. He had two sons and one daughter. References External links From the Cornell University archives: A. B. Kempe (1877) How to draw a straight line; a lecture on linkages, London: Macmillan and Co. Found at Project Gutenberg: A. B. Kempe (1877) How to draw a straight line; a lecture on linkages, London: Macmillan and Co. Examples of Kempe's Universality Theorem, Mechanical computation and algebraic curves Automatic generation of Kempe Linkages for Algebraic Curves. 19th-century English mathematicians 20th-century English mathematicians 1849 births 1922 deaths Graph theorists Fellows of the Royal Society Alumni of Trinity College, Cambridge English mountain climbers
Alfred Kempe
Mathematics
666
71,514,674
https://en.wikipedia.org/wiki/Ant%20supercolony
An ant supercolony is an exceptionally large ant colony, consisting of a high number of spatially separated but socially connected nests of a single ant species (meaning that the colony is polydomous), spread over a large area without territorial borders. Supercolonies are typically polygynous, containing many egg-laying females (queens or gynes). Workers and queens from different nests within the same supercolony can move freely among the nests, and all workers cooperate indiscriminately with each other in collecting food and caring for the brood, showing no apparent mutual aggression. As long as suitable unoccupied space with sufficient resources is available, supercolonies expand continuously through budding, as queens together with some workers migrate over short distances and establish a new connected nest. A supercolony can also expand over long distances through jump-dispersal, potentially even between continents; jump-dispersal usually occurs unintentionally through human-mediated transport. A striking example of an ant species forming supercolonies across continents is the Argentine ant (Linepithema humile). The highly invasive red imported fire ant (Solenopsis invicta) and Solenopsis geminata additionally use classic mating flights, giving them three primary modes of dispersal. Supercolonialism is found in less than 1% of the roughly 14,000 described ant species. In general, ants that form supercolonies are invasive and harmful in non-native environments. While not all supercolonial species are invasive and not all invasive ants are dominant, supercolonies are usually associated with invasive populations. Some invasive species are known to form supercolonies in their native habitat as well. In their native range, relatively small supercolonies are observed, whereas in their invasive range they are much larger, dominant and a threat to ecological diversity. Exceptions, species that form supercolonies without being invasive, are mainly found in the genus Formica. Although supercolonies are mainly observed in relatively few ant species, similar unicolonial populations are also found in some species of the termite genus Reticulitermes. Unicoloniality versus supercoloniality Initially, it was hypothesized that unicoloniality is a characteristic of certain ant species in which all workers of that species are amicable, whatever their nest of origin: all members of the species would accept each other, irrespective of the nest of origin and of the distance between nests. In contrast, multicoloniality is the common characteristic of ants whereby all colonies are aggressive to each other, including different colonies of the same species. A supercolony would be a large aggregation of nests of a species that would normally exhibit multicoloniality, but in which all workers from all connected nests are non-aggressive to each other. The Argentine ant (Linepithema humile), forming megacolonies of spatially separate nests, was thought to be a perfect example of unicoloniality, never exhibiting multicoloniality. Giraud et al. (2002), however, discovered that L. humile also forms supercolonies that are aggressive to each other, so unicoloniality turned out to be limited. 
They hypothesized that the difference between supercoloniality and unicoloniality is not clear-cut, but that these are rather points on a continuum between two extremes: multicoloniality, with all colonies generally being aggressive to each other, contrasted with unicoloniality, with an absolute absence of aggression between colonies, and supercoloniality somewhere in between. Therefore, Pedersen et al. (2006) redefined supercoloniality and unicoloniality as follows: Supercolony: A colony that contains such a large number of nests that direct cooperative interactions are impossible between individuals in distant nests. There are no behavioral boundaries (aggression) within the supercolony. Unicolonial: A unicolonial species is one that can form supercolonies. A unicolonial population consists of one or several supercolonies. They suggest that the success of invasive ants such as L. humile stems from ecological conditions in the introduced range that allow the dimensions of supercolonies to extend dramatically, rather than from a shift in social organization in the invaded habitat. Supercolonies in termites Similar unicolonial populations are also found in some species of the termite genus Reticulitermes. In France, a supercolony of the invasive termite species Reticulitermes urbis was observed covering about seven hectares, similar to an ant supercolony. Invasive unicolonial metapopulations of Reticulitermes flavipes in Toronto, Canada, were described in 2012. They can cover tens of kilometers, number hundreds of thousands or millions of individuals, and show a lack of intercolony aggression. Especially in urban habitats they form area-wide supercolonies. Examples Species known to form supercolonies are: (see also the list on AntWiki) Anoplolepis gracilipes (yellow crazy ant) Formica Formica aquilonia Formica exsecta Formica obscuripes Formica polyctena Formica yessensis Iridomyrmex purpureus (meat ant) Lasius neglectus Lepisiota Lepisiota canescens Lepisiota frauenfeldi Lepisiota incisa Linepithema humile (Argentine ant) Monomorium pharaonis Myrmica rubra Nylanderia fulva (tawny crazy ant) Paratrechina longicornis Pheidole megacephala Plagiolepis Plagiolepis alluaudi (little yellow ant) Plagiolepis invadens Plagiolepis pygmaea Plagiolepis schmitzii Polyrhachis robsoni Pseudomyrmex veneficus Solenopsis Solenopsis geminata (tropical fire ant) Solenopsis invicta (red imported fire ant, or RIFA) Solenopsis richteri (black imported fire ant, or BIFA) Solenopsis saevissima (pest fire ant) Tapinoma Tapinoma darioi Tapinoma ibericum Tapinoma magnum Tapinoma sessile Technomyrmex albipes Tetramorium alpestre Vollenhovia emeryi Wasmannia auropunctata Formica A supercolony of Formica obscuripes in the US state of Oregon, consisting of more than 200 nests and an estimated population of 56 million individuals, was described in 1997. The Ishikari supercolony of Formica yessensis on Hokkaido, Japan, comprises an estimated total of more than 45,000 nests, more than 300,000,000 workers and more than 1,000,000 queens. Tapinoma Three of the four species identified in the 'Tapinoma nigerrimum complex' are known to be supercolonial: T. darioi, T. ibericum and T. magnum. Tapinoma nigerrimum itself is monodomous to moderately polydomous and multicolonial, and supercoloniality in it is unknown. Tapinoma sessile lives in small colonies in its natural habitat. Having invaded urban areas, it exhibits extreme polygyny and polydomy and becomes a dominant invasive pest. 
Depending on the season, the nests of a colony may alternately fuse into one or a few nests in winter and multiply from spring onward, reaching maximum nest density in summer. Their early-season population growth is exponential. In general, T. sessile colonies relocate on a regular basis. They establish trails between nests and food resources and trails to colonise new areas. References Supercolony Superorganisms
Ant supercolony
Biology
1,657
54,560,830
https://en.wikipedia.org/wiki/International%20System%20for%20Human%20Cytogenomic%20Nomenclature
The International System for Human Cytogenomic Nomenclature (ISCN; previously the International System for Human Cytogenetic Nomenclature) is an international standard for human chromosome nomenclature, which includes band names, symbols, and abbreviated terms used in the description of human chromosomes and chromosome abnormalities. The ISCN has been used as the central reference among cytogeneticists since 1960. Abbreviations in this system include a minus sign (-) for the loss of a whole chromosome and del for deletions of parts of a chromosome. Revision history ISCN (2024). S. Karger Publishing. ISCN (2020). S. Karger Publishing. ISCN (2016). S. Karger Publishing. ISCN (2013). S. Karger Publishing. ISCN (2009). S. Karger Publishing. ISCN (2005). S. Karger Publishing. ISCN (1995). S. Karger Publishing. ISCN (1991). S. Karger Publishing. ISCN (1985). S. Karger Publishing. ISCN (1981). S. Karger Publishing. ISCN (1978). S. Karger Publishing. Paris Conference (1971): "Standardization in Human Cytogenetics." (PDF) Birth Defects: Original Article Series, Vol 8, No 7 (The National Foundation, New York 1972) Chicago Conference (1966): "Standardization in Human Cytogenetics." Birth Defects: Original Article Series, Vol 2, No 2 (The National Foundation, New York 1966). London Conference (1963): "London Conference on the Normal Human Karyotype." Cytogenetics 2:264–268 (1963) Denver Conference (1960): "A proposed standard system of nomenclature of human mitotic chromosomes." The Lancet 275.7133 (1960): 1063-1065. See also Locus (genetics) Cytogenetic notation References External links About the ISCN recommendations - Human Genome Variation Society Cytogenetics Biological nomenclature
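As an illustration of the two abbreviations just mentioned, here is a hedged Python sketch; the toy karyotype strings and the simplistic regular expressions are assumptions for demonstration only and fall far short of a full ISCN parser.

```python
import re

# Toy matchers for the two ISCN abbreviations described above.
WHOLE_CHROM_LOSS = re.compile(r"-(\d{1,2}|X|Y)")        # e.g. "45,XX,-21"
PARTIAL_DELETION = re.compile(r"del\((\d{1,2}|X|Y)\)")  # e.g. "46,XX,del(5)(q13)"

for karyotype in ("45,XX,-21", "46,XX,del(5)(q13)"):
    if (loss := WHOLE_CHROM_LOSS.search(karyotype)):
        print(karyotype, "-> loss of whole chromosome", loss.group(1))
    if (part := PARTIAL_DELETION.search(karyotype)):
        print(karyotype, "-> deletion within chromosome", part.group(1))
```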
International System for Human Cytogenomic Nomenclature
Biology
435
48,681,017
https://en.wikipedia.org/wiki/A-971432
A-971432 is an orally bioavailable selective agonist of sphingosine-1-phosphate receptor 5 (S1PR5) discovered at AbbVie. It was discovered using high-throughput chemistry. S1P5 agonists have been proposed as an innovative mechanism for the treatment of neurodegenerative disorders such as Alzheimer's disease and lysosomal storage disorders such as Niemann–Pick disease. Stimulation of S1PR5 with A-971432 has been shown to preserve blood-brain barrier integrity and exert a therapeutic effect in an animal model of Huntington's disease. References Further reading Azetidines Carboxylic acids Phenol ethers Chloroarenes
A-971432
Chemistry
155
2,126,864
https://en.wikipedia.org/wiki/Alpheus%20Spring%20Packard
Alpheus Spring Packard Jr. LL.D. (February 19, 1839 – February 14, 1905) was an American entomologist and palaeontologist. He described over 500 new animal species – especially butterflies and moths – and was one of the founders of The American Naturalist. Early life He was the son of Alpheus Spring Packard Sr. (1798–1884) and the brother of William Alfred Packard. He was born in Brunswick, Maine, and was Professor of Zoology and Geology at Brown University in Providence, Rhode Island, from 1878 until his death. He was a vocal proponent of Neo-Lamarckism during the eclipse of Darwinism. Career His chief work was on the classification and anatomy of arthropods, with contributions to economic entomology, zoogeography, and the phylogeny and metamorphoses of insects. Packard was appointed to the United States Entomological Commission in 1877, where he served with Charles Valentine Riley and Cyrus Thomas. He wrote school textbooks, such as Zoölogy for High Schools and Colleges (eleventh edition, 1904). His Monograph of the Bombycine Moths of North America was published in three parts (1895, 1905, 1915, edited by T. D. A. Cockerell). He was elected a member of the American Philosophical Society in 1878. Personal life He married Elizabeth Darby Walcott, daughter of Samuel B. Walcott, in October 1867 in Salem, Massachusetts. They had four children: Martha Walcott, Alpheus Appleton, Elizabeth Darby, and Frances Elizabeth; Elizabeth Darby died at the age of eight. He died on February 14, 1905, in Providence, Rhode Island, survived by his wife and children. Writings Report on the insects collected on the Penobscot and Alleguash Rivers, during August and September, 1861, Sixth Annual Report of the Secretary of the Maine Board of Agriculture, Augusta, Maine (pp. 373-376) (1861) Guide to the Study of Insects (1869; third edition, 1872) The Mammoth Cave and its Inhabitants (1872), with F. W. Putnam Life-History of Animals (1876) A Naturalist on the Labrador Coast (1891) Lamarck, the Founder of Evolution: His Life and Work (1901), French translation, 1903. Notes References External links The entomological writings of Dr. Alpheus Spring Packard Gallica Two works by Packard Brunoniana Biography Nomina circumscribentia insectorum On the phylogeny of the Lepidoptera. Zoologischer Anzeiger, 18 (465): 228-236 1895. 1839 births 1905 deaths American lepidopterists American naturalists American science writers Harvard University alumni Writers from Brunswick, Maine Appleton family Bowdoin College alumni Lamarckism Brown University faculty
Alpheus Spring Packard
Biology
575
36,623,707
https://en.wikipedia.org/wiki/Trimethylene%20carbonate
Trimethylene carbonate, or 1,3-propylene carbonate, is a 6-membered cyclic carbonate ester. It is a colourless solid that upon heating or catalytic ring-opening converts to poly(trimethylene carbonate) (PTMC). Such polymers are called aliphatic polycarbonates and are of interest for potential biomedical applications. An isomeric derivative is propylene carbonate, a colourless liquid that does not spontaneously polymerize. Preparation This compound may be prepared from 1,3-propanediol and ethyl chloroformate (a phosgene substitute), or from oxetane and carbon dioxide with an appropriate catalyst: HOC3H6OH + ClCO2C2H5 → C3H6O2CO + C2H5OH + HCl C3H6O + CO2 → C3H6O2CO This cyclic carbonate undergoes ring-opening polymerization to give poly(trimethylene carbonate), abbreviated PTMC. Medical devices The polymer PTMC is of commercial interest as a biodegradable polymer with biomedical applications. A block copolymer of glycolic acid and trimethylene carbonate (TMC) is the material of the Maxon suture, a monofilament resorbable suture introduced in the mid-1980s. The same material is used in other resorbable medical devices. See also Ethylene carbonate References Carbonate esters Monomers
Trimethylene carbonate
Chemistry,Materials_science
311
59,611
https://en.wikipedia.org/wiki/Ionization
Ionization (or ionisation, the spelling used in Britain, Ireland, Australia and New Zealand) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules, electrons, positrons, protons, antiprotons and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons, causing it to be ejected. Uses Everyday examples of gas ionization occur in fluorescent lamps and other electrical discharge lamps. It is also used in radiation detectors such as the Geiger–Müller counter and the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in medical treatment (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application. Production of ions Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization. Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold energy required is known as the ionization potential. The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vectors of all collision fragments (the scattered projectile, the recoiling target ion, and the ejected electron) are determined, have contributed to major advances in the theoretical understanding of the few-body problem in recent years. Adiabatic ionization Adiabatic ionization is a form of ionization in which an electron is removed from or added to an atom or molecule in its lowest energy state to form an ion in its lowest energy state. The Townsend discharge is a good example of the creation of positive ions and free electrons due to ion impact. It is a cascade reaction involving electrons in a region with a sufficiently high electric field in a gaseous medium that can be ionized, such as air. Following an original ionization event, due, for example, to ionizing radiation, the positive ion drifts towards the cathode, while the free electron drifts towards the anode of the device. If the electric field is strong enough, the free electron gains sufficient energy to liberate a further electron when it next collides with another molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause impact ionization when the next collisions occur, and so on. This is effectively a chain reaction of electron generation, and is dependent on the free electrons gaining sufficient energy between collisions to sustain the avalanche. 
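For a uniform field, this qualitative picture corresponds to the standard exponential avalanche-growth law governed by the first Townsend coefficient α (ionizations per unit length). The following minimal Python sketch illustrates the law; the coefficient value and gap width are illustrative assumptions, not figures from this article.

```python
import math

def avalanche_size(n0, alpha_per_cm, gap_cm):
    """Electron number after an avalanche across a uniform-field gap:
    n(d) = n0 * exp(alpha * d), with alpha the first Townsend coefficient."""
    return n0 * math.exp(alpha_per_cm * gap_cm)

# One seed electron, an assumed alpha of 10 ionizations/cm, and a 1 cm gap:
print(f"{avalanche_size(1, 10.0, 1.0):.0f} electrons")  # ~22026
```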
Ionization efficiency is the ratio of the number of ions formed to the number of electrons or photons used. Ionization energy of atoms The trend in the ionization energy of atoms is often used to demonstrate the periodic behavior of atoms with respect to the atomic number, as summarized by ordering atoms in Mendeleev's table. This is a valuable tool for establishing and understanding the ordering of electrons in atomic orbitals without going into the details of wave functions or the ionization process. An example is presented in the figure to the right. The periodic abrupt decrease in ionization potential after rare gas atoms, for instance, indicates the emergence of a new shell in alkali metals. In addition, the local maxima in the ionization energy plot, moving from left to right in a row, are indicative of the s, p, d, and f sub-shells. Semi-classical description of ionization Classical physics and the Bohr model of the atom can qualitatively explain photoionization and collision-mediated ionization. In these cases, during the ionization process, the energy of the electron exceeds the energy difference of the potential barrier it is trying to pass. The classical description, however, cannot describe tunnel ionization, since that process involves the passage of the electron through a classically forbidden potential barrier. Quantum mechanical description of ionization The interaction of atoms and molecules with sufficiently strong laser pulses or with other charged particles leads to ionization to singly or multiply charged ions. The ionization rate, i.e. the ionization probability per unit time, can be calculated using quantum mechanics. (Classical methods are also available, such as the Classical Trajectory Monte Carlo (CTMC) method, but they are not universally accepted and are often criticized by the community.) Two classes of quantum mechanical methods exist: perturbative methods, and non-perturbative methods such as time-dependent coupled-channel or time-independent close-coupling methods, in which the wave function is expanded in a finite basis set. Numerous basis choices are available, e.g. B-splines, generalized Sturmians or Coulomb wave packets. Another non-perturbative method is to solve the corresponding Schrödinger equation fully numerically on a lattice. In general, analytic solutions are not available, and the approximations required for manageable numerical calculations do not provide sufficiently accurate results. However, when the laser intensity is sufficiently high, the detailed structure of the atom or molecule can be ignored and an analytic solution for the ionization rate is possible. Tunnel ionization Tunnel ionization is ionization due to quantum tunneling. In classical ionization, an electron must have enough energy to make it over the potential barrier, but quantum tunneling allows the electron simply to pass through the potential barrier instead of going all the way over it, because of the wave nature of the electron. The probability of an electron tunneling through the barrier drops off exponentially with the width of the potential barrier. Therefore, an electron with a higher energy can make it further up the potential barrier, leaving a much thinner barrier to tunnel through and thus a greater chance of doing so. In practice, tunnel ionization is observable when the atom or molecule is interacting with near-infrared strong laser pulses. This process can be understood as one by which a bound electron is ionized through the absorption of more than one photon from the laser field. 
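The boundary between the multiphoton and tunneling regimes discussed here and below is conventionally characterized by the Keldysh parameter $\gamma = \omega\sqrt{2E_i}/F$ (in atomic units), with $\omega$ the laser frequency, $E_i$ the ionization potential and $F$ the peak electric field; $\gamma \gg 1$ indicates the multiphoton regime and $\gamma \ll 1$ the tunneling regime. A minimal Python sketch follows; the unit-conversion constants and the xenon/Ti:Sapphire example numbers are assumptions of the sketch rather than values taken from this article.

```python
import math

# Rounded conversion constants (assumptions of this sketch)
HARTREE_EV = 27.2114   # 1 hartree in eV
I_AU = 3.509e16        # atomic unit of intensity, W/cm^2
EV_NM = 1239.84        # photon energy (eV) = EV_NM / wavelength (nm)

def keldysh_parameter(ionization_potential_ev, wavelength_nm, intensity_w_cm2):
    """Keldysh parameter gamma = omega * sqrt(2 * E_i) / F, in atomic units."""
    e_i = ionization_potential_ev / HARTREE_EV      # ionization potential, a.u.
    omega = (EV_NM / wavelength_nm) / HARTREE_EV    # laser frequency, a.u.
    field = math.sqrt(intensity_w_cm2 / I_AU)       # peak electric field, a.u.
    return omega * math.sqrt(2.0 * e_i) / field

# Xenon (E_i ~ 12.13 eV) in an 800 nm Ti:Sapphire pulse at 1e14 W/cm^2:
print(f"gamma = {keldysh_parameter(12.13, 800.0, 1e14):.2f}")  # ~1, intermediate regime
```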
This picture of ionization through the absorption of several photons is generally known as multiphoton ionization (MPI). Keldysh modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states. In this model the perturbation of the ground state by the laser field is neglected, and the details of atomic structure are not taken into account in determining the ionization probability. The major difficulty with Keldysh's model was its neglect of the effects of the Coulomb interaction on the final state of the electron. As can be seen in the figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser in regions near the nucleus. Perelomov et al. included the Coulomb interaction at larger internuclear distances. Their model (known as the PPT model) was derived for a short-range potential and includes the effect of the long-range Coulomb interaction through a first-order correction in the quasi-classical action. Larochelle et al. have compared the theoretically predicted ion-versus-intensity curves of rare gas atoms interacting with a Ti:Sapphire laser with experimental measurements. They have shown that the total ionization rate predicted by the PPT model fits the experimental ion yields very well for all rare gases in the intermediate regime of the Keldysh parameter. The rate of MPI of an atom with ionization potential $E_i$ in a linearly polarized laser field of frequency $\omega$ is governed by the Keldysh parameter $\gamma = \omega\sqrt{2E_i}/F$, where $F$ is the peak electric field of the laser; the remaining coefficients of the PPT rate formula depend on $\gamma$ and on the effective quantum numbers of the initial state. Quasi-static tunnel ionization Quasi-static tunneling (QST) is ionization whose rate can be satisfactorily predicted by the ADK model, i.e. by the limit of the PPT model as $\gamma$ approaches zero. Compared with the PPT rate, the absence in the ADK rate of the summation over $n$, whose terms represent the different above-threshold ionization (ATI) peaks, is notable. Strong field approximation for the ionization rate The calculations of PPT are done in the E-gauge, meaning that the laser field is treated as an electromagnetic wave. The ionization rate can also be calculated in the A-gauge, which emphasizes the particle nature of light (absorption of multiple photons during ionization). This approach was adopted in the Krainov model, based on the earlier works of Faisal and Reiss. The resulting rate is expressed in terms of the ponderomotive energy $U_p$, the minimum number of photons necessary to ionize the atom, a generalized (double) Bessel function, the angle between the momentum of the electron, $\mathbf{p}$, and the electric field of the laser, $\mathbf{F}$, a three-dimensional Fourier transform, and a factor that incorporates the Coulomb correction into the SFA model. Population trapping In calculating the rate of MPI of atoms, only transitions to continuum states are considered. Such an approximation is acceptable as long as there is no multiphoton resonance between the ground state and some excited state. However, in the real situation of interaction with pulsed lasers, during the evolution of the laser intensity, the different Stark shifts of the ground and excited states create the possibility that some excited state goes into multiphoton resonance with the ground state. Within the dressed-atom picture, the ground state dressed by photons and the resonant state undergo an avoided crossing at the resonance intensity. 
The minimum energy gap at the avoided crossing is proportional to the generalized Rabi frequency coupling the two states. According to Story et al., the probability of remaining in the ground state is determined by the time-dependent energy difference between the two dressed states as the pulse sweeps through the crossing. In interaction with a short pulse, if the dynamic resonance is reached in the rising or the falling part of the pulse, the population practically remains in the ground state and the effect of multiphoton resonances may be neglected. However, if the states come into resonance at the peak of the pulse, then the excited state is populated. After being populated, since the ionization potential of the excited state is small, the electron is expected to be instantly ionized. In 1992, de Boer and Muller showed that Xe atoms subjected to short laser pulses could survive in the highly excited states 4f, 5f, and 6f. These states were believed to have been excited by the dynamic Stark shift of the levels into multiphoton resonance with the field during the rising part of the laser pulse. The subsequent evolution of the laser pulse did not completely ionize these states, leaving behind some highly excited atoms. We shall refer to this phenomenon as "population trapping". Theoretical calculations show that incomplete ionization occurs whenever there is parallel resonant excitation into a common level with ionization loss. Consider a state such as 6f of Xe, which consists of seven quasi-degenerate levels within the laser bandwidth. These levels, together with the continuum, constitute a lambda system. The mechanism of lambda-type trapping is presented schematically in the figure. At the rising part of the pulse (a) the excited state (with two degenerate levels 1 and 2) is not in multiphoton resonance with the ground state. The electron is ionized through multiphoton coupling with the continuum. As the intensity of the pulse increases, the excited state and the continuum are shifted in energy due to the Stark shift. At the peak of the pulse (b) the excited states go into multiphoton resonance with the ground state. As the intensity starts to decrease (c), the two states are coupled through the continuum and the population is trapped in a coherent superposition of the two states. Under the subsequent action of the same pulse, owing to interference in the transition amplitudes of the lambda system, the field cannot ionize the population completely, and a fraction of the population remains trapped in a coherent superposition of the quasi-degenerate levels. By this explanation, states with higher angular momentum, which have more sublevels, would have a higher probability of trapping the population. In general, the strength of the trapping is determined by the strength of the two-photon coupling between the quasi-degenerate levels via the continuum. In 1996, using a very stable laser and minimizing the masking effects of the expansion of the focal region with increasing intensity, Talebpour et al. observed structures in the curves of singly charged ions of Xe, Kr and Ar. These structures were attributed to electron trapping in the strong laser field. A more unambiguous demonstration of population trapping was reported by T. Morishita and C. D. Lin. Non-sequential multiple ionization The phenomenon of non-sequential ionization (NSI) of atoms exposed to intense laser fields has been a subject of many theoretical and experimental studies since 1983. 
The pioneering work began with the observation of a "knee" structure in the Xe2+ ion signal versus intensity curve by L'Huillier et al. From the experimental point of view, NS double ionization refers to processes which somehow enhance the rate of production of doubly charged ions by a huge factor at intensities below the saturation intensity of the singly charged ion. Many, on the other hand, prefer to define NSI as a process by which two electrons are ionized nearly simultaneously. This definition implies that, apart from the sequential channel, there is another channel which provides the main contribution to the production of doubly charged ions at lower intensities. The first observation of triple NSI in argon interacting with a 1 μm laser was reported by Augst et al. Later, in a systematic study of the NSI of all rare gas atoms, the quadruple NSI of Xe was observed. The most important conclusion of this study was the observation of a simple relation between the rate of NSI to any charge state and the rate of tunnel ionization (predicted by the ADK formula) to the previous charge states: the NSI rate is proportional to the quasi-static tunneling rate to the previous charge state, with proportionality constants that depend on the wavelength of the laser but not on the pulse duration. Two models have been proposed to explain non-sequential ionization: the shake-off model and the electron re-scattering model. The shake-off (SO) model, first proposed by Fittinghoff et al., is adopted from the field of ionization of atoms by X-rays and electron projectiles, where the SO process is one of the major mechanisms responsible for the multiple ionization of atoms. The SO model describes the NSI process as a mechanism in which one electron is ionized by the laser field and its departure is so rapid that the remaining electrons do not have enough time to adjust themselves to the new energy states. Therefore, there is a certain probability that, after the ionization of the first electron, a second electron is excited to states with higher energy (shake-up) or even ionized (shake-off). It should be mentioned that, to date, there has been no quantitative calculation based on the SO model, and the model remains qualitative. The electron re-scattering model was independently developed by Kuchiev, by Schafer et al., by Corkum, by Becker and Faisal, and by Faisal and Becker. The principal features of the model can be understood easily from Corkum's version, which describes NS ionization as a process whereby an electron is tunnel ionized. The electron then interacts with the laser field, which accelerates it away from the nuclear core. If the electron has been ionized at an appropriate phase of the field, it will pass by the position of the remaining ion half a cycle later, where it can free an additional electron by electron impact. Only half of the time is the electron released with the appropriate phase; the other half of the time it never returns to the nuclear core. The maximum kinetic energy that the returning electron can have is 3.17 times the ponderomotive potential of the laser. Corkum's model places a cut-off limit on the minimum intensity (the ponderomotive potential is proportional to intensity) at which ionization due to re-scattering can occur. The re-scattering model in Kuchiev's version (Kuchiev's model) is quantum mechanical. The basic idea of the model is illustrated by the Feynman diagrams in figure a. First both electrons are in the ground state of the atom; the lines marked a and b describe the corresponding atomic states. 
Then electron a is ionized; the beginning of the ionization process is shown by the intersection with a sloped dashed line, where the MPI occurs. The propagation of the ionized electron in the laser field, during which it absorbs other photons (ATI), is shown by the full thick line. The collision of this electron with the parent atomic ion is shown by a vertical dotted line representing the Coulomb interaction between the electrons. The state marked with c describes the ion excitation to a discrete or continuum state. Figure b describes the exchange process. Kuchiev's model, contrary to Corkum's model, does not predict any threshold intensity for the occurrence of NS ionization. Kuchiev did not include the Coulomb effects on the dynamics of the ionized electron, which resulted in the underestimation of the double ionization rate by a huge factor. Obviously, in the approach of Becker and Faisal (which is equivalent to Kuchiev's model in spirit), this drawback does not exist. In fact, their model is more exact and does not suffer from the large number of approximations made by Kuchiev. Their calculated results fit the experimental results of Walker et al. perfectly, and Becker and Faisal have been able to fit the experimental results on the multiple NSI of rare gas atoms using their model. As a result, electron re-scattering can be taken as the main mechanism for the occurrence of the NSI process. Multiphoton ionization of inner-valence electrons and fragmentation of polyatomic molecules The ionization of inner-valence electrons is responsible for the fragmentation of polyatomic molecules in strong laser fields. According to a qualitative model, the dissociation of the molecules occurs through a three-step mechanism: MPI of electrons from the inner orbitals of the molecule, which results in a molecular ion in ro-vibrational levels of an excited electronic state; rapid radiationless transition to the high-lying ro-vibrational levels of a lower electronic state; and subsequent dissociation of the ion into different fragments through various fragmentation channels. Short-pulse-induced molecular fragmentation may be used as an ion source for high-performance mass spectrometry. The selectivity provided by a short-pulse-based source is superior to that expected when using conventional electron-ionization-based sources, in particular when the identification of optical isomers is required. Kramers–Henneberger frame The Kramers–Henneberger (KH) frame is the non-inertial frame moving with the free electron under the influence of the harmonic laser pulse, obtained by applying a translation to the laboratory frame equal to the quiver motion of a classical electron in the laboratory frame. In other words, in the Kramers–Henneberger frame the classical electron is at rest. Starting in the lab frame (velocity gauge), the electron may be described by the Hamiltonian (in atomic units) $H = \tfrac{1}{2}\left(\mathbf{p} + \mathbf{A}(t)\right)^2 + V(\mathbf{r})$. In the dipole approximation, the quiver motion of a classical electron in the laboratory frame for an arbitrary field can be obtained from the vector potential of the electromagnetic field, $\boldsymbol{\alpha}(t) = \int^{t} \mathbf{A}(t')\,\mathrm{d}t'$, which for a monochromatic plane wave reduces to an oscillation of amplitude $\alpha_0 = F_0/\omega^2$. By applying a transformation to the laboratory frame equal to the quiver motion one moves to the 'oscillating' or 'Kramers–Henneberger' frame, in which the classical electron is at rest. 
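To give a sense of scale for the quiver motion just defined, the following Python sketch evaluates the excursion amplitude α₀ = F₀/ω²; the conversion constants and the 800 nm, 10¹⁴ W/cm² example are assumptions of the sketch, not values from this article.

```python
import math

# Rounded conversion constants (assumptions of this sketch)
I_AU = 3.509e16        # atomic unit of intensity, W/cm^2
EV_NM = 1239.84        # photon energy (eV) = EV_NM / wavelength (nm)
HARTREE_EV = 27.2114   # 1 hartree in eV
BOHR_NM = 0.0529177    # Bohr radius in nm

def excursion_amplitude_au(wavelength_nm, intensity_w_cm2):
    """Classical quiver (excursion) amplitude alpha0 = F0 / omega**2, atomic units."""
    omega = (EV_NM / wavelength_nm) / HARTREE_EV   # laser frequency, a.u.
    f0 = math.sqrt(intensity_w_cm2 / I_AU)         # peak electric field, a.u.
    return f0 / omega**2

a0 = excursion_amplitude_au(800.0, 1e14)
print(f"alpha0 = {a0:.1f} a.u. = {a0 * BOHR_NM:.2f} nm")
# ~16 a.u. (~0.9 nm): many times the atomic size, which is why the
# cycle-averaged ("dressed") potential differs strongly from the static one.
```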
By a phase-factor transformation for convenience one obtains the 'space-translated' Hamiltonian, which is unitarily equivalent to the lab-frame Hamiltonian and contains the original potential centered on the oscillating point $\boldsymbol{\alpha}(t)$: $H_{KH} = \frac{\mathbf{p}^{2}}{2m} + V\left(\mathbf{r} + \boldsymbol{\alpha}(t)\right)$. The utility of the KH frame lies in the fact that in this frame the laser–atom interaction can be reduced to the form of an oscillating potential energy, where the natural parameters describing the electron dynamics are $\omega$ and $\alpha_{0}$ (sometimes called the 'excursion amplitude', obtained from $\alpha_{0} = eE_{0}/(m\omega^{2})$). From here one can apply Floquet theory to calculate quasi-stationary solutions of the TDSE. In high-frequency Floquet theory, to lowest order in $1/\omega$ the system reduces to the so-called 'structure equation', which has the form of a typical energy-eigenvalue Schrödinger equation containing the 'dressed potential' $V_{0}(\alpha_{0}, \mathbf{r})$ (the cycle-average of the oscillating potential). The interpretation of the presence of $V_{0}$ is as follows: in the oscillating frame, the nucleus has an oscillatory motion of trajectory $-\boldsymbol{\alpha}(t)$, and $V_{0}$ can be seen as the potential of the smeared-out nuclear charge along its trajectory. The KH frame is thus employed in theoretical studies of strong-field ionization and atomic stabilization (a predicted phenomenon in which the ionization probability of an atom in a high-intensity, high-frequency field actually decreases for intensities above a certain threshold) in conjunction with high-frequency Floquet theory. The KH frame has also been applied successfully to other problems, e.g. higher-harmonic generation from a metal surface in a powerful laser field. Dissociation – distinction A substance may dissociate without necessarily producing ions. As an example, the molecules of table sugar dissociate in water (sugar is dissolved) but exist as intact neutral entities. Another subtle event is the dissociation of sodium chloride (table salt) into sodium and chlorine ions. Although it may seem to be a case of ionization, in reality the ions already exist within the crystal lattice. When salt is dissociated, its constituent ions are simply surrounded by water molecules and their effects are visible (e.g. the solution becomes electrolytic); however, no transfer or displacement of electrons occurs. See also Above threshold ionization Double ionization Chemical ionization Electron ionization Ionization chamber – Instrument for detecting gaseous ionization, used in ionizing radiation measurements Ion source Photoionization Thermal ionization Townsend avalanche – The chain reaction of ionization occurring in a gas with an applied electric field Poole–Frenkel effect Table References External links Ions Molecular physics Atomic physics Physical chemistry Quantum chemistry Mass spectrometry
Ionization
Physics,Chemistry
4,774
75,004,059
https://en.wikipedia.org/wiki/Susana%20F.%20Huelga
Susana F. Huelga is a Spanish physicist and Professor at the Institute of Theoretical Physics of Ulm University. She is notable for her contributions to the field of quantum information theory. These include quantum metrology in the presence of Markovian and non-Markovian environments, the theory of open quantum systems, numerical methods for their description in the presence of structured environments, the characterization, quantification and detection of non-Markovianity, and fundamental contributions to quantum effects in biological systems. Education She obtained her MSc in 1990 and her doctorate in 1995 in physics from the Universidad de Oviedo, where she worked with Miguel Ferrero and Emilio Santos. Her thesis, Optical experiments for the study of fundamental quantum properties, consisted of two parts, each proposing an optical experiment. The first part is a proposal for an experiment to test "Bell's inequality capable of closing the existing loopholes in the atomic cascade experiments already carried out". The second part is a proposal for testing the Leggett–Garg inequality in an experiment. Career After finishing her doctorate she held a postdoctoral position at the Clarendon Laboratory of Oxford University from 1996 to 1997, followed by a position as Profesor Titular at the Universidad de Oviedo. She joined the faculty of the Department of Physics, Astronomy and Mathematics of the University of Hertfordshire as a Lecturer in 2000 and became a Reader in 2008. In October 2009 she accepted a professorship at the Institute of Theoretical Physics of Universität Ulm, where she is still working. Personal She is married to the physicist Martin Bodo Plenio. References External links at the Institute of Theoretical Physics of Ulm University Spanish physicists Living people 21st-century women physicists Quantum physicists Year of birth missing (living people)
Susana F. Huelga
Physics
351
56,084,743
https://en.wikipedia.org/wiki/3D%20selfie
A 3D selfie is a 3D-printed scale replica of a person or their face. These three-dimensional selfies are also known as 3D portraits, 3D figurines, 3D-printed figurines, mini-me figurines and miniature statues. In 2014 the first 3D-printed bust of a U.S. president, Barack Obama, was made; 3D-digital-imaging specialists used handheld 3D scanners to create an accurate representation of the president. Description The capture of a subject as a 3D model can be accomplished in many ways. One of the methods is called photogrammetry. Many systems use one or more digital cameras to take 2D pictures of the subject, under normal lighting, under projected light patterns, or a combination of these. Inexpensive systems use a single camera which is moved around the subject in 360° at various heights, over minutes, while the subject stays immobile. More elaborate systems rotate a vertical bar of cameras around the subject, usually achieving a full scan in 10 seconds. The most expensive systems use an enclosed 3D photo booth with 50 to 100 cameras statically embedded in the walls and ceiling, firing all at once, eliminating differences in image capture caused by movements of the subject. A piece of software then reconstructs a 3D model of the subject from these pictures. One 3D photo booth which creates life-like portraits is called the Veronica Chorographic Scanner. The scanner participated in a project of the Royal Academy of Arts, where people could have themselves scanned; it utilized eight cameras taking 96 photographs of a person from every angle. Photogrammetry scanning is generally considered more life-like than scanning with 3D scanners. Mobile-based photogrammetry apps such as Qlone can also be used for 3D-capturing a person. Another method for capturing a 3D selfie uses dedicated 3D scanning equipment, which may capture geometry and texture more accurately but takes longer to perform. Scanners may be handheld, tripod-mounted or fitted to another system that will allow the full geometry of a person to be captured. Well-known full-body 3D scanners include the Shapify Booth (based on Artec Eva 3D scanners), the Cobra body scanner by PICS-3D, and the Twindom Twinstant Mobile. Production of 3D selfies is enabled by 3D printing technologies. This includes the ability to 3D print in full color using gypsum-based binder jetting techniques, giving the figurine a sandstone-like texture and look. Other 3D printing processes may be used depending on the desired result. These products can also be produced in a full-colour resin format using Mimaki technology; both of these processes can be found in Selftraits' 3D Selfie products. See also 3D reconstruction Digitization Depth map Full body scanner Photogrammetry Range imaging References 3D printing Computer vision Applications of computer vision Image processing 3D imaging Self-portraits Narcissism 3D scanners Photogrammetry Selfies
3D selfie
Engineering
592
74,918,885
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S25
The Samsung Galaxy S25 is a series of high-end Android-based smartphones developed, manufactured, and marketed by Samsung Electronics as part of its flagship Galaxy S series. They collectively serve as the successor to the Samsung Galaxy S24 series. The S25, S25+ and S25 Ultra models were announced on January 22, 2025, at the Galaxy Unpacked event in San Jose, California, and are expected to be released on February 7, 2025. Additionally, a phone called the Galaxy S25 Edge was teased; however, further details about that phone are not yet known. Lineup The Samsung Galaxy S25 lineup includes four models: the Samsung Galaxy S25, Samsung Galaxy S25+, Samsung Galaxy S25 Ultra, and Samsung Galaxy S25 Edge. Although not much is known about the S25 Edge, as Samsung only teased it on stage, it is a slimmer version of the standard S25; it marks a new addition to the Galaxy S series lineup and the return of the Edge designation, last used on 2016's Samsung Galaxy S7 Edge. The S25 and S25+ share the same display size as their predecessors, whereas the top-end S25 Ultra has a slightly larger display. The flagship S25 features a flat 6.2 inch (155 mm) display. The S25+ has similar hardware, but with a larger 6.7 inch (168 mm) form factor. The S25 Ultra is slightly larger than its predecessor, with a screen size of 6.9 inches (175 mm). The S25 Ultra also features smoother edges, aligning it more closely with the rest of the Galaxy S series of smartphones. The S25 lineup of smartphones is powered by the Snapdragon 8 Elite chipset, which, unlike previous generations of the S series, is used in all S25 smartphones worldwide. Galaxy AI Samsung continues the development of the Galaxy AI features introduced with the Galaxy S24 series in 2024. Galaxy AI is a bundle of artificial-intelligence features deeply integrated into Galaxy devices. It is designed to be accessible and to improve user experience, communication, productivity and creativity. Some recent Galaxy AI features introduced with the Galaxy S25 series include: 1. Now Brief: Personalized Daily Digest The Now Brief feature delivers a tailored daily briefing, designed to provide users with the most relevant information at the right time. This includes updates on sleep scores, energy levels, weather forecasts, traffic conditions, meeting reminders, and daily summaries. The feature adapts over time to offer more accurate and context-specific data based on the user's habits and routines. 2. Audio Eraser: Enhanced Video Audio Quality Audio Eraser leverages AI to improve the quality of videos by isolating and enhancing specific sounds. It allows users to reduce background noise and amplify desired sounds such as voices or music. This feature is integrated into the phone's built-in video editor, making it a convenient tool for content creators and users who wish to refine their videos post-recording. 3. Drawing Assist: AI-Powered Artistic Transformation The Drawing Assist feature allows users to transform basic sketches into polished works of art. With a variety of artistic styles, including Illustration, Watercolor, 3D Cartoon, and Pop Art, the AI enhances hand-drawn sketches. Users can input prompts via text, voice, or sketches, enabling the creation of unique visuals with minimal effort. 4. Gemini deep integration: Both Personal Assistant and AI Chatbot Google Gemini, while available on other Android devices, is more deeply integrated into the system.
Gemini can now connect to system services such as Messages, Notes and Calendar. It takes action based on synergy between Google services, AI knowledge, and system apps. Examples: searching for some information and saving it as a note in Samsung Notes, finding an event online and adding it to the calendar, or summarizing a YouTube video and sending it to a friend. Here is a breakdown of Galaxy AI capabilities, mostly comprising features introduced in 2024: 1. Circle to Search: Snap a circle around any text or image on your screen, and Galaxy AI will instantly search the web for related information. 2. Live Translate: Engage in real-time translations during phone calls or in-person conversations, bridging language gaps seamlessly. 3. Chat Assist: Craft messages with ease by receiving suggestions to improve tone, grammar, and spelling across various messaging apps. An additional feature includes translation of entire chats, designed to break communication barriers and enable easy communication without switching apps. 4. Note Assist: Transform your note-taking by summarizing, fixing grammar, formatting or translating lengthy documents, creating templates, and converting handwritten notes into text. 5. Transcript Assist: Convert voice memos into text transcripts with speaker tags, making it easier to review and share audio content. It is also possible to summarize or translate the voice memo transcript. 6. Browsing Assist: Summarize or translate web pages within the Samsung Internet app, saving you time and enhancing your browsing experience. 7. Generative Photo Editing: Edit photos by removing or repositioning objects, and even generate new content to enhance your images. 8. Edit Suggestions: Receive intelligent recommendations to improve your photos, such as remastering, removing reflections, or adding artistic effects. 9. Generative Wallpaper: Create personalized wallpapers based on selected categories and keywords, adding a unique touch to your device. 10. Instant Slow-mo: Transform standard videos into slow-motion clips just by holding a finger on the video, allowing you to capture and share moments in a new light. You can also share the slow-mo-applied video with a click. 11. Call Assist: Enhances communication by providing real-time audio translation during voice calls in the native Phone app and in calls from messenger apps. This functionality supports 20 languages offline and locally, facilitating seamless conversations between speakers of different languages. Design The S25 and S25+ smartphones have an aluminium body and a glass back, similar to the design of their predecessors. They both use Gorilla Glass Victus 2 for protection. They come in four standard colours: Icy Blue, Mint, Navy and Silver Shadow, with an additional three colours available only through Samsung's online store: Pink Gold, Coral Red and Blue Black. The S25 Ultra has a titanium body and a glass back, similar to the S24 Ultra. The S25 Ultra comes in four standard colours: Titanium Silver Blue, Titanium Black, Titanium White Silver and Titanium Grey, as well as three additional colours that are only available through Samsung's online store: Titanium Jade Green, Titanium Jet Black and Titanium Pink Gold. Specifications Display The S25 series of phones use a "Dynamic LTPO AMOLED 2X" display with HDR10+ support, a 120 Hz refresh rate and 2600 nits peak brightness.
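As a quick arithmetic check of the pixel densities quoted in the specification table below, density follows directly from resolution and diagonal size. This is a minimal sketch; the diagonal sizes are the rounded figures from the lineup section above, which is why the computed Ultra value comes out slightly below the quoted ~505 ppi:

```python
import math

# Pixel density (ppi) from resolution and the rounded diagonal size in inches.
models = {
    "S25":       (2340, 1080, 6.2),
    "S25+":      (3120, 1440, 6.7),
    "S25 Ultra": (3120, 1440, 6.9),  # quoted ~505 ppi implies a ~6.8 in panel
}
for name, (w, h, diag) in models.items():
    ppi = math.hypot(w, h) / diag    # diagonal pixel count / diagonal inches
    print(f"{name}: {ppi:.0f} ppi")
# -> S25: 416 ppi, S25+: 513 ppi, S25 Ultra: 498 ppi (with the rounded 6.9 in)
```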
Additionally, the S25 and S25+ feature Corning Gorilla Glass Victus 2 as protection for the display, whereas the S25 Ultra uses Corning Gorilla Glass Armor 2. All phones feature an ultrasonic in-screen fingerprint sensor. The display on the S25 series of phones can refresh at up to 120 Hz.

| Model | Display size | Resolution | Density | Aspect ratio | Max refresh rate | Variable refresh rate | Shape |
|-----------|--------|-----------|----------|--------|--------|----------------|-----------------------------|
| S25 | 6.2 in | 2340×1080 | ~416 ppi | 19.5:9 | 120 Hz | 1 Hz to 120 Hz | Rounded corners, flat sides |
| S25+ | 6.7 in | 3120×1440 | ~513 ppi | 19.5:9 | 120 Hz | 1 Hz to 120 Hz | Rounded corners, flat sides |
| S25 Ultra | 6.9 in | 3120×1440 | ~505 ppi | 19.5:9 | 120 Hz | 1 Hz to 120 Hz | Rounded corners, flat sides |

Camera The Galaxy S25 and S25+ both have a 50 MP wide sensor, a 10 MP 3x telephoto lens and a 12 MP ultrawide sensor. The S25 Ultra, on the other hand, has a 200 MP wide sensor, a 50 MP 5x periscope telephoto lens, a 10 MP 3x telephoto lens, and a 50 MP ultrawide sensor. All three models feature a 12 MP front-facing camera (also known as a selfie camera). As an upgrade, the S25 series now supports the LOG video recording format, a professional-grade standard, giving more freedom in editing. Batteries There were no improvements in battery capacity compared to the S24 predecessors. Memory and storage Unlike previous generations, all Samsung Galaxy S25 phones launched with 12 GB of RAM. The amount of RAM on an S25 cannot be customized, but, as with previous generations of the S series, the amount of storage can. Software The Galaxy S25 phones were launched with Android 15 and Samsung One UI 7. Samsung has promised seven years of OS and security updates for the S25 series of phones, meaning support may end in 2032. The devices additionally ship with Galaxy AI, Samsung's suite of advanced AI features for Galaxy devices; Galaxy AI, still a relatively young technology, received new updates with the S25 series. References Android (operating system) devices Samsung Galaxy Flagship smartphones Samsung smartphones Mobile phones with multiple rear cameras Mobile phones introduced in 2025 Phablets Mobile phones with 8K video recording Mobile phones with stylus
Samsung Galaxy S25
Technology
1,928
19,870
https://en.wikipedia.org/wiki/Meson
In particle physics, a meson () is a type of hadronic subatomic particle composed of an equal number of quarks and antiquarks, usually one of each, bound together by the strong interaction. Because mesons are composed of quark subparticles, they have a meaningful physical size, a diameter of roughly one femtometre (10⁻¹⁵ m), which is about 0.6 times the size of a proton or neutron. All mesons are unstable, with the longest-lived lasting for only a few hundredths of a microsecond. Heavier mesons decay to lighter mesons and ultimately to stable electrons, neutrinos and photons. Outside the nucleus, mesons appear in nature only as short-lived products of very high-energy collisions between particles made of quarks, such as cosmic rays (high-energy protons and neutrons) and baryonic matter. Mesons are routinely produced artificially in cyclotrons or other particle accelerators in the collisions of protons, antiprotons, or other particles. Higher-energy (more massive) mesons were created momentarily in the Big Bang, but are not thought to play a role in nature today. However, such heavy mesons are regularly created in particle accelerator experiments that explore the nature of the heavier quarks that compose the heavier mesons. Mesons are part of the hadron particle family, which are defined simply as particles composed of two or more quarks. The other members of the hadron family are the baryons: subatomic particles composed of odd numbers of valence quarks (at least three), and some experiments show evidence of exotic mesons, which do not have the conventional valence quark content of two quarks (one quark and one antiquark), but four or more. Because quarks have spin 1/2, the difference in quark number between mesons and baryons results in conventional two-quark mesons being bosons, whereas baryons are fermions. Each type of meson has a corresponding antiparticle (antimeson) in which quarks are replaced by their corresponding antiquarks and vice versa. For example, a positive pion (π+) is made of one up quark and one down antiquark; and its corresponding antiparticle, the negative pion (π−), is made of one up antiquark and one down quark. Because mesons are composed of quarks, they participate in both the weak interaction and the strong interaction. Mesons with net electric charge also participate in the electromagnetic interaction. Mesons are classified according to their quark content, total angular momentum, parity and various other properties, such as C-parity and G-parity. Although no meson is stable, those of lower mass are nonetheless more stable than the more massive, and hence are easier to observe and study in particle accelerators or in cosmic ray experiments. The lightest group of mesons is less massive than the lightest group of baryons, meaning that they are more easily produced in experiments, and thus exhibit certain higher-energy phenomena more readily than do baryons. But mesons can be quite massive: for example, the J/Psi meson (J/ψ) containing the charm quark, first seen in 1974, is about three times as massive as a proton, and the upsilon meson (ϒ) containing the bottom quark, first seen in 1977, is about ten times as massive as a proton. History From theoretical considerations, in 1934 Hideki Yukawa predicted the existence and the approximate mass of the "meson" as the carrier of the nuclear force that holds atomic nuclei together. If there were no nuclear force, all nuclei with two or more protons would fly apart due to electromagnetic repulsion.
Yukawa called his carrier particle the meson, from μέσος mesos, the Greek word for "intermediate", because its predicted mass was between that of the electron and that of the proton, which has about 1,836 times the mass of the electron. Yukawa or Carl David Anderson, who discovered the muon, had originally named the particle the "mesotron", but the name was corrected by the physicist Werner Heisenberg (whose father was a professor of Greek at the University of Munich), who pointed out that there is no "tr" in the Greek word "mesos". The first candidate for Yukawa's meson, in modern terminology known as the muon, was discovered in 1936 by Carl David Anderson and others in the decay products of cosmic ray interactions. The "mu meson" had about the right mass to be Yukawa's carrier of the strong nuclear force, but over the course of the next decade, it became evident that it was not the right particle. It was eventually found that the "mu meson" did not participate in the strong nuclear interaction at all, but rather behaved like a heavy version of the electron, and was eventually classed as a lepton like the electron, rather than a meson. In making this choice, physicists decided that properties other than particle mass should control the classification. There were years of delays in subatomic particle research during World War II (1939–1945), with most physicists working on applied projects for wartime necessities. When the war ended in August 1945, many physicists gradually returned to peacetime research. The first true meson to be discovered was what would later be called the "pi meson" (or pion). During 1939–1942, Debendra Mohan Bose and Bibha Chowdhuri exposed Ilford half-tone photographic plates in the high-altitude mountainous regions of Darjeeling, and observed long curved ionizing tracks that appeared to be different from the tracks of alpha particles or protons. In a series of articles published in Nature, they identified a cosmic particle having an average mass close to 200 times the mass of the electron. This discovery was made in 1947, with improved full-tone photographic emulsion plates, by Cecil Powell, Hugh Muirhead, César Lattes, and Giuseppe Occhialini, who were investigating cosmic ray products at the University of Bristol in England, based on photographic films placed in the Andes mountains. Some of those mesons had about the same mass as the already-known mu "meson", yet seemed to decay into it, leading physicist Robert Marshak to hypothesize in 1947 that it was actually a new and different meson. Over the next few years, more experiments showed that the pion was indeed involved in strong interactions. The pion (as a virtual particle) is also used as a force carrier to model the nuclear force in atomic nuclei (between protons and neutrons). This is an approximation, as the actual carrier of the strong force is believed to be the gluon, which is explicitly used to model the strong interaction between quarks. Other mesons, such as the virtual rho mesons, are used to model this force as well, but to a lesser extent. Following the discovery of the pion, Yukawa was awarded the 1949 Nobel Prize in Physics for his predictions. For a while in the past, the word meson was sometimes used to mean any force carrier, such as "the Z meson", which is involved in mediating the weak interaction. However, this use has fallen out of favor, and mesons are now defined as particles composed of pairs of quarks and antiquarks.
Overview Spin, orbital angular momentum, and total angular momentum Spin (quantum number S) is a vector quantity that represents the "intrinsic" angular momentum of a particle. It comes in increments of 1/2 ħ. Quarks are fermions; specifically, in this case, particles having spin 1/2 (S = 1/2). Because spin projections vary in increments of 1 (that is 1 ħ), a single quark has a spin vector of length 1/2, and has two spin projections, either Sz = +1/2 or Sz = −1/2. Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length S = 1 with three possible spin projections Sz = +1, Sz = 0, and Sz = −1, and their combination is called a vector meson or spin-1 triplet. If two quarks have oppositely aligned spins, the spin vectors add up to make a vector of length S = 0 with only one spin projection Sz = 0, called a scalar meson or spin-0 singlet. Because mesons are made of one quark and one antiquark, they are found in triplet and singlet spin states. The latter are called scalar mesons or pseudoscalar mesons, depending on their parity (see below). There is another quantity of quantized angular momentum, called the orbital angular momentum (quantum number L), that is the angular momentum due to quarks orbiting each other, and it also comes in increments of 1 ħ. The total angular momentum (quantum number J) of a particle is the combination of the two intrinsic angular momenta (spin) and the orbital angular momentum. It can take any value from J = |L − S| up to J = L + S, in increments of 1. Particle physicists are most interested in mesons with no orbital angular momentum (L = 0); therefore the two groups of mesons most studied are the S = 1, L = 0 and S = 0, L = 0 mesons, which correspond to J = 1 and J = 0, although they are not the only ones. It is also possible to obtain J = 1 particles from S = 0 and L = 1. How to distinguish between the S = 1, L = 0 and S = 0, L = 1 mesons is an active area of research in meson spectroscopy. P-parity P-parity is left-right parity, or spatial parity, and was the first of several "parities" discovered, and so is often called just "parity". If the universe were reflected in a mirror, most laws of physics would be identical: things would behave the same way regardless of what we call "left" and what we call "right". This concept of mirror reflection is called parity (P). Gravity, the electromagnetic force, and the strong interaction all behave in the same way regardless of whether or not the universe is reflected in a mirror, and thus are said to conserve parity (P-symmetry). However, the weak interaction does distinguish "left" from "right", a phenomenon called parity violation (P-violation). Based on this, one might think that, if the wavefunction for each particle (more precisely, the quantum field for each particle type) were simultaneously mirror-reversed, then the new set of wavefunctions would perfectly satisfy the laws of physics (apart from the weak interaction). It turns out that this is not quite true: in order for the equations to be satisfied, the wavefunctions of certain types of particles have to be multiplied by −1, in addition to being mirror-reversed. Such particle types are said to have negative or odd parity (P = −1, or alternatively P = −), whereas the other particles are said to have positive or even parity (P = +1, or alternatively P = +). For mesons, parity is related to the orbital angular momentum by the relation $P = (-1)^{L+1}$, where the $(-1)^{L}$ is a result of the parity of the corresponding spherical harmonic of the wavefunction. The "+1" comes from the fact that, according to the Dirac equation, a quark and an antiquark have opposite intrinsic parities.
Therefore, the intrinsic parity of a meson is the product of the intrinsic parities of the quark (+1) and antiquark (−1). As these are different, their product is −1, and so it contributes the "+1" that appears in the exponent. As a consequence, all mesons with no orbital angular momentum (L = 0) have odd parity (P = −1). C-parity C-parity is only defined for mesons that are their own antiparticle (i.e. neutral mesons). It represents whether or not the wavefunction of the meson remains the same under the interchange of their quark with their antiquark. If the wavefunction remains the same under the interchange, the meson is "C even" (C = +1). On the other hand, if the wavefunction changes sign, the meson is "C odd" (C = −1). C-parity is rarely studied on its own, but more commonly in combination with P-parity into CP-parity. CP-parity was originally thought to be conserved, but was later found to be violated on rare occasions in weak interactions. G-parity G-parity is a generalization of the C-parity. Instead of simply comparing the wavefunction after exchanging quarks and antiquarks, it compares the wavefunction after exchanging the meson for the corresponding antimeson, regardless of quark content. If the wavefunction remains the same under the exchange, the meson is "G even" (G = +1); on the other hand, if it changes sign, the meson is "G odd" (G = −1). Isospin and charge Original isospin model The concept of isospin was first proposed by Werner Heisenberg in 1932 to explain the similarities between protons and neutrons under the strong interaction. Although they had different electric charges, their masses were so similar that physicists believed that they were actually the same particle. The different electric charges were explained as being the result of some unknown excitation similar to spin. This unknown excitation was later dubbed isospin by Eugene Wigner in 1937. When the first mesons were discovered, they too were seen through the eyes of isospin, and so the three pions were believed to be the same particle, but in different isospin states. The mathematics of isospin was modeled after the mathematics of spin. Isospin projections varied in increments of 1 just like those of spin, and to each projection was associated a "charged state". Because the "pion particle" had three "charged states", it was said to be of isospin I = 1. Its "charged states" π+, π0, and π− corresponded to the isospin projections I3 = +1, I3 = 0, and I3 = −1 respectively. Another example is the "rho particle", also with three charged states. Its "charged states" ρ+, ρ0, and ρ− corresponded to the isospin projections I3 = +1, I3 = 0, and I3 = −1 respectively. Replacement by the quark model This belief lasted until Murray Gell-Mann proposed the quark model in 1964 (containing originally only the u, d, and s quarks). The success of the isospin model is now understood to be an artifact of the similar masses of the u and d quarks. Because the u and d quarks have similar masses, particles made of the same number of them also have similar masses. The exact u and d quark composition determines the charge, because u quarks carry charge +2/3 whereas d quarks carry charge −1/3. For example, the three pions all have different charges (π+ ($u\bar{d}$), π0 (a quantum superposition of $u\bar{u}$ and $d\bar{d}$ states), and π− ($d\bar{u}$)), but they all have similar masses (c. 140 MeV/c2), as they are each composed of the same total number of up and down quarks and antiquarks. Under the isospin model, they were considered a single particle in different charged states. After the quark model was adopted, physicists noted that the isospin projections were related to the up and down quark content of particles by the relation $I_3 = \frac{1}{2}\left[(n_u - n_{\bar{u}}) - (n_d - n_{\bar{d}})\right]$, where the n-symbols are the count of up and down quarks and antiquarks.
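As a quick illustration of this relation, the following sketch (illustrative only; the quark-content assignments are the standard ones quoted above) recovers the isospin projections of the three pions:

```python
# I3 = 1/2 * [(n_u - n_ubar) - (n_d - n_dbar)], applied to the pions.
def i3(n_u, n_ubar, n_d, n_dbar):
    return ((n_u - n_ubar) - (n_d - n_dbar)) / 2

quark_content = {
    "pi+": (1, 0, 0, 1),          # u d-bar
    "pi0": (0.5, 0.5, 0.5, 0.5),  # equal-weight superposition of u u-bar and d d-bar
    "pi-": (0, 1, 1, 0),          # d u-bar
}
for name, counts in quark_content.items():
    print(name, "I3 =", i3(*counts))   # -> +1.0, 0.0, -1.0
```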
In the "isospin picture", the three pions and three rhos were thought to be the different states of two particles. However, in the quark model, the rhos are excited states of pions. Isospin, although conveying an inaccurate picture of things, is still used to classify hadrons, leading to unnatural and often confusing nomenclature. Because mesons are hadrons, the isospin classification is also used for them all, with the quantum number calculated by adding for each positively charged up-or-down quark-or-antiquark (up quarks and down antiquarks), and for each negatively charged up-or-down quark-or-antiquark (up antiquarks and down quarks). Flavour quantum numbers The strangeness quantum number S (not to be confused with spin) was noticed to go up and down along with particle mass. The higher the mass, the lower (more negative) the strangeness (the more s quarks). Particles could be described with isospin projections (related to charge) and strangeness (mass) (see the uds nonet figures). As other quarks were discovered, new quantum numbers were made to have similar description of udc and udb nonets. Because only the u and d mass are similar, this description of particle mass and charge in terms of isospin and flavour quantum numbers only works well for the nonets made of one u, one d and one other quark and breaks down for the other nonets (for example ucb nonet). If the quarks all had the same mass, their behaviour would be called symmetric, because they would all behave in exactly the same way with respect to the strong interaction. However, as quarks do not have the same mass, they do not interact in the same way (exactly like an electron placed in an electric field will accelerate more than a proton placed in the same field because of its lighter mass), and the symmetry is said to be broken. It was noted that charge (Q) was related to the isospin projection (I3), the baryon number (B) and flavour quantum numbers (S, C, , T) by the Gell-Mann–Nishijima formula: where S, C, , and T represent the strangeness, charm, bottomness and topness flavour quantum numbers respectively. They are related to the number of strange, charm, bottom, and top quarks and antiquark according to the relations: meaning that the Gell-Mann–Nishijima formula is equivalent to the expression of charge in terms of quark content: Classification Mesons are classified into groups according to their isospin (I), total angular momentum (J), parity (P), G-parity (G) or C-parity (C) when applicable, and quark (q) content. The rules for classification are defined by the Particle Data Group, and are rather convoluted. The rules are presented below, in table form for simplicity. Types of meson Mesons are classified into types according to their spin configurations. Some specific configurations are given special names based on the mathematical properties of their spin configuration. Nomenclature Flavourless mesons Flavourless mesons are mesons made of pair of quark and antiquarks of the same flavour (all their flavour quantum numbers are zero: = 0, = 0, = 0, = 0). The rules for flavourless mesons are: In addition When the spectroscopic state of the meson is known, it is added in parentheses. When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses. When the meson is in its ground state, nothing is added in parentheses. Flavoured mesons Flavoured mesons are mesons made of pair of quark and antiquarks of different flavours. 
The rules are simpler in this case: the main symbol depends on the heavier quark, the superscript depends on the charge, and the subscript (if any) depends on the lighter quark. In table form, they are: In addition If JP is in the "normal series" (i.e., JP = 0+, 1−, 2+, 3−, ...), a superscript ∗ is added. If the meson is not pseudoscalar (JP = 0−) or vector (JP = 1−), J is added as a subscript. When the spectroscopic state of the meson is known, it is added in parentheses. When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses. When the meson is in its ground state, nothing is added in parentheses. Exotic mesons There is experimental evidence for particles that are hadrons (i.e., are composed of quarks) and are color-neutral with zero baryon number, and thus by conventional definition are mesons. Yet, these particles do not consist of a single quark/antiquark pair, as all the other conventional mesons discussed above do. A tentative category for these particles is exotic mesons. There are at least five exotic meson resonances that have been experimentally confirmed to exist by two or more independent experiments. The most statistically significant of these is the Z(4430), discovered by the Belle experiment in 2007 and confirmed by LHCb in 2014. It is a candidate for being a tetraquark: a particle composed of two quarks and two antiquarks. See the main article above for other particle resonances that are candidates for being exotic mesons. List Pseudoscalar mesons [a] Makeup inexact due to non-zero quark masses. [b] PDG reports the resonance width (Γ). Here the conversion τ = ħ/Γ is given instead. [c] Strong eigenstate. No definite lifetime (see kaon notes below). [d] The masses of the K0L and K0S are given as that of the K0; however, the masses of the K0L and K0S are known to differ slightly. [e] Weak eigenstate. Makeup is missing a small CP-violating term (see notes on neutral kaons below). Vector mesons [f] PDG reports the resonance width (Γ). Here the conversion τ = ħ/Γ is given instead. [g] The exact value depends on the method used. See the given reference for detail. Notes on neutral kaons There are two complications with neutral kaons: Due to neutral kaon mixing, the K0S and K0L are not eigenstates of strangeness. However, they are eigenstates of the weak force, which determines how they decay, so these are the particles with definite lifetime. The linear combinations given in the table for the K0S and K0L are not exactly correct, since there is a small correction due to CP violation. See CP violation in kaons. Note that these issues also exist in principle for other neutral, flavored mesons; however, the weak eigenstates are considered separate particles only for kaons because of their dramatically different lifetimes. See also Mesonic molecule Standard Model Footnotes References External links — Compiles authoritative information on particle properties — An interactive visualisation allowing physical properties to be compared Further reading Pauli, Wolfgang (1948) Meson Theory of Nuclear Forces, Interscience Publishers, Inc. New York Bosons Hadrons Force carriers
Meson
Physics
4,728
44,632,967
https://en.wikipedia.org/wiki/Psilocybe%20singularis
Psilocybe singularis is a species of psilocybin mushroom in the family Hymenogastraceae. Found in Oaxaca, Mexico, where it grows on bare clay soil in mesophytic forest, it was described as new to science in 2004. See also List of Psilocybe species List of psilocybin mushrooms References External links Entheogens Fungi described in 2004 Psychoactive fungi singularis Psychedelic tryptamine carriers Fungi of Mexico Taxa named by Gastón Guzmán Fungi without expected TNC conservation status Fungus species
Psilocybe singularis
Biology
111
43,144,065
https://en.wikipedia.org/wiki/Samsung%20Gear%20Live
The Samsung Gear Live is an Android Wear-based smartwatch announced and released by Samsung and Google on June 25, 2014. It was released along with the LG G Watch as a launch device for Android Wear, a modified version of Android designed specifically for smartwatches and other wearables. The Gear Live is the fifth device launched in the Samsung Gear family of wearables. It is compatible with all smartphones running Android 4.3 or higher that support Bluetooth Smart. The Gear Live was initially available in the United States and Canada at US$199 on the Google Play Store, and from Google's Play Store in the UK for £169. As of July 2014, the Gear Live was also available in Australia, France, Germany, India, Ireland, Italy, Japan, South Korea, and Spain. Hardware It is IP67 certified for dust and water resistance. It also has a steel exterior and a user-replaceable 22 mm strap. The watch features a power button and a heart rate monitor. Software The notification system is based on Google Now technology, enabling the watch to receive and process spoken commands given by the user. Reception JR Raphael of Computerworld preferred the Gear Live's illuminated display, more distinctive design and heart-rate sensor over the LG G Watch, but did not like the poor outdoor visibility, the hard-to-use charger, the awkward watch band, and the inclusion of a redundant preinstalled stopwatch application. See also Moto 360 References External links Android (operating system) devices Products introduced in 2014 Wear OS devices Smartwatches Samsung wearable devices
Samsung Gear Live
Technology
329
66,494,849
https://en.wikipedia.org/wiki/Pliofilm
Pliofilm was a plastic wrap made by the Goodyear Tire and Rubber Company at plants in the US state of Ohio. Invented in the early 1930s, it was made by dissolving rubber in a benzene solvent and treating it with gaseous hydrochloric acid. Pliofilm was more stable in a range of humidities than earlier cellulose-based wraps and became popular as a food wrap. Its manufacture exposed workers to carcinogenic benzene and, when an additive was used to improve durability, caused dermatitis. Production of Pliofilm was hampered during World War II because the Japanese occupation of much of Southeast Asia cut off much of the rubber supply. During the war years production was given over entirely to military purposes, with Pliofilm being used to wrap machinery and to waterproof firearms. After the war a plant was opened in Wolverhampton, England, and commercial production continued until the late 1980s. Manufacture Pliofilm is a transparent film made of rubber hydrochloride. It is impermeable to water and water vapour and non-flammable. Pliofilm was manufactured by dissolving natural rubber in the solvent benzene. The solution was kept in a tank and treated with gaseous hydrochloric acid. The material was then neutralised with an alkali. The product was cast as a sheet on an endless belt which passed through a dryer that drove off the solvent. The finished product was around 30% chlorine. It could be made thinner by stretching whilst being heated, and a range of thicknesses were sold. Thicker sheets could be produced by laminating the product, combining several sheets with the use of rubber cement. History and uses Pliofilm was invented by Harold J. Osterhof at the Goodyear Tire and Rubber Company in the early 1930s and first marketed in 1934. The product found early use as a food wrap, its very low oxygen permeability helping to keep foods fresh. Its clinginess and better stability at a range of humidities were advantages over the cellulose wrapping films used previously; Pliofilm became about as popular as Cellophane by 1937 and had supplanted cellulose films by 1942. Pliofilm could also function as a shrink wrap and was marketed as a means to reseal bottles (it was advised to place the Pliofilm over an embroidery hoop and to heat it while twisting the bottle). The material was also used to manufacture aprons and protective sleeves to protect factory workers from hazardous substances. Pliofilm saw widespread use during World War II as a means of protecting tools and engines during shipping. For aviation parts a modified product was produced; a chemical known as RMF was added in quantities of 1–5% to make the product less susceptible to deterioration by ultraviolet light. RMF led to dermatitis in workers who had contact with it. The United States Public Health Service investigated the factories involved and recommended that workers wear protective sleeves made from ordinary Pliofilm. The manufacturing process also caused workers to become exposed to benzene. A study of Pliofilm workers at Goodyear's Akron and St. Marys, Ohio, plants between 1936 and 1976 was used as the basis for determining the cancer slope factor and occupational exposure standards for benzene. The United States Armed Forces used Pliofilm to waterproof firearms during World War II amphibious landings. Sleeves were produced in three sizes to suit pistols, rifles, and sub-machine guns and were sealed by tying a knot in the sleeve or with an elastic band.
It was intended that soldiers would tear off the sleeve after landing, though some troops kept them on inland due to fields having been flooded by the Germans as a defensive measure. The Pliofilm usually trapped enough air to keep the firearm buoyant if dropped in water. Because the sleeve prevented use of the weapon's regular sling some troops fashioned ad-hoc slings from rope that could be used over the Pliofilm. The Houston Chronicle series "D-Day In Color" noted that Pliofilm wrapped around weaponry is evident in an image of United States Army infantry at the Normandy landings. Pliofilm manufacture was hindered by the Japanese occupation of rubber-producing countries in Southeast Asia. Commercial outputs were stopped and the entire production given over to military uses, leading to a large commercial demand and a backlog of orders after the war's end. In the post-war years Pliofilm saw use as a food wrap, to package drugs and textiles, and as a means of laminating paper. It was also marketed as Vitafilm. Production was extended abroad to the Goodyear factory in Wolverhampton, England, in the late 1940s. The United States Mint switched its packaging for mint set coins from Cellophane to Pliofilm in 1955. The American Chemical Society awarded Harold J. Osterhof the 1971 Charles Goodyear Medal for inventing Pliofilm. It remained commercially available in 1987. References Packaging materials Food storage Transparent materials Organic polymers Rubber
Pliofilm
Physics,Chemistry
1,021
24,673,328
https://en.wikipedia.org/wiki/Marine%20mucilage
Marine mucilage, also referenced as sea snot or sea saliva, is thick, gelatinous organic matter found around the world's oceans, lately observed in the Mediterranean Sea. Marine mucilage carries diverse microorganisms. Triggers that cause it to form include increased phosphorus, drought conditions, and climate change. Its effects are widespread, affecting fishing industries, smothering sea life, and spreading bacteria and viruses. Citizens and governments around the world are working to institute countermeasures, including treatment, seawater cleanup, and other public policies. Composition Marine mucilage has many components, including diverse microorganisms such as viruses and prokaryotes, debris, proteins, minerals, and exopolymeric compounds with colloidal properties. Although various historical definitions have not been consolidated, it is agreed that mucilages are complex chemical substances as well as complex natural materials. Its composition can change over time. Causes Marine mucilage appears following an increase of phosphorus. In one 2021 case phosphorus values were three to four times higher than the previous year. Other excess nutrients, combined with drought conditions, prolonged warm sea temperatures, and calm weather, also contributed. Marine mucilage is also produced by phytoplankton when they are stressed. Anthropogenic global climate change is likely increasing marine mucilage. Warmer, slower-moving waters increase production and allow it to accumulate in massive sheets. In the Mediterranean Sea, the frequency of marine mucilage events increases with warm temperature anomalies. Marine mucilage and biogeochemistry Marine mucilage is a natural occurrence in marine environments, but its presence in excessive amounts can indicate environmental stress and poor water quality. Biogeochemistry plays a crucial role in the formation and dynamics of marine mucilage. Factors such as nutrient availability, temperature, salinity, and microbial activity influence the production and degradation of organic matter that contributes to mucilage formation. Excessive nutrients, often from anthropogenic sources such as agricultural runoff and wastewater discharge, can accelerate phytoplankton growth and mucilage formation, leading to eutrophication. Understanding how mucilage interacts with biogeochemistry is vital for monitoring and managing coastal ecosystems. Recent studies have utilized advanced remote sensing techniques, such as Sentinel-2 satellite imagery, to map mucilage distribution and assess environmental conditions. These images, combined with advanced processing techniques, allowed researchers to notice subtle changes in water quality and identify areas affected by mucilage accumulations through the use of spectral indices such as the Normalized Difference Turbidity Index (NDTI), the Normalized Difference Water Index (NDWI), and the Automated Mucilage Extraction Index (AMEI). By employing spectral indices and deep learning methods like convolutional neural networks (CNNs), researchers can improve mucilage detection over large areas. By integrating remote sensing data with biogeochemical models and field observations, researchers can gain insight into the underlying mechanisms that drive mucilage formation and develop strategies to mitigate its effects on coastal environments. The carbon cycle is also affected by marine mucilage. The release of dissolved organic carbon (DOC) from mucilage contributes to the organic carbon reserve in the marine ecosystem.
This infusion of organic carbon stimulates the growth and metabolism of microbial communities in and around the mucilage. As these microbes consume DOC, they respire and convert organic carbon into carbon dioxide (CO2) through microbial respiration. This cycle contributes to the exchange of CO2 between the ocean and the atmosphere, potentially affecting atmospheric CO2 levels and global carbon budgets. Mucilage events affect the efficiency of the biological pump, a vital mechanism in the ocean carbon cycle. The biological pump explains how carbon moves from the ocean surface to its depths through the sinking of organic particles such as marine snow and phytoplankton. By trapping organic matter and microorganisms, mucilage can accelerate the sinking rate of organic particles and facilitate their transfer to deeper ocean layers. History Marine mucilage was first reported in 1729. The Deepwater Horizon oil spill in the Gulf of Mexico created large amounts of marine mucilage. Scientists are not sure of the mechanism for this, but one theory asserts that a massive kill of microscopic marine life created a "blizzard" of marine snow. Scientists worry that the mass of marine mucilage could pose a biohazard to surviving marine life in the area. Marine mucilage left by the spill likely resulted in the loss of sea life in the Gulf, as evidenced by a dead field of deepwater coral 11 kilometers from the Deepwater Horizon station. The Mediterranean experienced the worst effects of marine mucilage in 2021, when exponential growth afflicted it and other seas. In early 2021, marine mucilage spread in the Sea of Marmara, due to pollution from wastewater dumped into seawater, which led to the proliferation of phytoplankton and threatened the marine biome. The port of Erdek on the Sea of Marmara was covered by mucilage, and Turkish workers embarked on a massive effort to vacuum it up in June 2021. Yalıköy port in Ordu Province, on the Black Sea, witnessed accumulating mucilage in June 2021. Fines were issued to companies discovered to be dumping wastewater. Effects Increasing marine mucilage has become an issue in public health, economic, and environmental matters. Excessive marine mucilage was observed as early as 2009. Public health While marine mucilage is not toxic to humans, public health concerns are associated with it. Due to its complex makeup, marine mucilage contains pathogenic bacteria and transports marine diseases. The majority of such diseases affect both marine invertebrates and vertebrates. Economic Marine mucilage has had an impact on economies around the world, especially those that revolve around the Mediterranean. Marine mucilage has long been seen as a nuisance to the fishing industry, as it clogs fishing nets. Coastal towns that rely on tourism suffer from unappealing waters; marine mucilage produces an offensive smell and makes the ocean unsuitable for bathing. Environmental Marine mucilage can coat the gills of sea creatures subsumed in it, cutting off oxygen and killing them. Marine mucilage floating on the surface can also significantly limit the sunlight that nourishes coral and vegetation. Countermeasures Countermeasures include collecting marine mucilage from the sea surface and laying barriers on the sea surface to prevent it from spreading. Long-term countermeasures include improving wastewater treatment, creating marine protected areas, and limiting climate change.
Other approaches involve attracting activity, such as tourism, that prevents the water from stagnating for long periods, and introducing marine species that can consume the excess nutrients. See also References Aquatic ecology Biological oceanography Wikipedia Student Program
Marine mucilage
Biology
1,406
51,687,009
https://en.wikipedia.org/wiki/Oracle%20Cloud
Oracle Cloud is a cloud computing service offered by Oracle Corporation providing servers, storage, network, applications and services through a global network of Oracle Corporation managed data centers. The company allows these services to be provisioned on demand over the Internet. Oracle Cloud provides infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and data as a service (DaaS). These services are used to build, deploy, integrate, and extend applications in the cloud. The platform supports numerous open standards (SQL, HTML5, REST, etc.), open-source applications (Kubernetes, Spark, Hadoop, Kafka, MySQL, Terraform, etc.), and a variety of programming languages, databases, tools, and frameworks including Oracle-specific, open source, and third-party software and systems. Services Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) Oracle's cloud infrastructure was made generally available (GA) on October 20, 2016 under the name "Oracle Bare Metal Cloud Services." Oracle Bare Metal Cloud Services was rebranded as Oracle Cloud Infrastructure in 2018 and dubbed Oracle's "Generation 2 Cloud" at Oracle OpenWorld 2018. Oracle Cloud Infrastructure offerings include the following services: Compute: The company offers virtual machine instances in different shapes (VM sizes) catering to different types of workloads and performance characteristics. It also provides on-demand bare metal servers and bare metal GPU servers, without a hypervisor. In 2016, Oracle Cloud Infrastructure launched with bare metal instances powered by Intel processors. In 2018, Oracle Cloud added bare metal instances powered by AMD processors, followed by Ampere cloud-native processors in 2021. In 2021, Oracle also released its first VM-based compute instances based on Arm processors. Storage: The platform provides block volumes, file storage, object storage, and archive storage for database, analytics, content, and other applications across common protocols and APIs. Networking: This cloud platform provides networks with fully configurable IP addresses, subnets, routing, and firewalls to support new or existing private networks with end-to-end security. Governance: For auditing, identity and access management, the platform has data integrity checks, traceability, and access management features. Database Management / Data Management: Oracle offers a data management platform for database workloads as well as hyper-scale big data and streaming workloads including OLTP, data warehousing, Spark, machine learning, text search, image analytics, data catalog, and deep learning. The platform allows Oracle, MySQL, and NoSQL databases to be deployed on demand as managed cloud services. Oracle Databases uniquely offer the Oracle Autonomous Database (optimized for data warehouse, transaction processing, or JSON), the Exadata shape, as well as Real Application Clusters (RAC). Load Balancing: The cloud platform offers load balancing capability to automatically route traffic across fault domains and availability domains for high availability and fault tolerance for hosted applications. Edge Services: These services can monitor the path between users and resources and adapt to changes and outages. They include Domain Name System (DNS) services from Oracle's acquisition of Dyn.
FastConnect: The cloud platform provides private connectivity across on-premises and cloud networks through providers like Equinix, AT&T, and Colt. Application Development: For application development, the company's cloud offers an open, standards-based application development platform to build, deploy, and manage API-first, mobile-first cloud applications. This platform supports container-native, cloud-native, and low-code development. It also provides a DevOps platform for CI/CD, diagnostics for Java applications, and integration with SaaS and on-premises applications. Services include Java, mobile, digital assistants (an evolution from chatbots), messaging, application container cloud, developer cloud, visual builder, API catalog, AI platform, DataScience.com (acquired by Oracle) and blockchain. Integration: This is a platform offering with adapters to integrate on-premises and cloud applications. Capabilities include data integration and replication, API management, and integration analytics, along with data migration and integration. Services offered include data integration platform cloud, data integrator cloud service, GoldenGate cloud service, integration cloud, process cloud service, API platform cloud service, apiary cloud service, and SOA cloud service. Business Analytics: The company provides a business analytics platform which can analyze and generate insights from data across various applications, data warehouses, and data lakes. The services offered include analytics cloud, business intelligence, big data discovery, big data preparation, data visualization, and Essbase. Security: The Oracle Cloud Platform provides identity and security applications for providing secure access and monitoring of hybrid cloud environments and addressing IT governance and compliance requirements. This platform delivers an identity SOC (Security Operations Center) through a combined offering of SIEM, UEBA, CASB, and IDaaS. The services offered include Identity Cloud Service and CASB Cloud Service. Management: The platform provides an integrated monitoring, management, and analytics platform, which also applies machine learning and big data techniques to the operational data set. The platform is used to improve IT stability, prevent application outages, improve DevOps, and harden security. Services offered include Application Performance Monitoring, Infrastructure Monitoring, Log Analytics, Orchestration, IT Analytics, Configuration and Compliance, Security Monitoring, and Analytics. Content and Experience: This is a platform for content, website, and workflow management. This service is used to provide content collaboration and web presence. The tool comes integrated with Oracle on-premises and SaaS services. The services offered are Content and Experience Cloud, WebCenter Portal Cloud, and DIVA Cloud. In 2016, Oracle acquired Dyn, an internet infrastructure company. On May 16, 2018 Oracle announced that it had acquired DataScience.com, a privately held cloud workspace platform for data science projects and workloads. In April 2020, Oracle became the cloud infrastructure provider for Zoom, an online and video meeting platform. The same month, Nissan announced its migration to Oracle Cloud for its high-performance computing (HPC) workloads used for simulating the structural impacts of a car design. Xerox announced a partnership with Oracle Cloud in 2021, where Xerox will use Oracle's cloud-computing capabilities within its business incubator.
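In practice these infrastructure resources are created and inspected programmatically. The sketch below uses the OCI Python SDK's standard quickstart calls and assumes a configured ~/.oci/config profile; it is illustrative only, not an excerpt from Oracle's documentation:

```python
import oci

# Read credentials from the default profile in ~/.oci/config.
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
compute = oci.core.ComputeClient(config)

tenancy_id = config["tenancy"]

# List the availability domains visible to this tenancy.
for ad in identity.list_availability_domains(tenancy_id).data:
    print("Availability domain:", ad.name)

# List a few compute shapes (VM and bare metal sizes) offered in the region.
for shape in compute.list_shapes(tenancy_id).data[:5]:
    print("Shape:", shape.shape)
```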
Software as a Service (SaaS)

Oracle provides SaaS applications, also known as Oracle Cloud Applications. These applications are offered across a variety of product lines and industry sectors, with various deployment options to adhere to compliance standards. Oracle Cloud Applications include:

Customer Experience (CX)
Human Capital Management (HCM)
Enterprise Resource Planning (ERP)
Supply Chain Management (SCM)
Enterprise Performance Management (EPM)
Internet of Things Applications (IoT)
SaaS Analytics
Data
Industry Solutions (communications, financial services, consumer goods, high tech and manufacturing, higher education, hospitality, utilities)
Deployment (adhering to standards for sectors such as financial services, retail services, public sector, defense)
Blockchain Cloud Service (in partnership with SAP, IBM and Microsoft)
Blockchain Applications

On July 28, 2016, Oracle bought NetSuite, often described as the first cloud company, for $9.3 billion.

Data as a Service (DaaS)

This platform is known as the Oracle Data Cloud. It aggregates and analyzes consumer data, powered by Oracle ID Graph, across channels and devices to create a cross-channel understanding of consumers.

Deployment models

Oracle Cloud is available in 44 regions as of July 2023, including North America, South America, the UK, the European Union, the Middle East, Africa, India, Australia, Korea, and Japan. Oracle Cloud is available as a public cloud (Oracle-managed regions); to selected government agencies as an Oracle-managed government cloud in the United States (with FedRAMP High and DISA SRG IL5 compliance) and the United Kingdom; and as a "private cloud" or "hybrid cloud", either as an Oracle-managed database-only service or as a full-service dedicated region, which Oracle calls "Cloud at Customer".

Architecture

Oracle's public and government cloud is offered through a global network of Oracle-managed data centers, connected by an Oracle-managed backbone network. Oracle's Exadata Cloud at Customer leverages this network for control plane services. Oracle deploys its cloud in Regions, typically with two geographically distributed regions in each country for disaster resiliency with data sovereignty. Inside each Region there is at least one fault-independent Availability Domain, with three fault-tolerant Fault Domains per Availability Domain. Each Availability Domain contains an independent data center with power, thermal, and network isolation. Oracle Cloud hosts customer-accessible cloud infrastructure and platform services, as well as end-user-accessible software as a service, from these cloud regions.

See also
Oracle Advertising and Customer Experience (CX)
Oracle Enterprise Resource Planning Cloud
Oracle HCM Cloud

References

External links
End-to-End Automation Testing for Oracle Cloud

Oracle Corporation Cloud computing providers Cloud computing Cloud infrastructure Cloud platforms Cloud storage Internet properties established in 2010 Oracle Cloud Services
Oracle Cloud
Technology
1,895
36,038,696
https://en.wikipedia.org/wiki/Eta1%20Coronae%20Australis
Eta1 Coronae Australis, Latinized from η1 CrA, is a suspected astrometric binary star system in the constellation of Corona Australis. It is visible to the naked eye as a dim, white-hued point of light with an apparent visual magnitude of 5.456. Parallax measurements put it at a distance of 317 light-years from the Sun. The visible component is an A-type main-sequence star with a stellar classification of A3V, which indicates it is generating energy through core hydrogen fusion. Its spectrum shows broadened absorption lines caused by rapid rotation, with a projected rotational velocity of 122.3 km/s. The star is radiating 58 times the luminosity of the Sun from its photosphere at an effective temperature of 8,371 K. References A-type main-sequence stars Astrometric binaries Corona Australis Corona Australis, Eta1 Durchmusterung objects 7062 173715 092308
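As a worked check of the distance quoted above: the standard relation between parallax and distance, together with 1 pc ≈ 3.26 ly, reproduces the 317 light-year figure from a parallax of roughly 10.3 milliarcseconds (a value back-computed here from the quoted distance for illustration, not taken from a catalogue):

    d = \frac{1\,\mathrm{pc}}{\pi\,[\mathrm{arcsec}]}
      = \frac{1}{0.0103}\,\mathrm{pc}
      \approx 97\,\mathrm{pc}
      \approx 97 \times 3.26\,\mathrm{ly}
      \approx 317\,\mathrm{ly}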
Eta1 Coronae Australis
Astronomy
222
21,064,035
https://en.wikipedia.org/wiki/Locks%20with%20ordered%20sharing
In databases and transaction processing, the term locks with ordered sharing comprises several variants of the two-phase locking (2PL) concurrency control protocol, generated by changing the blocking semantics of locks upon conflicts: a conflicting lock request may be granted immediately, provided the conflicting operations, and ultimately the transactions' completions, occur in the order in which the locks were acquired. Further softening of the lock semantics in this way eliminates lock-induced thrashing. See also Autonomic computing References D. Agrawal, A. El Abbadi, A. E. Lang: The Performance of Protocols Based on Locks with Ordered Sharing, IEEE Transactions on Knowledge and Data Engineering, Volume 6, Issue 5, October 1994, pp. 805–818. Mahmoud, H. A., Arora, V., Nawab, F., Agrawal, D., & El Abbadi, A. (2014). Maat: Effective and scalable coordination of distributed transactions in the cloud. Proceedings of the VLDB Endowment, 7(5), 329–340. Data management Databases Concurrency control Transaction processing
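As a toy illustration of the idea described above (not the published protocol in full, which distinguishes lock modes and imposes further rules), the sketch below grants conflicting lock requests immediately but records an ordering constraint that is enforced at commit time; for simplicity, every pair of locks on the same item is treated as conflicting, as for write locks:

    class LockTable:
        def __init__(self):
            self.holders = {}       # data item -> transactions, in lock-grant order
            self.wait_before = {}   # txn -> set of txns that must commit first

        def acquire(self, txn, item):
            earlier = self.holders.setdefault(item, [])
            for other in earlier:
                # Record the ordering constraint instead of blocking.
                self.wait_before.setdefault(txn, set()).add(other)
            earlier.append(txn)     # the lock is granted immediately

        def can_commit(self, txn, committed):
            # The wait is deferred to commit, not lock acquisition.
            return self.wait_before.get(txn, set()) <= committed

    table = LockTable()
    table.acquire("T1", "x")
    table.acquire("T2", "x")                          # would block under plain 2PL
    print(table.can_commit("T2", committed=set()))    # False: T1 must finish first
    print(table.can_commit("T2", committed={"T1"}))   # True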
Locks with ordered sharing
Technology
186
11,390,850
https://en.wikipedia.org/wiki/Hydraulic%20telegraph
A hydraulic telegraph refers to either of two different semaphore systems involving the use of water-based mechanisms as a telegraph. The earliest was developed in 4th-century BC Greece, while the other was developed in 19th-century AD Britain. The Greek system was deployed in combination with semaphoric fires, while the later British system was operated purely by hydraulic fluid pressure.

Although both systems employed water in their sending and receiving devices, their transmission media were completely different. The ancient Greek system transmitted its semaphoric information to the receiver visually, which limited its use to line-of-sight distances in good-visibility weather conditions only. The 19th-century British system used water-filled pipes to effect changes in the water level of the receiver unit (similar to a transparent water-filled flexible tube used as a level indicator), thus limiting its range to the distance over which sufficient hydraulic pressure could be generated at the transmitter's device. While the Greek device was extremely limited in the codes (and hence the information) it could convey, the British device was never deployed in operation other than for very short-distance demonstrations. Although the British device could be used in any visibility within its range of operation, it could not work in freezing temperatures without additional infrastructure to heat the pipes, which contributed to its impracticality.

Greek hydraulic semaphore system

The ancient Greek design was described in the 4th century BC by Aeneas Tacticus and in the 3rd century BC by the historian Polybius. The system involved identical containers on separate hills, not connected to each other; each container was filled with water, and a vertical rod floated within it. Each rod was inscribed with various predetermined codes at points along its height. To send a message, the sending operator would use a torch to signal the receiving operator; once the two were synchronized, they would simultaneously open the spigots at the bottom of their containers. Water would drain out until the water level reached the desired code, at which point the sender would signal with his torch, and the operators would simultaneously close their spigots. Thus the length of time between the sender's torch signals could be correlated with specific predetermined codes and messages. A contemporary description of the ancient telegraphic method was provided by Polybius in The Histories. Modern experiments show that the system can achieve a data transfer rate of about 151 letters per hour.

British hydraulic semaphore system

The British civil engineer Francis Whishaw, who later became a principal in the General Telegraph Company, publicized a hydraulic telegraph in 1838 but was unable to deploy it commercially. By applying pressure at a transmitter device connected to a water-filled pipe which travelled all the way to a similar receiver device, he was able to effect a change in the water level which would then indicate coded information to the receiver's operator. The system was estimated to cost £200 per mile (1.6 km) and could convey a vocabulary of 12,000 words. The U.K.'s Mechanics' Magazine described the system in March 1838, concluding speculatively that the "... hydraulic telegraph may supersede the semaphore and the galvanic telegraph".
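To make the Greek signalling procedure concrete, the following is a minimal simulation sketch; the drain rate and the code table are invented for illustration and do not come from the ancient sources:

    # Both stations drain identical vessels at the same rate, so the time
    # between the sender's two torch signals maps to a float level, which
    # in turn maps to a pre-agreed message inscribed on the rod.

    DRAIN_RATE_CM_PER_S = 0.5        # assumed fall of the water level
    CODES = {                        # assumed rod inscriptions (cm from the top)
        5.0: "cavalry approaching",
        10.0: "infantry approaching",
        15.0: "ships sighted",
    }

    def sending_time(level_cm):
        """How long the sender keeps the spigot open to reach a mark."""
        return level_cm / DRAIN_RATE_CM_PER_S

    def decode(elapsed_s):
        """Receiver converts the torch-to-torch interval back to a message."""
        level = elapsed_s * DRAIN_RATE_CM_PER_S
        nearest = min(CODES, key=lambda mark: abs(mark - level))
        return CODES[nearest]

    print(decode(sending_time(10.0)))   # -> "infantry approaching"

Because both operators rely on equal drain rates and synchronized spigots, any mismatch in vessel geometry or timing corrupts the message, which is one reason the scheme could carry only a small, fixed vocabulary.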
See also Byzantine beacon system Fryctoria Heliograph Optical communication Signal lamp References External links Connected Earth History of telecommunications Telegraphy Optical communications Ancient Greek technology Ancient Greek military terminology Ancient Greek military equipment Communications in Greece Semaphore
Hydraulic telegraph
Engineering
700
52,993,129
https://en.wikipedia.org/wiki/Act%20Global
Act Global is an artificial turf manufacturer based in Austin, Texas. Its primary production facilities are located in Calhoun, Georgia. The company is best known for its sports brands, Xtreme Turf and UBU. FIFA has certified the company as a "FIFA Quality Licensee" for football turf. History John Baize and Chris Clapham co-founded Act Global in 2004 to be a socially and environmentally responsible turf manufacturer. Since its inception, Act Global has expanded worldwide with multiple manufacturing centers. Baize served on the board of directors of the industry's Synthetic Turf Council for nine years, including as Chair, and was also the initial Chair of the global Synthetic Turf Council International. In 2009, FIFA certified Act Global as a "preferred producer" for football turf. In 2015, the company moved to its current U.S. production facility in Georgia. In 2016, Act Global acquired the UBU and Turfscape brands. In 2018, Act Global won the inaugural Synthetic Turf Council Philanthropy Award, recognizing its charitable contributions and social outreach to communities; it received the STC Philanthropy Award again in 2020. In 2020 and 2021, Act Global won the Synthetic Turf Council Sustainability Award, which recognizes the STC member organization that consistently, and through innovation, utilizes sustainable materials and processes. Products The company offers various kinds of artificial turf, depending upon the use. Sports turf: The Xtreme Turf and UBU lines are intended for sports use, with different offerings for various sports and uses including football, soccer, baseball, field hockey, rugby, lacrosse, marching bands, cheer and multi-purpose events. Aviation turf: AvTurf is a patented system used for airport ground cover to minimize foreign debris, replace natural habitat that attracts birds and wildlife near runway safety areas, and provide visual recognition and operational benefits. AvTurf is listed in the United States Federal Aviation Administration circular for airside applications. Landfill closure: LiteEarth is a patented system used in the closure of landfills and in land reclamation. Landscape turf: Turfscape is available for both residential and commercial spaces. This market sector focuses on maximizing land space and improving aesthetics and property values while saving water and lowering maintenance burdens. References External links Artificial turf Companies based in Austin, Texas
Act Global
Chemistry
460
53,134,676
https://en.wikipedia.org/wiki/NGC%20413
NGC 413 is a spiral galaxy of type SB(r)c located in the constellation Cetus. It was discovered in 1886 by Francis Leavenworth. It was described by Dreyer as "extremely faint, pretty small, very little extended." References External links 0413 Astronomical objects discovered in 1886 Cetus Barred spiral galaxies 004347
NGC 413
Astronomy
73
1,740,486
https://en.wikipedia.org/wiki/Operational%20data%20store
An operational data store (ODS) is used for operational reporting and as a source of data for the enterprise data warehouse (EDW). It is a complementary element to an EDW in a decision support environment, and is used for operational reporting, controls, and decision making, as opposed to the EDW, which is used for tactical and strategic decision support. An ODS is a database designed to integrate data from multiple sources for additional operations on the data, for reporting, controls and operational decision support. Unlike a production master data store, the data is not passed back to operational systems; it may be passed on for further operations and to the data warehouse for reporting. An ODS should not be confused with an enterprise data hub (EDH). An operational data store takes transactional data from one or more production systems and loosely integrates it; in some respects it is still subject-oriented, integrated and time-variant, but without the volatility constraints. This integration is mainly achieved through the use of EDW structures and content. An ODS is not an intrinsic part of an EDH solution, although an EDH may be used to subsume some of the processing performed by an ODS and the EDW. An EDH is a broker of data; an ODS is certainly not. Because the data originates from multiple sources, the integration often involves cleaning, resolving redundancy, and checking against business rules for integrity. An ODS is usually designed to contain low-level or atomic (indivisible) data (such as transactions and prices) with limited history that is captured "real time" or "near real time", as opposed to the much greater volumes of data stored in the data warehouse, generally on a less-frequent basis. General use The general purpose of an ODS is to integrate data from disparate source systems in a single structure, using data integration technologies like data virtualization, data federation, or extract, transform, and load (ETL). This allows operational access to the data for operational reporting and for master data or reference data management. An ODS is not a replacement or substitute for a data warehouse or a data hub, but in turn could become a source for them. See also Some examples of ODS architecture patterns can be found in the article Architecture patterns. Enterprise architecture Third normal form (3NF) Further reading External links Bill Inmon Information Management article on the five classes of ODS Data management Data warehousing Management cybernetics
Operational data store
Technology
504
21,057,023
https://en.wikipedia.org/wiki/TB9Cs3H2%20snoRNA
TB9Cs3H2 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecules that guide the sites of modification of uridines to pseudouridines in substrate RNAs. It is a small nucleolar RNA (snoRNA), so named because of its cellular localization in the nucleolus of the eukaryotic cell. TB9Cs3H2 is predicted to guide the pseudouridylation of LSU5 ribosomal RNA (rRNA) at residue Ψ1103. References Non-coding RNA
TB9Cs3H2 snoRNA
Chemistry
124
419,952
https://en.wikipedia.org/wiki/Calcium%20metabolism
Reabsorption

Intestine

Since about 15 mmol of calcium is excreted into the intestine via the bile per day, the total amount of calcium that reaches the duodenum and jejunum each day is about 40 mmol (25 mmol from the diet plus 15 mmol from the bile), of which, on average, 20 mmol is absorbed (back) into the blood. The net result is that about 5 mmol more calcium is absorbed from the gut than is excreted into it via the bile. If there is no active bone building (as in childhood), or increased need for calcium during pregnancy and lactation, the 5 mmol of calcium absorbed from the gut makes up for the urinary losses, which are only partially regulated.

Kidneys

The kidneys filter 250 mmol of calcium ions a day into the pro-urine (or glomerular filtrate) and reabsorb 245 mmol, leading to a net average loss in the urine of about 5 mmol/day. The quantity of calcium ions excreted in the urine per day is partially under the influence of the plasma parathyroid hormone (PTH) level: high levels of PTH decrease the rate of calcium ion excretion, and low levels increase it. However, parathyroid hormone has a greater effect on the quantity of phosphate ions (HPO₄²⁻) excreted in the urine. Phosphates form insoluble salts in combination with calcium ions. High concentrations of HPO₄²⁻ in the plasma therefore lower the ionized calcium level in the extracellular fluids. Thus, the excretion of more phosphate than calcium ions in the urine raises the plasma ionized calcium level, even though the total calcium concentration might be lowered. The kidney influences the plasma ionized calcium concentration in yet another manner: it processes vitamin D3 into calcitriol, the active form that is most effective in promoting the intestinal absorption of calcium. This conversion of vitamin D3 into calcitriol is also promoted by high plasma parathyroid hormone levels.

Excretion

Intestine

Most excretion of excess calcium is via the bile and feces, because the plasma calcitriol levels (which ultimately depend on the plasma calcium levels) regulate how much of the biliary calcium is reabsorbed from the intestinal contents.

Kidneys

Urinary excretion of calcium is normally about 5 mmol (200 mg) per day, less than the amount excreted via the feces (15 mmol/day).

Regulation

The plasma ionized calcium concentration is regulated within narrow limits (1.3–1.5 mmol/L). This is achieved by both the parafollicular cells of the thyroid gland and the parathyroid glands constantly sensing (i.e. measuring) the concentration of calcium ions in the blood flowing through them.

High plasma level

When the concentration of calcium rises, the parafollicular cells of the thyroid gland increase their secretion of calcitonin, a polypeptide hormone, into the blood. At the same time, the parathyroid glands reduce their secretion of parathyroid hormone (PTH), also a polypeptide hormone, into the blood. The resulting high levels of calcitonin in the blood stimulate osteoblasts in bone to remove calcium from the blood plasma and deposit it as bone. The reduced levels of PTH inhibit removal of calcium from the skeleton. The low levels of PTH have several other effects: there is increased loss of calcium in the urine but, more importantly, the loss of phosphate ions via the urine is inhibited. Phosphate ions will therefore be retained in the plasma, where they form insoluble salts with calcium ions, thereby removing them from the ionized calcium pool in the blood.
The low levels of PTH also inhibit the formation of calcitriol (not to be confused with calcitonin) from cholecalciferol (vitamin D3) by the kidneys. The reduction in the blood calcitriol concentration acts (comparatively slowly) on the epithelial cells (enterocytes) of the duodenum, inhibiting their ability to absorb calcium from the intestinal contents. The low calcitriol levels also act on bone, causing the osteoclasts to release fewer calcium ions into the blood plasma.

Low plasma level

When the plasma ionized calcium level is low or falls, the opposite happens. Calcitonin secretion is inhibited and PTH secretion is stimulated, resulting in calcium being removed from bone to rapidly correct the plasma calcium level. The high plasma PTH levels inhibit calcium loss via the urine while stimulating the excretion of phosphate ions via that route. They also stimulate the kidneys to manufacture calcitriol (a steroid hormone), which enhances the ability of the cells lining the gut to absorb calcium from the intestinal contents into the blood, by stimulating the production of calbindin in these cells. The PTH-stimulated production of calcitriol also causes calcium to be released from bone into the blood, through the release of RANKL (a cytokine, or local hormone) from the osteoblasts, which increases bone-resorptive activity by the osteoclasts. These are, however, relatively slow processes. Thus, fast short-term regulation of the plasma ionized calcium level primarily involves rapid movements of calcium into or out of the skeleton, while long-term regulation is achieved by regulating the amount of calcium absorbed from the gut or lost via the feces.

Disorders

Hypocalcemia (low blood calcium) and hypercalcemia (high blood calcium) are both serious medical disorders. Osteoporosis, osteomalacia and rickets are bone disorders linked to calcium metabolism disorders and the effects of vitamin D. Renal osteodystrophy is a consequence of chronic kidney failure related to calcium metabolism. A diet adequately rich in calcium may reduce calcium loss from bone with advancing (post-menopausal) age. A low dietary calcium intake may be a risk factor in the development of osteoporosis in later life, and a diet with sustained adequate amounts of calcium may reduce the risk of osteoporosis.

Research

The role that calcium might have in reducing the rates of colorectal cancer has been the subject of many studies. However, given its modest efficacy, there is no current medical recommendation to use calcium for cancer reduction.

See also European Calcium Society Footnotes References External links Calcium at Lab Tests Online Physiology Calcium Human homeostasis Endocrine system
Calcium metabolism
Biology
1,373
47,069,179
https://en.wikipedia.org/wiki/XLR50
The XLR50 was a pump-fed liquid-propellant rocket engine burning RP-1 and LOX in a gas generator cycle, developed by General Electric. It was used to power the first stage of the Vanguard rocket in Project Vanguard. As was common for engines based on the V-2 experience, the turbine was driven by steam generated by catalytic decomposition of hydrogen peroxide (H₂O₂), and the combustion chamber was regeneratively cooled. The engine was gimbaled to provide thrust vectoring, and the exhaust gases of the turbine were ducted to dual auxiliary nozzles that acted as verniers to enable roll control of the rocket. When the Vanguard rocket was selected as the first orbital launch vehicle for the US, the Martin Company got the contract as prime contractor. Given the required thrust levels, the Viking propulsion (the Reaction Motors XLR10) was deemed insufficient; instead, the General Electric proposal, based on the experience gained in the Hermes program, was considered more fitting and a less risky choice than Reaction Motors' next project. Thus, on October 1, 1955, Martin purchase order 55-3516-CP was signed with General Electric for the X-405 engine, furnishing a self-contained unit that was to include the thrust structure, gimbal ring, engine components, and engine starting equipment. While the first two Vanguard flights (TV-0 and TV-1) used Viking first stages, twelve X-405 engines were built and eleven flew on Vanguard rockets. References Further reading The Martin Company - Engineering Report No. 11022: The Vanguard Satellite Launching Vehicle - An Engineering Summary (April 1960) p. 40-51 Rocket engines using the gas-generator cycle Rocket engines using kerosene propellant
XLR50
Astronomy
353
22,695,696
https://en.wikipedia.org/wiki/%28Benzene%29chromium%20tricarbonyl
(Benzene)chromium tricarbonyl is an organometallic compound with the formula Cr(C₆H₆)(CO)₃. This yellow crystalline solid is soluble in common nonpolar organic solvents. The molecule adopts a geometry known as a "piano stool" because of the planar arrangement of the aryl group and the presence of three CO ligands as "legs" on the chromium-bond axis. Preparation (Benzene)tricarbonylchromium was first reported in 1957 by Fischer and Öfele, who prepared the compound by the carbonylation of bis(benzene)chromium. They obtained mainly chromium carbonyl (Cr(CO)₆) and traces of Cr(C₆H₆)(CO)₃. The synthesis was optimized through the reaction of Cr(CO)₆ and Cr(C₆H₆)₂. For commercial purposes, a reaction of Cr(CO)₆ and benzene is used: Cr(CO)₆ + C₆H₆ → Cr(C₆H₆)(CO)₃ + 3 CO Applications Complexes of the type (arene)Cr(CO)₃ have been well investigated as reagents in organic synthesis. The aromatic ring of (benzene)tricarbonylchromium is substantially more electrophilic than benzene itself, allowing it to undergo nucleophilic addition reactions. It is also more acidic, undergoing lithiation upon treatment with n-butyllithium. The resulting organolithium compound can then be used as a nucleophile in various reactions, for example with trimethylsilyl chloride. (Benzene)tricarbonylchromium is a useful catalyst for the hydrogenation of 1,3-dienes. The product alkene results from 1,4-addition of hydrogen. The complex does not hydrogenate isolated double bonds. References Half sandwich compounds Organochromium compounds Carbonyl complexes
(Benzene)chromium tricarbonyl
Chemistry
384
57,593,702
https://en.wikipedia.org/wiki/Swinholide
Swinholides are dimeric polyketides with a 42-carbon ring that exhibit a 2-fold axis of symmetry. Found mostly in the marine sponge Theonella, swinholides exhibit cytotoxic and antifungal activities via disruption of the actin skeleton. Swinholides were first described in 1985, and the structure and stereochemistry were updated in 1989 and 1990, respectively. Thirteen swinholides have been described in the literature, including close structural relatives such as the misakinolides/bistheonellides, ankaraholides, and hurgholide A. It is suspected that symbiotic microbes that inhabit the sponges, rather than the sponges themselves, produce swinholides, since the highest concentration of swinholides is found in the unicellular bacterial fraction of the sponges and not in the sponge fraction or the cyanobacterial fraction that also inhabits them. Swinholide A has also been reported in the literature from a marine field sample containing the cyanobacterium Symploca sp. The structural analogs of swinholides, the ankaraholides, were found from the cyanobacterium Geitlerinema sp. in the same experimental study. Since sponges host a range of bacteria, including symbiotic cyanobacteria, the question of which organism produces swinholides has long been open. A study of the production of misakinolide attributed it to the Theonella symbiont bacterium Candidatus Entotheonella via the discovery of a trans-acyltransferase (trans-AT) polyketide synthase (PKS) biosynthesis gene cluster, demonstrating that the true origin of swinholides is the symbiotic bacteria that inhabit the sponges.

History

Cyanobacteria are known to have a wide scope of application due to the structurally varied secondary metabolites they produce. Among these secondary metabolites, polyketides have demonstrated important bioactivities that can be applied in a number of fields; for example, many antifungal, antitumor, and antibiotic polyketides are found in plants, bacteria, and fungi. The synthesis of polyketides is well understood: small monomeric compounds are assembled into polyketides via elongation on multidomain PKS complexes. The PKSs can add a malonyl, acyl, or derivative unit to the chain, and they are classified into types 1–3 depending on factors such as functionality and domain architecture. Type I PKSs include cis-AT and trans-acyltransferase (trans-AT) PKSs: in cis-AT PKSs each section encodes a dedicated AT domain, whereas trans-AT PKSs have distinct, free-standing ATs that are used in place of the cis-encoded AT domains.

Structure

The structures of swinholide, misakinolide, and luminaolide are shown below (Figures 1 and 2).

Biosynthesis

The swinholide biosynthesis gene cluster (swi) was located on a single scaffold by BLASTp searches against misakinolide biosynthesis cluster genes, chosen because of the close structural resemblance of these compounds. The 85-kb swinholide biosynthesis gene cluster encodes five PKS proteins, SwiC through SwiG, including an AT enzyme, SwiG, which is a characteristic of trans-AT PKSs (Figure 3). The swinholide biosynthesis gene cluster codes for a trans-AT PKS and does not integrate AT domains, similar to the phormidolide (phm), misakinolide (mis), tolytoxin (tto), luminaolide (lum), and nosperin (nsp) gene clusters; it is also similar overall to the tto and lum gene clusters. The swi and mis clusters both include four large genes encoding PKS enzymes and a gene encoding the AT protein, but the order of the genes differs (Figure 4).
In the swi biosynthesis gene cluster, the first gene, SwiC, is on the reverse strand and the other four genes face the forward direction, whereas in the mis biosynthesis gene cluster all genes are oriented in the same direction. Despite this difference, the swi and mis biosynthesis gene clusters are composed of similar catalytic domains. Distinct characteristics of the swi biosynthesis enzymes are their domain order, nonelongating domains, and split modules, all common features of trans-AT PKSs. There are four nonelongating ketosynthases in the swi cluster that play no part in polyketide chain elongation: three function to bind modification enzymes, and the fourth is found in the terminal section of SwiF. There are only minor differences between swi and mis: two acyl carrier proteins (ACPs) in the middle of the SwiC protein, instead of the single ACP found in the MisC protein (Figure 4). In their monomeric structures, the swi and mis products have two differing ring structures. In SwiF, the second and third dehydratases (DHs) are located side by side (Figure 4). In mis, the same DH-like domains were identified, but the third DH is a pyran synthase (PS), which creates the dihydropyran ring in the structure of misakinolide. Further investigation revealed that the third DH in swi was also a PS (Figures 3 and 5). The other ring formation in mis was hypothesized to be catalyzed either by accessory enzymes or by a DH in MisC. MisC and SwiC code for similar but different DHs, and lack an overall PS domain for the dihydropyran ring formation. The DH domains from MisC and SwiC were found to lack the glycine of a specific motif, which could indicate that a variant DH domain plays a vital role in ring formation. Despite the structural differences between the swi and mis products, the sequence identities of the genes ranged from 73 to 85%. Scytophycin, tolytoxin, and luminaolide biosynthesis cluster genes also showed high sequence identity to swi and mis. Although there is high sequence identity, the SwiC and MisC proteins differ from those of the alternative gene clusters, as reflected in the chemical structures of their products (Figure 1).

Phylogenetic analysis

The origins of the structural variants of the swinholide biosynthesis gene clusters were elucidated through phylogenetic studies. A phylogenetic tree of trans-encoded AT proteins showed that all six biosynthesis gene clusters were similar and formed their own group. The scytophycin, luminaolide, and tolytoxin biosynthesis gene clusters grouped together based on ketosynthase domains, and the misakinolide and swinholide biosynthesis gene clusters constituted their own category (Figure 5). References Polyketides
Swinholide
Chemistry
1,508
11,901,885
https://en.wikipedia.org/wiki/No%20instruction%20set%20computing
No instruction set computing (NISC) is a computing architecture and compiler technology for designing highly efficient custom processors and hardware accelerators by allowing a compiler to have low-level control of hardware resources.

Overview

NISC is a statically scheduled horizontal nanocoded architecture (SSHNA). The term "statically scheduled" means that operation scheduling and hazard handling are done by a compiler. The term "horizontal nanocoded" means that NISC does not have any predefined instruction set or microcode; the compiler generates nanocodes which directly control the functional units, registers and multiplexers of a given datapath. Giving low-level control to the compiler enables better utilization of datapath resources, which ultimately results in better performance. The benefits of NISC technology are:

Simpler controller: no hardware scheduler, no instruction decoder
Better performance: more flexible architecture, better resource utilization
Easier to design: no need for designing instruction sets

The instruction set and controller of a processor are the most tedious and time-consuming parts to design. By eliminating these two, the design of custom processing elements becomes significantly easier. Furthermore, the datapath of NISC processors can even be generated automatically for a given application, so the designer's productivity is improved significantly. Since NISC datapaths are very efficient and can be generated automatically, NISC technology is comparable to high-level synthesis (HLS) or C-to-HDL synthesis approaches. In fact, one of the benefits of this architectural style is its capability to bridge these two technologies (custom processor design and HLS).

Zero instruction set computer

In computer science, zero instruction set computer (ZISC) refers to a computer architecture based solely on pattern matching and the absence of (micro-)instructions in the classical sense. These chips are known for being thought of as comparable to neural networks, being marketed for their number of "synapses" and "neurons". The acronym ZISC alludes to reduced instruction set computer (RISC). ZISC is a hardware implementation of Kohonen networks (artificial neural networks) allowing massively parallel processing of very simple data (0 or 1). This hardware implementation was invented by Guy Paillet and Pascal Tannhof (IBM), developed in cooperation with the IBM chip factory of Essonnes, in France, and was commercialized by IBM. The ZISC architecture alleviates the memory bottleneck by blending pattern memory with pattern learning and recognition logic. Its massively parallel computing solves pattern-recognition problems by allotting each "neuron" its own memory and allowing simultaneous problem-solving, the results of which are then settled by arbitration among the neurons.

Applications and controversy

According to TechCrunch, software emulations of these types of chips are currently used for image recognition by many large tech companies, such as Facebook and Google. When applied to other miscellaneous pattern-detection tasks, such as with text, results are said to be produced in microseconds, even with chips released in 2007. Junko Yoshida of the EE Times compared the NeuroMem chip with "The Machine", a machine capable of predicting crimes by scanning people's faces, from the television series Person of Interest, describing it as "the heart of big data" and "foreshadow[ing] a real-life escalation in the era of massive data collection".
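As a minimal sketch of the RBF-like recognition scheme described above, every "neuron" stores a prototype pattern and a category, and in silicon all neurons compare against the input simultaneously; the vector length, patterns, radii and distance metric below are illustrative assumptions, not the actual ZISC chip parameters:

    from dataclasses import dataclass

    @dataclass
    class Neuron:
        pattern: list    # stored prototype vector
        category: str
        radius: int      # influence field; beyond it the neuron stays silent

    def manhattan(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    def recognize(neurons, probe):
        # In hardware every neuron compares at once; the sequential minimum
        # below emulates the chip's winner-take-all settling.
        firing = [(manhattan(n.pattern, probe), n.category)
                  for n in neurons
                  if manhattan(n.pattern, probe) <= n.radius]
        return min(firing)[1] if firing else None   # None = "not recognized"

    net = [Neuron([0, 0, 255], "blue", 120), Neuron([255, 0, 0], "red", 120)]
    print(recognize(net, [250, 10, 5]))   # -> "red"

Learning in this scheme amounts to committing a new neuron with the example pattern and shrinking radii on misclassification, which is why capacity is quoted in "neurons" rather than instructions.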
History

In the past, microprocessor design technology evolved from complex instruction set computer (CISC) to reduced instruction set computer (RISC). In the early days of the computer industry, compiler technology did not exist and programming was done in assembly language. To make programming easier, computer architects created complex instructions which were direct representations of high-level functions of high-level programming languages. Another force that encouraged instruction complexity was the lack of large memory blocks. As compiler and memory technologies advanced, RISC architectures were introduced. RISC architectures need more instruction memory and require a compiler to translate high-level languages to RISC assembly code. Further advancement of compiler and memory technologies led to the emergence of very long instruction word (VLIW) processors, where the compiler controls the schedule of instructions and handles data hazards. NISC is a successor of VLIW processors. In NISC, the compiler has both horizontal and vertical control of the operations in the datapath, and therefore the hardware is much simpler. However, the control memory is larger than in previous generations. To address this issue, low-overhead compression techniques can be used.

See also
C to HDL
Content-addressable memory
Reduced instruction set computer
Complex instruction set computer
Explicitly parallel instruction computing
Minimal instruction set computer
Very long instruction word
One-instruction set computer
TrueNorth

References

Further reading

External links
US Patent for ZISC hardware, issued to IBM/G.Paillet on April 15, 1997
Image Processing Using RBF like Neural Networks: A ZISC-036 Based Fully Parallel Implementation Solving Real World and Real Complexity Industrial Problems by K. Madani, G. de Trémiolles, and P. Tannhof
From CISC to RISC to ZISC by S. Liebman on lsmarketing.com
Neural Networks on Silicon at aboutAI.net
French patent request: NISC for a purely applicative engine, with the sole operation of application (no lambda calculus, which is a particular case of quasi-applicative systems with two operations, application and abstraction; Curry 1958, p. 31)

Electronic design Central processing unit Instruction processing
No instruction set computing
Engineering
1,126
48,707,385
https://en.wikipedia.org/wiki/Proton%20tunneling
Proton tunneling is a type of quantum tunneling involving the instantaneous disappearance of a proton at one site and the appearance of the same proton at an adjacent site separated by a potential barrier. The two available sites are bounded by a double-well potential whose shape, width and height are determined by a set of boundary conditions. According to the WKB approximation, the probability for a particle to tunnel falls off exponentially with the width of the potential barrier and with the square root of the particle's mass; for a rectangular barrier of height V₀ and width w, the transmission probability of a particle of mass m and energy E is approximately T ≈ exp(−(2w/ħ)√(2m(V₀ − E))). Electron tunneling is well known. A proton is about 2000 times more massive than an electron, so it has a much lower probability of tunneling; nevertheless, proton tunneling still occurs, especially at low temperatures and high pressures, where the width of the potential barrier is decreased. Proton tunneling is usually associated with hydrogen bonds. In many molecules that contain hydrogen, the hydrogen atoms are linked to two non-hydrogen atoms via a hydrogen bond at one end and a covalent bond at the other. A hydrogen atom without its electron is reduced to a proton. Since the electron is no longer bound to the hydrogen atom in a hydrogen bond, this is equivalent to a proton resting in one of the wells of a double-well potential as described above. When proton tunneling occurs, the hydrogen bond and the covalent bond are switched. Once proton tunneling has occurred, the same proton has the same probability of tunneling back to its original site, provided the double-well potential is symmetrical. The base pairs of a DNA strand are connected by hydrogen bonds; in essence, the genetic code is contained in a unique arrangement of hydrogen bonds. It is believed that upon the replication of a DNA strand there is a probability for proton tunneling to occur which changes the hydrogen bond configuration; this leads to a slight alteration of the hereditary code, which is the basis of mutations. Likewise, proton tunneling is also believed to be involved in the dysfunction of cells (tumors and cancer) and in ageing. Proton tunneling occurs in many hydrogen-based molecular crystals such as ice. It is believed that the phase transition between the hexagonal (ice Ih) and orthorhombic (ice XI) phases of ice is enabled by proton tunneling. The occurrence of correlated proton tunneling in clusters of ice has also been reported recently. See also Quantum tunneling Hydrogen bond References Quantum mechanics Solid state engineering
Proton tunneling
Physics,Chemistry,Materials_science,Engineering
474
33,185,681
https://en.wikipedia.org/wiki/Centuriation
Centuriation (in Latin centuriatio or, more usually, limitatio), also known as the Roman grid, was a method of land measurement used by the Romans. In many cases land divisions based on the survey formed a field system, often referred to in modern times by the same name. According to O. A. W. Dilke, centuriation combined and developed features of land surveying present in Egypt, Etruria, Greek towns and the Greek countryside. Centuriation is characterised by the regular layout of a square grid traced using surveyors' instruments. It may appear in the form of roads, canals and agricultural plots. In some cases these plots, when formed, were allocated to Roman army veterans in a new colony, but they might also be returned to the indigenous inhabitants, as at Orange (France). The study of centuriation is very important for reconstructing landscape history in many former areas of the Roman empire.

History

The Romans began to use centuriation for the foundation, in the fourth century BCE, of new colonies in the ager Sabinus, northeast of Rome. The development of the geometric and operational characteristics that were to become standard came with the founding of the Roman colonies in the Po valley, starting with Ariminum (Rimini) in 268 BCE. The agrarian law introduced by Tiberius Gracchus in 133 BCE, which included the privatisation of the ager publicus, gave a great impetus to land division through centuriation. Centuriation was used later for land reclamation and the foundation of new colonies, as well as for the allocation of land to veterans of the many civil wars of the late Republic and early Empire, including the battle of Philippi in 42 BCE. This is mentioned by Virgil in his Eclogues, when he complains explicitly about the allocation of his lands near Mantua to the soldiers who had participated in that battle. Centuriation was widely used throughout Italy and also in some provinces. For example, careful analysis has identified, in the area between Rome and Salerno, 80 different centuriation systems created at different times.

System and procedure

Various land division systems were used, but the most common was known as the ager centuriatus system. The surveyor first identified a central viewpoint, the umbilicus. He then took up his position there and, looking towards the west, defined the territory with the following names: ultra, the land he saw in front of him; citra, the land behind him; dextera, the land to his right; sinistra, the land to his left. He then traced the grid using an instrument known as a groma, laying out two road axes perpendicular to each other: the first, generally oriented east–west, was called the decumanus maximus and was traced by taking as reference the place where the sun rose, in order to establish exactly where east was; the second, with a north–south orientation, was called the cardo maximus.

Measurement instruments

Groma
Chorobates for levels
Dioptra for levels and angles of slopes

Orientation

It has been suggested that the Roman centuriation system inspired Thomas Jefferson's proposal to create a grid of townships for survey purposes, which ultimately led to the United States Public Land Survey System. The similarity of the two systems is empirically obvious in certain parts of Italy, for example, where traces of centuriation have remained. However, Thrower points out that, unlike the later US system, "not all Roman centuriation displays consistent orientation".
This is because, for practical reasons, the orientation of the axes did not always coincide with the four cardinal points and followed instead the orographic features of the area, also taking into account the slope of the land and the flow of rainwater along the drainage channels that were traced (as in the centuriation of Florentia (Florence)). In other cases, it was based on the orientation of existing lines of communication (as in the centuriation along the Via Emilia) or other geomorphological features. Centuriation is typical of flat land, but centuriation systems have also been documented in hilly country.

Centuriation of the surrounding territory

Sometimes the umbilicus agri was located in a city or a castrum. This central point was generally referred to as the groma, from the name of the instrument used by the gromatici (surveyors). In such cases, the grid was traced by extending the urban cardo maximus and decumanus maximus through the gates of the city into the surrounding agricultural land. Parallel secondary roads (limites quintarii) were then traced on both sides of the initial axes at intervals of 100 actus (about 3.5 km). The territory was thus divided into square areas. The road network density was then increased with other roads parallel to those already traced, at a distance from each other of 20 actus (710.40 m). Each of the square areas of 20 × 20 actus resulting from this further division was called a centuria, or century. This 200-jugera centuria became prevalent in the period when the large areas of the Po Valley were delimited, while smaller centuries of 10 × 10 actus, as the name centuria suggests, had formerly been used. Contemporary Roman sources as well as modern archaeological results suggest that centuriae varied in size from 50 to 400 jugera, with some subdivisions using non-square plots. The land was divided after the completion of the roads. Each century was divided into 10 strips in each direction, parallel to the cardo and the decumanus, with a distance between them of 2 actus (71.04 m), thus forming 100 squares (heredia) of about 0.5 hectares each: 100 heredia = 1 centuria. Each heredium was divided in half along the north–south axis, thus creating two jugera; one jugerum, from jugum (yoke), measured 2,523 square metres, the amount of land that could be ploughed in one day by a pair of oxen.

Regions where centuriation was used

Even today, in some parts of Italy, the landscape of the plain is determined by the outcome of Roman centuriation, with the persistence of straight elements (roads, drainage canals, property divisions) which have survived territorial development and are often basic elements of urbanisation, at least until the twentieth century, when the human pressure of urban growth and infrastructure destroyed many of the traces scattered throughout the agricultural countryside.

Significant examples of centuriation in Italy

Cesena, and in particular the country to the north-east and north-west of the city;
Central Romagna;
Padua, eastern area of the province; in this area of Venetia, the geometrical layout of the landscape is known as the Graticolato Romano;
Ager Campanus (Acerra, Capua, Nola, Atella);
Florence (Florentia), first century CE, in the plain to the west, towards Prato and beyond.
Province of Bergamo: There are still several easily identifiable traces, from the low plain almost to the foot of the hills, for example the straight road of about ten kilometres between Spirano and Stezzano, through Comun Nuovo; there are also traces of agricultural centuriation identifiable in the street network of Treviglio.

Traces of centuriation in Gallia Narbonensis (southern France)

Béziers
Valence
Orange (Orange B)

Traces of centuriation in Hispania Tarraconensis

Tarragona
Empúries
Girona
Barcelona
Cerdanya
Isona (Pallars Jussà)
Guissona
Lleida
els Prats de Rei (the ancient Roman Segarra)
la Seu d'Urgell or Castellciutat (probable)
Bages (probable)
Castell-rosselló (probable)

Traces of centuriation in Britannia (present-day southern and central Britain)

Ripe, Sussex (probable)
Great Wymondley, Hertfordshire (Roman field system identified, possibly part of a cadastre)
Worthing, Sussex (probable)

Traces of centuriation in Dacia (present-day south-western Romania)

Sarmisegetuza (including the pagi Micia and Aquae)
Apulum (probable)

See also
Ancient Roman units of measurement
Ancient Roman architecture, Roman roads
Drainage and centuriation in the Po Valley and Po delta
Ager Romanus
Aerial archaeology

References

Bibliography

In English:
Oswald A. W. Dilke, The Roman Land Surveyors, 1992 (1971)
Norman Joseph William Thrower, Maps & Civilization: Cartography in Culture and Society, The University of Chicago Press, Chicago, 1972

In Italian:
Umberto Laffi, Studi di storia romana e di diritto, 2001
Giacinto Libertini, Persistenza di luoghi e toponimi nelle terre delle antiche città di Atella e Acerrae, 1999

In French:
A. Piganiol, « Les documents annexes du cadastre d'Orange », CRAI, 1954, 98–3, pp. 302–310 (read online)

In German:

Further reading

In English:

In Catalan and Spanish:
L'Avenç. Revista d'Història, no. 167, February 1993. Dossier: "Els cadastres en època romana. Història i recerca", pp. 18–57.
E. Ariño – J. M. Gurt – J. M. Palet, El pasado presente. Arqueología de los paisajes en la Hispania romana, Universidad de Salamanca – Universitat de Barcelona, Salamanca – Barcelona, 2004.

In French:
A. Caillemer, R. Chevalier, « Les centuriations de l'Africa vetus », Annales, 1954, 9–4, pp. 433–460 (read online)
André Chastagnol, « Les cadastres de la colonie romaine d'Orange », Annales, 1965, 20–1, pp. 152–159 (read online)
Col., « Fouilles d'un limes du cadastre B d'Orange à Camaret (Vaucluse) », DHA, 17–2, 1991, p. 224 (read online)
Gérard Chouquer, François Favory, Les Paysages de l'Antiquité. Terres et cadastres de l'occident romain, Errance, Paris, 1991, 243 p.
Gérard Chouquer, « Un débat méthodologique sur les centuriations », DHA, 1993, 19–2, pp. 360–363 (read online)
Claire Marchand, « Des centuriations plus belles que jamais ? Proposition d'un modèle dynamique d'organisation des formes », Études Rurales, 167–168, 2003, 3–4, pp. 93–113 (read online)
L.R. Decramer, R. Elhaj, R. Hilton, A. Plas, « Approches géométrique des centuriations romaines. Les nouvelles bornes du Bled Segui », Histoire et Mesure, XVII, 1/2, 2002, pp. 109–162 (read online)
Gérard Chouquer, « Les transformations récentes de la centuriation. Une autre lecture de l'arpentage romain », Annales, 2008–4, pp. 847–874.

Ancient Roman architecture Ancient Roman geography Historical geography Urban planning Ancient Roman city planning
Centuriation
Engineering
2,421
17,555,258
https://en.wikipedia.org/wiki/Bakery%20mix
Bakery mix is an add-water-only pre-mixed baking product consisting of flour, dry milk, shortening, salt, and baking powder (a leavening agent). A bakery mix can be used to make a wide variety of baked goods, from pizza dough to dumplings to pretzels. The typical flavor profile of bakery mix differs from that of pancake mix. Bakery mixes do not require refrigeration. History Chris Rutt and Charles Underwood of the Pearl Milling Company developed Aunt Jemima, the first "ready mix." The baking mix was designed so that people could just add water to create a batter for pancakes. Carl Smith, a sales executive at General Mills, got the idea of selling a pre-mixed blend of flour, salt, baking powder, and lard for making biscuits from a chef on a train in 1930. After Smith pitched the idea of a biscuit mix, the head chemist of General Mills, Charlie Kress, created Bisquick, which entered the market in 1931. In the 1940s, Bisquick began using the slogan "a world of baking in a box," and printed recipes for other baked goods such as dumplings, muffins, and coffee cake. In 1933, the Pittsburgh molasses company P. Duff and Sons patented the first cake mix after blending dehydrated molasses with dehydrated flour, sugar, eggs, and other ingredients. P. Duff and Sons created the cake mix to move surplus molasses, requiring 100 pounds of molasses for every 100 pounds of wheat flour. After World War II, flour companies such as General Mills and Pillsbury began selling cake mixes due to the surplus of flour. By the 1950s, there were hundreds of cake mix companies. In 1948, Pillsbury introduced the first chocolate cake mix. Use of eggs The Duff company patented a baking mixture requiring fresh eggs in 1935, writing in the patent application, "The housewife and the purchasing public in general seem to prefer fresh eggs and hence the use of dried or powdered eggs is somewhat of a handicap from a psychological standpoint." Other companies continued to use powdered eggs in their bakery mixes, and it was not until sales flattened between 1956 and 1960 that major food companies revised their formulas to incorporate fresh eggs. Ernest Dichter, an analyst for General Mills, interviewed women who used the cake mixes and reported that the simplicity of the mixes made women feel too self-indulgent because there was not enough work involved. While some say that the difference made by using fresh eggs was purely psychological, others argued that the inclusion of fresh eggs simply created better products: cake mixes made with dried eggs frequently tasted of egg, stuck to the pan, and had poorer texture. References Food ingredients
Bakery mix
Technology
569
8,048,664
https://en.wikipedia.org/wiki/Link%20Access%20Procedure%20for%20Frame%20Relay
In wide area network computing, the Link Access Procedure for Frame Relay (LAPF) is the part of the network's communications protocol which ensures that frames are error-free and delivered in the right sequence. LAPF is formally defined in the International Telecommunication Union standard Q.922. It was derived from IBM's Synchronous Data Link Control (SDLC) protocol, the layer-2 protocol of IBM's Systems Network Architecture, developed around 1975. The ITU used SDLC as a basis to develop LAPF for the Frame Relay environment, along with other equivalents: LAPB for the X.25 protocol stack, LAPM for the V.42 protocol, and LAPD for the ISDN protocol stack. In Frame Relay, Local Management Interface (LMI) messages are carried in a variant of LAPF frames. LAPF corresponds to the data link layer of the OSI model. External links Cisco Frame Relay documentation Link access protocols Link protocols Frame Relay
Link Access Procedure for Frame Relay
Technology
191
4,330,927
https://en.wikipedia.org/wiki/Dark%20Horse%20%28astronomy%29
The Dark Horse Nebula or Great Dark Horse (sometimes called the Prancing Horse) is a large dark nebula that, from Earth's perspective, obscures part of the upper central bulge of the Milky Way. The Dark Horse lies in the equatorial constellation Ophiuchus (the Serpent Bearer), near its borders with the more famous constellations Scorpius and Sagittarius. It is a large, visible feature of the Milky Way's Great Rift, uniting several individually catalogued dark nebulae, including the Pipe Nebula. It is visible from Earth only on clear moonless nights without light pollution and with low humidity. Name This region of dark nebulae is called the Dark Horse because it resembles the silhouette of a horse seen from the side and appears dark against the background glow of stars and star clouds. It is also called "Great" because it is one of the largest (in apparent size) groups of dark nebulae in the sky. Nearby nebulae The rear of the Great Dark Horse (its rump and hind legs) is also known as the Pipe Nebula, which itself carries the designations B77, B78, and B59. (The 'B' numbers reference entries in the Barnard Catalogue of dark nebulae.) The Snake Nebula (B72) is by comparison a small S-shaped nebula emerging from the west side of the northern part of the bowl of the Pipe (B77). Barnard 68 is another named dark patch of molecular gas and dust appearing in the Dark Horse Nebula. See also Coalsack Nebula Great Rift (astronomy) References Dark Horse Nebula Ophiuchus
Dark Horse (astronomy)
Astronomy
332
287,801
https://en.wikipedia.org/wiki/Hypermedia
Hypermedia, an extension of hypertext, is a nonlinear medium of information that includes graphics, audio, video, plain text and hyperlinks. This designation contrasts with the broader term multimedia, which may include non-interactive linear presentations as well as hypermedia. The term was first used in a 1965 article written by Ted Nelson. Hypermedia is a type of multimedia that features interactive elements, such as hypertext, buttons, or interactive images and videos, allowing users to navigate and engage with content in a non-linear manner. The World Wide Web is a classic example of hypermedia used to access web content, whereas a conventional cinema presentation is an example of standard multimedia, due to its inherent linearity and lack of interactivity via hyperlinks. The first hypermedia work was, arguably, the Aspen Movie Map. Bill Atkinson's HyperCard popularized hypermedia writing, while a variety of literary hypertext and non-fiction hypertext works demonstrated the promise of hyperlinks. Most modern hypermedia is delivered via electronic pages from a variety of systems, including media players, web browsers, and stand-alone applications (i.e., software that does not require network access). Audio hypermedia is emerging with voice command devices and voice browsing.

Development tools

Hypermedia may be developed in a number of ways. Any programming tool can be used to write programs that link data from internal variables and nodes to external data files. Multimedia development software such as Adobe Flash, Adobe Director, Macromedia Authorware, and MatchWare Mediator may be used to create stand-alone hypermedia applications, with an emphasis on entertainment content. Some database software, such as Visual FoxPro and FileMaker Developer, may be used to develop stand-alone hypermedia applications, with an emphasis on educational and business content management. Hypermedia applications may be developed on embedded devices for the mobile and digital signage industries using the Scalable Vector Graphics (SVG) specification from the W3C (World Wide Web Consortium). Software applications, such as Ikivo Animator and Inkscape, simplify the development of hypermedia content based on SVG. Embedded devices such as the iPhone natively support SVG specifications and may be used to create mobile and distributed hypermedia applications. Hyperlinks may also be added to data files using most business software via the limited scripting and hyperlinking features built in. Documentation software, such as the Microsoft Office Suite and LibreOffice, allows for hypertext links to other content within the same file, to other external files, and to files on external file servers via URL links. For more emphasis on graphics and page layout, hyperlinks may be added using most modern desktop publishing tools. This includes presentation programs such as Microsoft PowerPoint and LibreOffice Impress, add-ons to print layout programs such as Quark Immedia, and tools to include hyperlinks in PDF documents, such as Adobe InDesign for creating them and Adobe Acrobat for editing them. Hyper Publish is a tool specifically designed and optimized for hypermedia and hypertext management. Any HTML editor may be used to build HTML files, accessible by any web browser. CD/DVD authoring tools, such as DVD Studio Pro, may be used to hyperlink the content of DVDs for DVD players, or to add web links when the disc is played on a personal computer connected to the internet.

Learning

There have been a number of theories concerning hypermedia and learning.
One important claim in the literature on hypermedia and learning is that it offers the reader or student more control over the instructional environment. Another claim is that it levels the playing field among students of varying abilities and enhances collaborative learning. A claim from psychology is that hypermedia more closely models the structure of the brain than printed text does. Application programming interfaces Hypermedia is used as a medium and constraint in certain application programming interfaces. HATEOAS, Hypermedia as the Engine of Application State, is a constraint of the REST application architecture in which a client interacts with the server entirely through hypermedia provided dynamically by application servers. This means that, in theory, no API documentation is needed, because the client needs no prior knowledge of how to interact with any particular application or server beyond a generic understanding of hypermedia. In other service-oriented architectures (SOA), clients and servers interact through a fixed interface shared through documentation or an interface description language (IDL). See also Cybertext Electronic literature Hyperland, a 1990 documentary film that focuses on Douglas Adams and explains adaptive hypertext and hypermedia. Metamedia References Further reading External links Hypertext
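To make the HATEOAS constraint concrete, the sketch below shows a client that discovers what it can do next purely from link relations embedded in each response, rather than from hard-coded URLs. This is an illustrative sketch only: the response format, the resource names, and the follow helper are hypothetical, not the API of any real service.

    # Minimal HATEOAS sketch: the client knows only the entry point and how to
    # follow links generically; every next step is discovered from responses.
    # All resources and link relations here are hypothetical examples.
    RESOURCES = {
        "/orders/42": {
            "state": "unpaid",
            "_links": {
                "self": {"href": "/orders/42"},
                "payment": {"href": "/orders/42/payment"},  # advertised only while unpaid
            },
        },
        "/orders/42/payment": {
            "amount_due": 19.95,
            "_links": {"order": {"href": "/orders/42"}},
        },
    }

    def get(href):
        """Stand-in for an HTTP GET; a real client would fetch JSON over the network."""
        return RESOURCES[href]

    def follow(resource, rel):
        """Follow a named link relation if the server currently advertises it."""
        link = resource["_links"].get(rel)
        return get(link["href"]) if link else None

    order = get("/orders/42")           # the entry point is the only hard-coded URL
    payment = follow(order, "payment")  # available actions come from the response itself
    if payment:
        print("Amount due:", payment["amount_due"])

Note that if the server stopped advertising the "payment" link (say, once the order is paid), this client would degrade gracefully without any code change; that is the point of the constraint.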
Hypermedia
Technology
934
53,267,005
https://en.wikipedia.org/wiki/Design%20for%20verification
Design for verification (DfV) is a set of engineering guidelines to aid designers in ensuring right-first-time manufacturing and assembly of large-scale components. The guidelines were developed as a tool to inform and direct designers during early-stage design phases in trading off estimated measurement uncertainty against tolerance, cost, assembly, measurability and product requirements. Background Increased competition in the aerospace market has placed additional demands on aerospace manufacturers to reduce costs, increase product flexibility and improve manufacturing efficiency. There is a knowledge gap in digital-to-physical dimensional verification, and in how to successfully achieve dimensional specifications within real-world assembly factories that are subject to varying environmental conditions. The DfV framework is an engineering principle intended for low-rate, high-value, high-complexity manufacturing industries, to aid in achieving high productivity in assembly via the effective dimensional verification of large-volume structures during final assembly. The DfV framework has been developed to enable engineers to design and plan the effective dimensional verification of large-volume, complex structures in order to reduce failure rates and end-product costs, improve process integrity and efficiency, optimise metrology processes, decrease tooling redundancy and increase product quality and conformance to specification. The theoretical elements of the DfV methods were published in 2016, together with their testing using industrial case studies of representative complexity. The industrial tests published on ScienceDirect showed that using the new design for verification methods alongside the traditional 'design for X' toolbox achieved improved tolerance analysis and synthesis, optimised large-volume metrology and assembly processes, and more cost-effective tool and jig design. See also Design for assembly Design for inspection Design for manufacturability Design for X References Quality control
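The sources above do not reproduce the framework's actual decision rules, but the central trade-off it names (measurement uncertainty versus tolerance) is commonly handled with guard-banded conformance zones in the style of ISO 14253-1: the acceptance interval is the tolerance interval shrunk by the expanded measurement uncertainty. A minimal Python sketch of that idea, with purely illustrative numbers:

    def acceptance_interval(lower_tol, upper_tol, expanded_uncertainty):
        """Shrink the tolerance interval by the expanded uncertainty U (guard banding).

        Returns (low, high), or None if U consumes the whole tolerance band,
        i.e. conformance can no longer be proven with this measurement process.
        """
        low = lower_tol + expanded_uncertainty
        high = upper_tol - expanded_uncertainty
        return (low, high) if low < high else None

    # Illustrative values: a 100.0 mm nominal feature with +/-0.5 mm tolerance,
    # measured with an expanded uncertainty of 0.1 mm (all numbers hypothetical).
    zone = acceptance_interval(99.5, 100.5, 0.1)
    print(zone)  # (99.6, 100.4): measured values in this range prove conformance

The same arithmetic run at the design stage shows why uncertainty must be traded off early: a metrology process whose uncertainty is a large fraction of the tolerance leaves almost no provable acceptance zone, however capable the manufacturing process is.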
Design for verification
Engineering
351
60,105,855
https://en.wikipedia.org/wiki/Phosphine%20imide
In chemistry, a phosphine imide (sometimes abbreviated to phosphinimide), also known as an iminophosphorane, is a functional group with the formula R3P=NR. While structurally related to phosphine oxides, its chemistry has more in common with that of phosphonium ylides. Anions of this group, with the structure R3P=N−, are called phosphinoimidates and are used as ligands to form phosphinimide complexes, which are highly active catalysts in some olefin polymerization reactions. Synthesis Phosphine imides can be isolated as intermediates in the Staudinger reaction and have also been prepared by the action of hydroxylamine-O-sulfonic acid on phosphines, proceeding via a P-aminophosphonium salt. Reactions and applications The functional group readily hydrolyses to give a phosphine oxide and an amine: R3P=NR' + H2O → R3P=O + R'NH2 Phosphinimide ligands of the general formula NPR3− form transition metal phosphinimide complexes. Some of these complexes are potential catalysts for the synthesis of polyethylene. See also Aminophosphine Phosphazene References Functional groups
Phosphine imide
Chemistry
274
65,894,995
https://en.wikipedia.org/wiki/Eunuchs%3A%20India%27s%20Third%20Gender
Eunuchs: India's Third Gender is a 1991 ethnographic film documenting the lives of two castrated men, Kiran and Dinesh, who share their experiences after undergoing castration. The film covers a variety of topics, including gender, abuse of power, sexuality, homophobia, discrimination, and cultural anthropology. Synopsis The ethnographic film uses a linear narrative covering three main themes: love, history, and exclusion within Indian society, all presented in a non-chronological order. Creating and maintaining an acceptable relationship with a eunuch is difficult, but Kiran and Dinesh love each other very much. Their love is so strong that once Dinesh leaves for work as a driver, Kiran cannot function without him. She does not eat without Dinesh and is only happy when they are together. Although eunuchs may live freely in India, they are subject to widespread discrimination. Additionally, romantic relationships among eunuchs are considered taboo. Kiran has gone through castration to live fully as a eunuch. Eunuch castration is a highly symbolic act, and the surgical removal of male genitalia is prominent in the eunuch gender and community. Castration is usually performed after living in the eunuch community for many years. Although the community creates a sense of belonging, members are subject to marginalization and discrimination brought upon them by the rest of society. However, many do not go through castration, feminizing surgery, hormone medication, growing their hair, donning female attire, or other aspects of living as a eunuch. Kiran lives in Kathiawar, a place where eunuchs are not welcome. Harish, an aspiring eunuch, frequents Kiran's home and has been taken under her wing. Harish had kept his desire to be a eunuch a secret from his wife and children for fear of ruining his relationship with them. His family was not open to eunuchs, and so he was worried that they would not accept his wishes. Eight months earlier, Harish's wife had left him and their children to go live with her parents. When she came back, Harish had almost gone through the process of castration. Harish's wife is at a crossroads. She is aware that he will never stop living as a eunuch, but the negative social stigma around gender is something she cannot fully accept. In the state of Rajasthan, a eunuch community exists, overseen by Sharada Bai, the guru and leader of over one hundred eunuchs. She lives in a mansion with eight disciples and holds the power to appoint one hundred other eunuchs from neighbouring territories. Eunuchs possess strong family ties, and being a disciple in Sharada Bai's family means she becomes a parent. The mansion in which the eunuchs live is steeped in a rich history and cannot be sold or destroyed. Looking after the mansion is a sign of honour to the past and to the history of a eunuch's purpose. The tradition of greeting the guru in the morning by bowing and touching her feet is a sign of respect, alongside castration, which is a sign of loyalty. Sharada Bai is said to have a palliative effect on her family members, who look to the guru for guidance and hope. Most cities in modern India are not accepting of eunuchs, mostly due to cultural and religious prejudices. In Bombay, the guru Regamath lives with fourteen eunuch disciples. Bombay is more expensive than Rajasthan, and the eunuchs' only source of income is prostitution. Every evening the eunuchs head to the red-light district to sell themselves.
Along with prostitution, eunuchs also engage in begging and clapping to intimidate the public into giving them money. They also lift their frilly garments to show their genitalia as another form of intimidation. This behaviour and occupation deepens the resentment and discrimination between modern eunuchs and modern Indian society. Production Eunuchs: India's Third Gender was made with assistant producers Surinder Puri and Aruna Har Parsed; Parsed also narrates the film. BBC Elstree Centre, in the United Kingdom, is the production company behind the film. Michael Yorke, an anthropologist, directed and originated the concept for the documentary film. Background Director Michael Yorke had long been fascinated by Indian culture. In 1962, he spent time hitchhiking in India, where he experienced the society, culture, and people close up. Yorke's main goal in all of his ethnographies is for the audience to explore the "other". The success of Eunuchs: India's Third Gender derived from Yorke's ongoing fascination and excitement, which are evident in the ethnography. According to Yorke, his eunuch subjects were intelligent and analytical; whenever he visited India for his fieldwork, they were fascinating, welcoming, and informative. A western observer like Yorke is always treated kindly by the eunuch community, which played a large part in successfully creating the film. A film review by the anthropologist Pauline Kolenda discussed Yorke's film along with Jareena: Portrait of a Hijda. Both films depict the eunuch, or hijra, community in South Asia. Together, Eunuchs: India's Third Gender and Jareena: Portrait of a Hijda broadened discussion of sexuality and gender. Release The film was released in 1991 and was televised on the BBC Network. It was later released on DVD and can be found on various university resource engines and in digital archives. Thirty years after the original 1991 debut, Yorke held a screening of the film at Lamaakan's open theatre. Reception Eunuchs: India's Third Gender was Michael Yorke's most significant success. However, public reception of the film was mixed. See also Hijra India Ethnography Castration References External links Michael Yorke's 2005 film Impact of Covid-19 on Hijras Eunuchs Gender Sexuality in India
Eunuchs: India's Third Gender
Biology
1,246
63,465,176
https://en.wikipedia.org/wiki/NGC%20991
NGC 991 is an intermediate spiral galaxy in the constellation Cetus. This galaxy was discovered by the astronomer William Herschel in 1785. One supernova has been observed in NGC 991: SN 1984L (type Ib, mag. 14) was discovered by Robert Evans on 28 August 1984. See also List of NGC objects (1–1000) References External links Intermediate spiral galaxies Cetus 0991 009846
NGC 991
Astronomy
86
43,038,076
https://en.wikipedia.org/wiki/Ledol
Ledol is a poisonous sesquiterpene that can cause cramps, paralysis, and delirium. Caucasian peasants used Rhododendron plants for these effects in shamanistic rituals. Sources Ledol is found in labrador tea, an herbal tea (not a true tea) made from three closely related species: Rhododendron tomentosum – Northern Labrador tea, previously Ledum palustre Rhododendron groenlandicum – Bog Labrador tea, previously Ledum groenlandicum or Ledum latifolium Rhododendron columbianum – Western Labrador tea, or trapper's tea, previously Ledum glandulosum Ledol is also found in the essential oil of priprioca at a concentration of around 4%, and in varying concentrations in a number of other plants. References Entheogens Deliriants Plant toxins Sesquiterpenes Cyclopropanes
Ledol
Chemistry
200
6,207,163
https://en.wikipedia.org/wiki/Seismic%20magnitude%20scales
Seismic magnitude scales are used to describe the overall strength or "size" of an earthquake. These are distinguished from seismic intensity scales that categorize the intensity or severity of ground shaking (quaking) caused by an earthquake at a given location. Magnitudes are usually determined from measurements of an earthquake's seismic waves as recorded on a seismogram. Magnitude scales vary based on which aspect of the seismic waves is measured and how it is measured. Different magnitude scales are necessary because of differences in earthquakes, the information available, and the purposes for which the magnitudes are used. Earthquake magnitude and ground-shaking intensity The Earth's crust is stressed by tectonic forces. When this stress becomes great enough to rupture the crust, or to overcome the friction that prevents one block of crust from slipping past another, energy is released, some of it in the form of various kinds of seismic waves that cause ground-shaking, or quaking. Magnitude is an estimate of the relative "size" or strength of an earthquake, and thus its potential for causing ground-shaking. It is "approximately related to the released seismic energy". Intensity refers to the strength or force of shaking at a given location, and can be related to the peak ground velocity. With an isoseismal map of the observed intensities, an earthquake's magnitude can be estimated from both the maximum intensity observed (usually but not always near the epicenter) and the extent of the area where the earthquake was felt. The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source, while sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was nearly 100 km from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area. Magnitude scales An earthquake radiates energy in the form of different kinds of seismic waves, whose characteristics reflect the nature of both the rupture and the earth's crust the waves travel through. Determination of an earthquake's magnitude generally involves identifying specific kinds of these waves on a seismogram, and then measuring one or more characteristics of a wave, such as its timing, orientation, amplitude, frequency, or duration. Additional adjustments are made for distance, kind of crust, and the characteristics of the seismograph that recorded the seismogram. The various magnitude scales represent different ways of deriving magnitude from such information as is available. All magnitude scales retain the logarithmic scale as devised by Charles Richter, and are adjusted so the mid-range approximately correlates with the original "Richter" scale. Most magnitude scales are based on measurements of only part of an earthquake's seismic wave-train, and therefore are incomplete. This results in systematic underestimation of magnitude in certain cases, a condition called saturation.
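Because all of these scales are logarithmic (one magnitude unit corresponds to a tenfold increase in wave amplitude and roughly a 10^1.5 ≈ 32-fold increase in radiated energy, as elaborated in the next section), magnitude differences are best read as multiplicative factors. A small Python sketch of that conversion, using only the standard relations stated in this article:

    # Convert a magnitude difference into amplitude and energy ratios,
    # using the standard logarithmic relations described in this article.
    def amplitude_ratio(delta_m):
        return 10 ** delta_m          # 10x amplitude per magnitude unit

    def energy_ratio(delta_m):
        return 10 ** (1.5 * delta_m)  # ~32x energy per magnitude unit

    # Example: an M 7.0 event versus an M 5.0 event (two units apart).
    print(amplitude_ratio(2.0))  # 100.0  -> 100 times the wave amplitude
    print(energy_ratio(2.0))     # 1000.0 -> about 1,000 times the energy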
Since 2005 the International Association of Seismology and Physics of the Earth's Interior (IASPEI) has standardized the measurement procedures and equations for the principal magnitude scales. "Richter" magnitude scale The first scale for measuring earthquake magnitudes, developed in 1935 by Charles F. Richter and popularly known as the "Richter" scale, is actually the local magnitude scale, labeled ML. Richter established two features now common to all magnitude scales. First, the scale is logarithmic, so that each unit represents a ten-fold increase in the amplitude of the seismic waves. As the energy of a wave is proportional to A^1.5, where A denotes the amplitude, each unit of magnitude represents a 10^1.5 ≈ 32-fold increase in the seismic energy (strength) of an earthquake. Second, Richter arbitrarily defined the zero point of the scale to be where an earthquake at a distance of 100 km makes a maximum horizontal displacement of 0.001 mm (1 μm, or 0.00004 in.) on a seismogram recorded with a Wood-Anderson torsion seismograph. Subsequent magnitude scales are calibrated to be approximately in accord with the original "Richter" (local) scale around magnitude 6. All "local" (ML) magnitudes are based on the maximum amplitude of the ground shaking, without distinguishing the different seismic waves. They underestimate the strength: of distant earthquakes (over ~600 km) because of attenuation of the S waves, of deep earthquakes because the surface waves are smaller, and of strong earthquakes (over M ~7) because they do not take into account the duration of shaking. The original "Richter" scale, developed in the geological context of Southern California and Nevada, was later found to be inaccurate for earthquakes in the central and eastern parts of the North American continent (everywhere east of the Rocky Mountains) because of differences in the continental crust. All these problems prompted the development of other scales. Most seismological authorities, such as the United States Geological Survey, report earthquake magnitudes above 4.0 as moment magnitude (below), which the press describes as "Richter magnitude". Other "local" magnitude scales Richter's original "local" scale has been adapted for other localities. These may be labelled "ML" or, with a lowercase "l", "Ml". (Not to be confused with the Russian surface-wave MLH scale.) Whether the values are comparable depends on whether the local conditions have been adequately determined and the formula suitably adjusted. Japan Meteorological Agency magnitude scale In Japan, for shallow (depth < 60 km) earthquakes within 600 km, the Japan Meteorological Agency calculates a magnitude labeled MJMA or MJ. (These should not be confused with the moment magnitudes JMA calculates, which are labeled Mw(JMA) or M(JMA), nor with the Shindo intensity scale.) JMA magnitudes are based (as is typical with local scales) on the maximum amplitude of the ground motion; they agree "rather well" with the seismic moment magnitude in the range of 4.5 to 7.5, but underestimate larger magnitudes. Body-wave magnitude scales Body waves consist of P waves that are the first to arrive (see seismogram), or S waves, or reflections of either. Body waves travel through rock directly. mB scale The original "body-wave magnitude" – mB (uppercase "B") – was developed by Gutenberg and Richter to overcome the distance and magnitude limitations of the ML scale inherent in its use of surface waves.
The mB scale is based on the P and S waves, measured over a longer period, and does not saturate until around M 8. However, it is not sensitive to events smaller than about M 5.5. Use of mB as originally defined has been largely abandoned, now replaced by the standardized broadband mB(BB) scale. mb scale The mb scale (lowercase "m" and "b") is similar to mB, but uses only P waves measured in the first few seconds on a specific model of short-period seismograph. It was introduced in the 1960s with the establishment of the World-Wide Standardized Seismograph Network (WWSSN); the short period improves detection of smaller events, and better discriminates between tectonic earthquakes and underground nuclear explosions. Measurement of mb has changed several times. As originally defined, mb was based on the maximum amplitude of waves in the first 10 seconds or more. However, the length of the period influences the magnitude obtained. Early USGS/NEIC practice was to measure mb on the first second (just the first few P waves), but since 1978 they measure the first twenty seconds. The modern practice is to measure the short-period mb scale at less than three seconds, while the broadband scale is measured at periods of up to 30 seconds. mbLg scale The regional mbLg scale – also denoted mb_Lg, mbLg, MLg (USGS), Mn, and mN – was developed by Nuttli for a problem the original ML scale could not handle: all of North America east of the Rocky Mountains. The ML scale was developed in southern California, which lies on blocks of oceanic crust, typically basalt or sedimentary rock, which have been accreted to the continent. East of the Rockies the continent is a craton, a thick and largely stable mass of continental crust that is largely granite, a harder rock with different seismic characteristics. In this area the ML scale gives anomalous results for earthquakes that by other measures seemed equivalent to quakes in California. Nuttli resolved this by measuring the amplitude of short-period (~1 second) Lg waves, a complex form of the Love wave which, although a surface wave, he found provided a result more closely related to body-wave magnitudes than to the surface-wave scale. Lg waves attenuate quickly along any oceanic path, but propagate well through the granitic continental crust, and mbLg is often used in areas of stable continental crust; it is especially useful for detecting underground nuclear explosions. Surface-wave magnitude scales Surface waves propagate along the Earth's surface, and are principally either Rayleigh waves or Love waves. For shallow earthquakes the surface waves carry most of the energy of the earthquake, and are the most destructive. Deeper earthquakes, having less interaction with the surface, produce weaker surface waves. The surface-wave magnitude scale, variously denoted Ms or MS, is based on a procedure developed by Beno Gutenberg in 1942 for measuring shallow earthquakes stronger or more distant than Richter's original scale could handle. Notably, it measured the amplitude of surface waves (which generally produce the largest amplitudes) at a period of "about 20 seconds". The Ms scale approximately agrees with the local scale at ~6, then diverges by as much as half a magnitude. A later revision, sometimes labeled MSn, measures only waves of the first second. A modification – the "Moscow-Prague formula" – was proposed in 1962, and recommended by the IASPEI in 1967; this is the basis of the standardized Ms20 scale (Ms_20, Ms(20)).
A "broad-band" variant (Ms_BB, Ms(BB)) measures the largest velocity amplitude in the Rayleigh-wave train for periods up to 60 seconds. The MS7 scale used in China is a variant of Ms calibrated for use with the Chinese-made "type 763" long-period seismograph. The MLH scale used in some parts of Russia is actually a surface-wave magnitude. Moment magnitude and energy magnitude scales Other magnitude scales are based on aspects of seismic waves that only indirectly and incompletely reflect the force of an earthquake, involve other factors, and are generally limited in some respect of magnitude, focal depth, or distance. The moment magnitude scale – Mw or Mw – developed by seismologists Thomas C. Hanks and Hiroo Kanamori, is based on an earthquake's seismic moment, M0, a measure of how much work an earthquake does in sliding one patch of rock past another patch of rock. Seismic moment is measured in newton-meters (N m or N⋅m) in the SI, or dyne-centimeters (dyn⋅cm; ) in the older CGS system. In the simplest case the moment can be calculated knowing only the amount of slip, the area of the surface ruptured or slipped, and a factor for the resistance or friction encountered. These factors can be estimated for an existing fault to determine the magnitude of past earthquakes, or what might be anticipated for the future. An earthquake's seismic moment can be estimated in various ways, which are the bases of the Mwb, Mwr, Mwc, Mww, Mwp, Mi, and Mwpd scales, all subtypes of the generic Mw scale. See for details. Seismic moment is considered the most objective measure of an earthquake's "size" in regard of total energy. However, it is based on a simple model of rupture, and on certain simplifying assumptions; it does not account for the fact that the proportion of energy radiated as seismic waves varies among earthquakes. Much of an earthquake's total energy as measured by is dissipated as friction (resulting in heating of the crust). An earthquake's potential to cause strong ground shaking depends on the comparatively small fraction of energy radiated as seismic waves, and is better measured on the energy magnitude scale, Me. The proportion of total energy radiated as seismic waves varies greatly depending on focal mechanism and tectonic environment; and for very similar earthquakes can differ by as much as 1.4 units. Despite the usefulness of the scale, it is not generally used due to difficulties in estimating the radiated seismic energy. Two earthquakes differing greatly in the damage done In 1997 there were two large earthquakes off the coast of Chile. The magnitude of the first, in July, was estimated at , but was barely felt, and only in three places. In October a quake in nearly the same location, but twice as deep and on a different kind of fault, was felt over a broad area, injured over 300 people, and destroyed or seriously damaged over 10,000 houses. As can be seen in the table below, this disparity of damage done is not reflected in either the moment magnitude () nor the surface-wave magnitude (). Only when the magnitude is measured on the basis of the body-wave () or the seismic energy () is there a difference comparable to the difference in damage. Rearranged and adapted from Table 1 in . Seen also in . 
Energy class (K-class) scale K (from the Russian word класс, 'class', in the sense of a category) is a measure of earthquake magnitude in the energy class or K-class system, developed in 1955 by Soviet seismologists in the remote Garm (Tajikistan) region of Central Asia; in revised form it is still used for local and regional quakes in many states formerly aligned with the Soviet Union (including Cuba). Based on seismic energy (K = log E_S, with the energy E_S in joules), difficulty in implementing it using the technology of the time led to revisions in 1958 and 1960. Adaptation to local conditions has led to various regional K scales, such as KF and KS. K values are logarithmic, similar to Richter-style magnitudes, but have a different scaling and zero point. K values in the range of 12 to 15 correspond approximately to M 4.5 to 6. M(K) or MK indicates a magnitude M calculated from an energy class K. Tsunami magnitude scales Earthquakes that generate tsunamis generally rupture relatively slowly, delivering more energy at longer periods (lower frequencies) than generally used for measuring magnitudes. Any skew in the spectral distribution can result in larger, or smaller, tsunamis than expected for a nominal magnitude. The tsunami magnitude scale, Mt, is based on a correlation by Katsuyuki Abe of earthquake seismic moment (M0) with the amplitude of tsunami waves as measured by tidal gauges. Originally intended for estimating the magnitude of historic earthquakes where seismic data is lacking but tidal data exist, the correlation can be reversed to predict tidal height from earthquake magnitude. (Not to be confused with the height of a tidal wave, or run-up, which is an intensity effect controlled by local topography.) Under low-noise conditions, tsunami waves as little as 5 cm can be predicted, corresponding to an earthquake of M ~6.5. Another scale of particular importance for tsunami warnings is the mantle magnitude scale, Mm. This is based on Rayleigh waves that penetrate into the Earth's mantle, and can be determined quickly, and without complete knowledge of other parameters such as the earthquake's depth. Duration and coda magnitude scales Md designates various scales that estimate magnitude from the duration or length of some part of the seismic wave-train. This is especially useful for measuring local or regional earthquakes, both powerful earthquakes that might drive the seismometer off-scale (a problem with the analog instruments formerly used), preventing measurement of the maximum wave amplitude, and weak earthquakes, whose maximum amplitude is not accurately measured. Even for distant earthquakes, measuring the duration of the shaking (as well as the amplitude) provides a better measure of the earthquake's total energy. Measurement of duration is incorporated in some modern scales. Mc scales usually measure the duration or amplitude of a part of the seismic wave, the coda. For short distances (less than ~100 km) these can provide a quick estimate of magnitude before the quake's exact location is known. Macroseismic magnitude scales Magnitude scales generally are based on instrumental measurement of some aspect of the seismic wave as recorded on a seismogram. Where such records do not exist, magnitudes can be estimated from reports of macroseismic effects such as those described by intensity scales.
One approach for doing this (developed by Beno Gutenberg and Charles Richter in 1942) relates the maximum intensity observed (presumably this is over the epicenter), denoted I0 (capital I with a subscripted zero), to the magnitude. It has been recommended that magnitudes calculated on this basis be labeled Mw(I0), but they are sometimes given the more generic label Mms. Another approach is to make an isoseismal map showing the area over which a given level of intensity was felt. The size of the "felt area" can also be related to the magnitude. While the recommended label for magnitudes derived in this way is M0(An), the more commonly seen label is Mfa. A variant, MLa, adapted to California and Hawaii, derives the local magnitude (ML) from the size of the area affected by a given intensity. MI (the upper-case letter "I", distinguished from the lower-case letter in Mi) has been used for moment magnitudes estimated from isoseismal intensities. Peak ground velocity (PGV) and peak ground acceleration (PGA) are measures of the force that causes destructive ground shaking. In Japan, a network of strong-motion accelerometers provides PGA data that permits site-specific correlation with different magnitude earthquakes. This correlation can be inverted to estimate the ground shaking at that site due to an earthquake of a given magnitude at a given distance. From this a map showing areas of likely damage can be prepared within minutes of an actual earthquake. Other magnitude scales Many earthquake magnitude scales have been developed or proposed, with some never gaining broad acceptance and remaining only as obscure references in historical catalogs of earthquakes. Other scales have been used without a definite name, often referred to as "the method of Smith (1965)" (or similar language), with the authors often revising their method. On top of this, seismological networks vary in how they measure seismograms. Where the details of how a magnitude has been determined are unknown, catalogs will specify the scale as "unknown" (variously Unk, Ukn, or UK). In such cases, the magnitude is considered generic and approximate. An Mh ("magnitude determined by hand") label has been used where the magnitude is too small or the data too poor (typically from analog equipment) to determine a local magnitude, or where multiple shocks or cultural noise complicate the records. The Southern California Seismic Network uses this "magnitude" where the data fail its quality criteria. A special case is the Seismicity of the Earth catalog of Gutenberg and Richter. Hailed as a milestone as a comprehensive global catalog of earthquakes with uniformly calculated magnitudes, the authors never published the full details of how they determined those magnitudes. Consequently, while some catalogs identify these magnitudes as MGR, others use UK (meaning "computational method unknown"). Subsequent study found many of the values to be "considerably overestimated". Further study has found that most of the magnitudes are essentially surface-wave magnitudes for large shocks shallower than 40 km, but body-wave magnitudes for large shocks at depths of 40–60 km. Gutenberg and Richter also used an italic, non-bold M without subscript (also used as a generic magnitude, and not to be confused with the bold, non-italic M used for moment magnitude) and a "unified magnitude" m. While these terms (with various adjustments) were used in scientific articles into the 1970s, they are now only of historical interest.
An ordinary (non-italic, non-bold) capital "M" without subscript is often used to refer to magnitude generically, where an exact value or the specific scale used is not important. See also Magnitude of completeness Epicentral distance External links Perspective: a graphical comparison of earthquake energy release – Pacific Tsunami Warning Center USGS ShakeMap Providing near-real-time maps of ground motion and shaking intensity following significant earthquakes. Seismology measurement Seismology Earthquake engineering
Seismic magnitude scales
Engineering
4,461
29,176,975
https://en.wikipedia.org/wiki/Fontana%20Modern%20Masters
The Fontana Modern Masters was a series of pocket guides on writers, philosophers, and other thinkers and theorists who shaped the intellectual landscape of the twentieth century. The first five titles were published on 12 January 1970 by Fontana Books, the paperback imprint of William Collins & Co, and the series editor was Frank Kermode, then Professor of Modern English Literature at University College London. The books were very popular with students, who "bought them by the handful", according to Kermode, and they were instantly recognisable by their eye-catching covers, which featured brightly coloured abstract art and sans-serif typography. Art as book covers The Fontana Modern Masters occupy a unique place in publishing history – not for their contents but their covers, which draw on the following developments in twentieth-century art and literature: Twentieth-century geometric abstraction, colour-field painting and hard-edge painting. Op Art, and in particular the work of Victor Vasarely. The English beatnik Brion Gysin's cut-up technique as popularized by William Burroughs. The cover concept was the brainchild of Fontana's art director John Constable, who had been experimenting with a cover treatment based on cut-ups of The Mud Bath, a key work of British geometric abstraction by the painter David Bomberg. However, a visit to the Grabowski Gallery in London introduced Constable to the work of Oliver Bevan, a 1964 graduate of the Royal College of Art, whose optical and geometric paintings were influenced by Vasarely's Op Art. On seeing Bevan's work, Constable commissioned him to create the covers for the first ten Fontana Modern Masters, which Bevan painted as rectilinear arrangements of tessellating blocks. Each cover was thus a piece of abstract art, but as an incentive for readers to buy all ten books the covers could be arranged to create a larger, composite artwork. The "set of ten" books appeared in 1970–71 but overran to eleven when Joyce was published with the same cover as Guevara: Camus by Conor Cruise O'Brien, 1970 Chomsky by John Lyons, 1970 Fanon by David Caute, 1970 Guevara by Andrew Sinclair, 1970 Lévi-Strauss by Edmund Leach, 1970 Lukács by George Lichtheim, 1970 Marcuse by Alasdair MacIntyre, 1970 McLuhan by Jonathan Miller, 1971 Orwell by Raymond Williams, 1971 Wittgenstein by David Pears, 1971 Joyce by John Gross, 1971 A second "set of ten" featuring a new Bevan cut-up was published in 1971–73, but the inclusion of Joyce in the first "set of ten" left this second set one book short: Freud by Richard Wollheim, 1971 Reich by Charles Rycroft, 1971 Yeats by Denis Donoghue, 1971 Gandhi by George Woodcock, 1972 Lenin by Robert Conquest, 1972 Mailer by Richard Poirier, 1972 Russell by A J Ayer, 1972 Jung by Anthony Storr, 1973 Lawrence by Frank Kermode, 1973 A third "set of ten" featuring Bevan's kinetic Pyramid painting began to appear in 1973–74, but Constable left before the set was complete and his replacement, Mike Dempsey, scrapped the set-of-ten incentive after eight books: Beckett by A Alvarez, 1973 Einstein by Jeremy Bernstein, 1973 Laing by Edgar Z.
Friedenberg, 1973 Popper by Bryan Magee, 1973 Kafka by Erich Heller, 1974 Le Corbusier by Stephen Gardiner, 1974 Proust by Roger Shattuck, 1974 Weber by Donald G MacRae, 1974 Dempsey switched the covers to a white background and commissioned a new artist, James Lowe, whose cover art for the next eight books in 1975–76 was based on triangles: Eliot by Stephen Spender, 1975 Marx by David McLellan, 1975 Pound by Donald Davie, 1975 Sartre by Arthur C Danto, 1975 Artaud by Martin Esslin, 1976 Keynes by D. E. Moggridge, 1976 Saussure by Jonathan Culler, 1976 Schoenberg by Charles Rosen, 1976 Nine more books appeared in 1977–79 with cover art by James Lowe based on squares: Engels by David McLellan, 1977 Gramsci by James Joll, 1977 Durkheim by Anthony Giddens, 1978 Heidegger by George Steiner, 1978 Nietzsche by J P Stern, 1978 Trotsky by Irving Howe, 1978 Klein by Hanna Segal, 1979 Pavlov by Jeffrey A Gray, 1979 Piaget by Margaret A Boden, 1979 Dempsey left Fontana Books in 1979 but continued to oversee the Modern Masters series until a new art director, Patrick Mortimer, was appointed in 1980. Four more books followed under Mortimer with cover art by James Lowe based on circles: Evans-Pritchard by Mary Douglas, 1980 Darwin by Wilma George, 1982 Barthes by Jonathan Culler, 1983 Adorno by Martin Jay, 1984 The cover concept was dropped after this and a new design was used that featured a portrait of the Modern Master as a line drawing or, later, a tinted photograph, and mixed serif and sans-serif typefaces, upright and italic fonts, block capitals, lowercase letters and faux handwriting. The design was used for reprints and six new titles: Foucault by J. G. Merquior, 1985 Derrida by Christopher Norris, 1987 Winnicott by Adam Phillips, 1988 Lacan by Malcolm Bowie, 1991 Arendt by David Watson, 1992 Berlin by John Gray, 1995 Book covers as art Fontana's use of art as book covers went full circle in 2003–05 when the British conceptual artist Jamie Shovlin "reproduced" the covers of the forty-eight Fontana Modern Masters from Camus to Barthes as a series of flawed paintings (the titles are missing and the colours have run) in watercolour and ink on paper, each measuring 28 x 19 cm. However, Shovlin also noticed ten forthcoming titles listed on the books' front endpapers which, for reasons unknown, had not been published: Dostoyevsky by Harold Rosenberg Fuller by Allan Temko Jakobson by Thomas A Sebeok Kipling by Lionel Trilling Mann by Lionel Trilling Merleau-Ponty by Hubert Dreyfus Needham by George Steiner Sherrington by Jonathan Miller Steinberg by John Hollander Winnicott by Masud Khan (this was published with a different author, as listed in the previous section) Shovlin then set out to paint these "lost" titles and thus "complete" the series. To do this he devised a "Fontana Colour Chart" based on the covers of the published books, and a scoring system that – like his paintings – was deliberately flawed. Given these flaws, and those in Fontana's original series, the absence of any modern masters from the visual arts is notable, since Matisse was one of four "forthcoming titles" that Shovlin had apparently overlooked: Benjamin by Samuel Weber Erikson by Robert Lifton Ho by David Halberstam Matisse by David Sylvester Benjamin and Matisse have since been included in a new series of seventeen large Fontana Modern Masters that Shovlin painted in 2011–12. These use a similar scoring system to his watercolours of 2003–05 and a new "Acrylic Variations Colour Wheel".
The paintings are acrylic on canvas and each measures 210 x 130 cm: Arendt by David Watson (Variation 1) Benjamin by Samuel Weber (Variation 3) Berlin by John Gray (Variation 1) Derrida by Christopher Norris (Variation 3) Dostoyevsky by Harold Rosenberg (Variation 1) Foucault by J. G. Merquior (Variation 1B) Fuller by Allan Temko (Variation 3) Jakobson by Krystyna Pomorska (Variation 2) Kipling by Lionel Trilling (Variation 2) Lacan by Malcolm Bowie (Variation 1) Mann by Lionel Trilling (Variation 1A) Matisse by David Sylvester (Variation 1A) Merleau-Ponty by H. P. Dreyfus (Variation 1) Needham by George Steiner (Variation 3A) Sherrington by Jonathan Miller (Variation 3) Steinberg by John Hollander (Variation 3B) Winnicott by Adam Phillips (Variation 3) See also Foucault - one of the books in the series Past Masters References External links Fontana Modern Masters or books, art, and books as art: a cover story Book series Book arts Book design Typography British non-fiction books Book publishing companies of the United Kingdom
Fontana Modern Masters
Engineering
1,748
227,214
https://en.wikipedia.org/wiki/Nekton
Nekton or necton (from the Greek nēkton, 'the swimming') is any aquatic organism that can actively and persistently propel itself through a water column (i.e. swimming) without touching the bottom. Nekton generally have powerful tails and appendages (e.g. fins, pleopods or flippers) or use jet propulsion, making them strong enough swimmers to counter ocean currents, and have mechanisms for sufficient lift and/or buoyancy to prevent sinking. Examples of extant nekton include most fish (especially pelagic fish like tuna and sharks), marine mammals (cetaceans, sirenians and pinnipeds) and reptiles (specifically sea turtles), penguins, coleoid cephalopods (squids and cuttlefish) and several species of decapod crustaceans (specifically prawns, shrimps and krill). The term was proposed by the German biologist Ernst Haeckel to differentiate between the active swimmers in a body of water and the plankton that are passively carried along by the current. As a guideline, nektonic organisms have a high Reynolds number (greater than 1000) and planktonic organisms a low one (less than 10). Some organisms begin their life cycle as planktonic eggs and larvae, and transition to nektonic juveniles and adults later on in life, sometimes making distinction difficult when attempting to classify certain plankton-to-nekton species as one or the other. For this reason, some biologists avoid using this term. History The term was first proposed and used by the German biologist Ernst Haeckel in 1891 in his article Plankton-Studien, where he contrasted it with plankton, the aggregate of passively floating, drifting, or somewhat motile organisms present in a body of water, primarily tiny algae and bacteria, small eggs and larvae of marine organisms, and protozoa and other minute consumers. Today it is sometimes considered an obsolete term because it often does not allow for a meaningful, quantifiable distinction between these two groups. The colonization of the water column was extremely important for the evolution of marine animals. During the Devonian Nekton Revolution (DNR) of the Paleozoic era, in the period well known as the 'age of fishes', nekton expanded greatly; by the Carboniferous period, more than eighty-five percent of nekton were widespread. Some biologists no longer use the term. Definition As a guideline, nekton are larger and tend to swim largely at biologically high Reynolds numbers (>10^3 and up beyond 10^9), where inertial flows are the rule, and eddies (vortices) are easily shed. Plankton, on the other hand, are small and, if they swim at all, do so at biologically low Reynolds numbers (0.001 to 10), where the viscous behavior of water dominates, and reversible flows are the rule. Organisms such as jellyfish and others are considered plankton when they are very small and swim at low Reynolds numbers, and considered nekton as they grow large enough to swim at high Reynolds numbers. Many animals considered classic examples of nekton (e.g., Mola mola, squid, marlin) start out life as tiny members of the plankton and then, it was argued, gradually transition to nekton as they grow. Oceanic nekton Oceanic nekton comprises aquatic animals largely from three clades: Vertebrates (phylum Chordata), particularly pelagic fish, cetaceans and sea turtles, form the largest contribution; these animals have endoskeletons made of bone and cartilage and propel themselves via a powerful tail and fan- or paddle-shaped appendages such as fins, flippers or webbed feet.
Cephalopods (phylum Mollusca), specifically decapodiform coleoids such as squids and cuttlefish, are pelagic nekton that swim using a combination of jet propulsion and fins. Octopodiform coleoids such as octopuses can also swim quite robustly, but they are mostly benthic ambush predators that use their arms to crawl around. Crustaceans (phylum Arthropoda), especially dendrobranchiates and eucarids such as prawns, shrimps and krill, can swim actively using specialized legs known as pleopods (a.k.a. swimmerets) and a "tail fan" formed by the telson and uropods. Benthic decapods such as lobsters and crayfish, though they normally move around by walking, can also temporarily swim fast backwards as an escape response. Some crab species can also swim in open water, using the last pair of legs (pereiopods) for paddling. There are organisms whose initial life stage is identified as planktonic, but which gradually become more nektonic as they grow and increase in body size. A typical example is the medusa of the jellyfish, which can actively propel itself (though generally not strongly enough to overcome strong currents). See also Neuston (organisms, including microscopic, living at the surface of the water) Plankton (organisms, including microscopic, floating and drifting within water) Benthos (organisms, including microscopic, living at the bottom of a body of water) References External links Stefan Nehring and Ute Albrecht (1997): "Hell und das redundante Benthon: Neologismen in der deutschsprachigen Limnologie". In: Lauterbornia H. 31: 17–30, Dinkelscherben, December 1997 E-Text (PDF file) Aquatic ecology Aquatic organisms Oceanographical terminology
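The Reynolds-number guideline above is easy to evaluate for concrete cases. A minimal Python sketch, assuming textbook values for seawater (density of roughly 1025 kg/m³, dynamic viscosity of roughly 1.0e-3 Pa·s) and illustrative body lengths and speeds:

    def reynolds_number(speed_m_s, length_m, density=1025.0, viscosity=1.0e-3):
        """Re = rho * v * L / mu for a swimmer of length L moving at speed v."""
        return density * speed_m_s * length_m / viscosity

    # A ~1 mm larva drifting at ~1 mm/s sits near Re ~1 (planktonic regime),
    # while a ~2 m tuna cruising at ~10 m/s is at Re ~2e7 (nektonic regime).
    print(reynolds_number(0.001, 0.001))  # ~1.0
    print(reynolds_number(10.0, 2.0))     # ~2.05e7

The numbers make the article's point directly: the two swimmers differ by seven orders of magnitude in Reynolds number, placing them in physically distinct flow regimes even though both "swim".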
Nekton
Biology
1,176
1,966,049
https://en.wikipedia.org/wiki/Gauge%20%28firearms%29
The gauge (in American English) or bore (more commonly in British English) of a firearm is a unit of measurement used to express the inner diameter (bore diameter) of a smoothbore barrel, together with the other parameters needed to define such a barrel in general (compare caliber, which defines a barrel with rifling and its cartridge). The gauge of a shotgun is specified as a list that includes all the data necessary to define a functional barrel: for example, the dimensions of the chamber, the shotgun bore dimension, and the valid proof load and commercial ammunition, as defined globally by the C.I.P.; defined in Great Britain by the Rules, regulations and scales applicable to the proof of small arms (2006) of The London Proof House and The Birmingham Proof House, as referenced in the Gun Barrel Proof Act 1978, Paragraph 6; and defined in the United States by SAAMI Z299.2 – 2019. Historical development The concept of using a material property to define a bore diameter was in use before the term gauge, at the end of the 16th century. The term gauge in connection with firearms was first used in the book A Light to the Art of Gunnery (1677). Gauge was determined from the weight of a solid sphere of lead that will fit the bore of the firearm, and is expressed as the multiplicative inverse of the sphere's weight as a fraction of a pound; e.g., a one-twelfth pound lead ball fits a 12-gauge bore. Therefore, with a 12-gauge, it would take 12 lead balls of the same size as the shotgun's inner bore diameter to weigh one pound. The term is related to the measurement of cannons, which were also measured by the weight of their iron round shot; an eight-pounder would fire an 8 lb ball. It follows that the smaller the gauge number, the larger the bore: a 12 gauge is larger than a 16 gauge. Due to problems defining a pound, and in obtaining pure lead, the Gun Barrel Proof Act 1855 defined a gauge as a list of defined values. Gauge is commonly used today in reference to shotguns, though historically it was first used in muzzle-loading long guns such as muskets, then later in breech-loading long guns including single-shot and double rifles, which were made in sizes up to 2 bore during their heyday in the mid to late 19th century, being originally loaded as black powder cartridges. These very large and heavy rifles, called "elephant guns", were intended for use primarily in regions of Africa and Asia for hunting large dangerous game animals. Gauge is commonly abbreviated as "ga.", "ga", or "G". Bore sizing Since shotguns were not originally intended to fire solid projectiles, but rather a compressible mass of shot, the actual diameter of the bore can vary. The fact that most shotgun bores are not cylindrical also causes deviations from the ideal bore diameter. The chamber of the gun is larger, to accommodate the thickness of the shotshell walls, and a "forcing cone" in front of the chamber reduces the diameter down to the bore diameter. The forcing cone can be as short as a fraction of an inch, or as long as a few inches on some firearms. At the muzzle end of the barrel, the choke can constrict the bore even further, so measuring the bore diameter of a shotgun is not a simple process, as it must be done away from either end. Shotgun bores are commonly "overbored" or "backbored", meaning that most of the bore (from the forcing cone to the choke) is slightly larger than the value given by the formula. This is claimed to reduce felt recoil and improve patterning.
The recoil reduction is due to the larger bore producing a slower acceleration of the shot, and the patterning improvements are due to the larger muzzle diameter for the same choke constriction, which results in less shot deformation. A 12-gauge shotgun, nominally 18.5 mm (0.729 in), can range from a tight bore to an extreme overbore. Some also claim an increased velocity with overbored barrels, due to the larger swept volume of the barrel. Once only found in expensive custom shotguns, overbored barrels are now becoming common in mass-marketed guns. Aftermarket backboring is also commonly done to reduce the weight of the barrel and move the center of mass backward for a better balance. Factory overbored barrels generally are made with a larger outside diameter, and will not have this reduction in weight, though the factory barrels will be tougher, since they have a normal barrel wall thickness. Firing slugs from overbored barrels can result in very inconsistent accuracy, as the slug may be incapable of obturating to fill the oversized bore. Gauges in use The six most common shotgun gauges, in descending order of size, are the 10 gauge, 12 gauge, 16 gauge, 20 gauge, 28 gauge, and .410 bore. By far the most popular is the 12 gauge, particularly in the United States. The 20-gauge shotgun is the next most popular size, being favored by shooters uncomfortable with the weight of a 12-gauge gun, and is popular for upland game hunting. The next most popular sizes are the .410 bore and the 28 gauge. The least popular sizes are the 10 gauge and the 16 gauge; while far less common than the other four gauges, they are still commercially available. Shotguns and shells exceeding 10 gauge, such as the 8 gauge, 6 gauge, 4 gauge, and 2 gauge, are historically important in the United Kingdom and elsewhere in mainland Europe. Today, they are rarely manufactured. These shells are usually black powder paper or brass cartridges, as opposed to modern smokeless powder plastic or wax cartridges. The 18, 15, 11, 6, 3, and 2 gauge shells are the rarest of all; owners of these types of rare shotguns will usually have their ammunition custom loaded by a specialist in rare and custom bores. The 14 gauge has not been loaded in the United States since the early 20th century, although the hull is still made in France. The very small 24 and 32 gauges are still produced and used in some European and South American countries. Punt guns, which use very large shells, are rarely encountered. Also seen in limited numbers are smoothbore firearms in calibers smaller than .360, such as .22 Long Rifle (UK No. 1 bore) and 9mm Flobert rimfire (UK No. 3 bore), designed for short-range pest control and garden guns. The No. 2 bore (7 mm) has long been obsolete. All three of these rimfires are available in shot and BB-cap. Gauge and shot type The 10 gauge narrowly escaped obsolescence when steel and other nontoxic shot became required for waterfowl hunting, since the larger shell could hold the much larger sizes of low-density steel shot needed to reach the ranges necessary for waterfowl hunting. The move to steel shot reduced the use of 16 and 20 gauges for waterfowl hunting, as well as of the shorter 12-gauge shells. However, the longer 12-gauge shells, with their higher SAAMI pressure rating compared to standard 12-gauge shells, began to approach the performance of 10-gauge shells.
Newer nontoxic shots, such as bismuth or tungsten-nickel-iron alloys, and even tungsten-polymer blends, regain much or all of the performance loss, but are much more expensive than steel or lead shot. However, laboratory research indicates that tungsten alloys can actually be quite toxic internally. Bore sizes used in the United Kingdom The bore sizes in use in the United Kingdom, each defined with a corresponding case length, are: 2 bore, 4 bore, 6 bore, 8 bore, 10 bore, 12 bore, 14 bore, 16 bore, 20 bore, 24 bore, 28 bore, 32 bore, .410 bore, .360 bore, 9 mm (No. 3 bore) short and long rimfire, 7 mm (No. 2 bore) rimfire, and 6 mm (No. 1 bore) short and long rimfire. Conversion guide The conversion table (not reproduced here) lists the various gauge sizes with their ball weights, with the bores found only in punt guns and obsolete or rare weapons marked as such. However, 4 gauge was sometimes found used in blunderbuss guns made for coach defense and protection against piracy. The .410 and 23 mm are exceptions; they are actual bore sizes, not gauges. If the .410 bore and 23 mm diameters were measured using more traditional means, they would be equivalent to 67.62 gauge (.410 bore) and 6.278 gauge (23 mm), respectively. Note: use of such a table for estimating bullet masses for historical large-bore rifles is limited, as it assumes the use of a round ball rather than conical bullets; for example, a typical 4-bore rifle from circa 1880 used a conical bullet, or sometimes a slightly heavier one, rather than a round lead ball. (Round balls lose velocity faster than conical bullets and have much steeper ballistic trajectories at long range.) In contrast, a 4-bore express rifle often used a bullet wrapped in paper to keep lead buildup in the barrel to a minimum. In either case, assuming a mass for a 4-bore rifle bullet from such a table would be inaccurate, although indicative. References Shotgun cartridges Units of length
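The historical definition (the bore just fits a lead sphere weighing 1/n pound) can be turned directly into a bore-diameter formula. A minimal Python sketch, assuming a lead density of about 11.34 g/cm³; the small discrepancy against the article's 67.62 figure comes from the exact density assumed:

    import math

    LEAD_DENSITY_G_CM3 = 11.34  # approximate density of pure lead
    POUND_G = 453.59237         # grams per avoirdupois pound

    def bore_diameter_cm(gauge):
        """Diameter of a lead sphere weighing 1/gauge pound (the bore diameter)."""
        volume_cm3 = (POUND_G / gauge) / LEAD_DENSITY_G_CM3
        return (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)

    def gauge_from_diameter_cm(d_cm):
        """Inverse: the gauge corresponding to a given bore diameter."""
        volume_cm3 = math.pi * d_cm ** 3 / 6.0
        return POUND_G / (LEAD_DENSITY_G_CM3 * volume_cm3)

    print(round(bore_diameter_cm(12) / 2.54, 3))           # ~0.73 in for 12 gauge
    print(round(gauge_from_diameter_cm(0.410 * 2.54), 2))  # ~67.6, close to 67.62 above

Running the forward formula reproduces the familiar nominal 12-gauge bore of about 0.729-0.730 in, and inverting it for a .410 in bore lands on roughly 67.6 gauge, matching the equivalence stated in the conversion guide.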
Gauge (firearms)
Mathematics
1,944
341,046
https://en.wikipedia.org/wiki/Green%20algae
The green algae (singular: green alga) are a group of chlorophyll-containing autotrophic eukaryotes consisting of the phylum Prasinodermophyta and its unnamed sister group that contains the Chlorophyta and Charophyta/Streptophyta. The land plants (Embryophytes) have emerged deep within the charophytes as a sister group of the Zygnematophyceae. Since the realization that the Embryophytes emerged within the green algae, some authors are starting to include them. The completed clade that includes both green algae and embryophytes is monophyletic and is referred to as the clade Viridiplantae and as the kingdom Plantae. The green algae include unicellular and colonial flagellates, most with two flagella per cell, as well as various colonial, coccoid (spherical), and filamentous forms, and macroscopic, multicellular seaweeds. There are about 22,000 species of green algae, many of which live most of their lives as single cells, while other species form coenobia (colonies), long filaments, or highly differentiated macroscopic seaweeds. A few other organisms rely on green algae to conduct photosynthesis for them. The chloroplasts in dinoflagellates of the genus Lepidodinium, euglenids and chlorarachniophytes were acquired from ingested endosymbiont green algae, and in the latter retain a nucleomorph (vestigial nucleus). Green algae are also found symbiotically in the ciliate Paramecium, in Hydra viridissima and in flatworms. Some species of green algae, particularly of the genera Trebouxia (class Trebouxiophyceae) and Trentepohlia (class Ulvophyceae), can be found in symbiotic associations with fungi to form lichens. In general the fungal species that partner in lichens cannot live on their own, while the algal species is often found living in nature without the fungus. Trentepohlia is a filamentous green alga that can live independently on humid soil, rocks or tree bark, or form the photosymbiont in lichens of the family Graphidaceae. The macroalga Prasiola calophylla (Trebouxiophyceae) is also terrestrial, and Prasiola crispa, which lives in the supralittoral zone, is terrestrial and can form large carpets on humid soil in the Antarctic, especially near bird colonies. Cellular structure Green algae have chloroplasts that contain chlorophyll a and b, giving them a bright green colour, as well as the accessory pigments beta carotene (red-orange) and xanthophylls (yellow), in stacked thylakoids. The cell walls of green algae usually contain cellulose, and they store carbohydrate in the form of starch. All green algae have mitochondria with flat cristae. When present, paired flagella are used to move the cell. They are anchored by a cross-shaped system of microtubules and fibrous strands. Flagella are present only in the motile male gametes of charophytes, bryophytes, pteridophytes, cycads and Ginkgo, but are absent from the gametes of Pinophyta and flowering plants. Members of the class Chlorophyceae undergo closed mitosis in the most common form of cell division among the green algae, which occurs via a phycoplast. By contrast, charophyte green algae and land plants (embryophytes) undergo open mitosis without centrioles. Instead, a 'raft' of microtubules, the phragmoplast, is formed from the mitotic spindle, and cell division involves the use of this phragmoplast in the production of a cell plate.
Origins Photosynthetic eukaryotes originated following a primary endosymbiotic event, in which a heterotrophic eukaryotic cell engulfed a photosynthetic cyanobacterium-like prokaryote that became stably integrated and eventually evolved into a membrane-bound organelle: the plastid. This primary endosymbiosis event gave rise to three autotrophic clades with primary plastids: the (green) plants (with chloroplasts), the red algae (with rhodoplasts), and the glaucophytes (with muroplasts). Evolution and classification Green algae are often classified with their embryophyte descendants in the green plant clade Viridiplantae (or Chlorobionta). Viridiplantae, together with red algae and glaucophyte algae, form the supergroup Primoplantae, also known as Archaeplastida or Plantae sensu lato. The ancestral green alga was a unicellular flagellate. The Viridiplantae diverged into two clades. The Chlorophyta include the early diverging prasinophyte lineages and the core Chlorophyta, which contain the majority of described species of green algae. The Streptophyta include the charophytes and the land plants. Below is a consensus reconstruction of green algal relationships, mainly based on molecular data. The basal placement of the Mesostigmatophyceae, Chlorokybophyceae and Spirotaenia among the streptophytes is conventional rather than firmly resolved. The algae of this paraphyletic group "Charophyta" were previously included in Chlorophyta, so green algae and Chlorophyta in this definition were synonyms. As the green algae clades have become further resolved, the embryophytes, which are a deep charophyte branch, are included in "algae", "green algae" and "charophytes", or these terms are replaced by cladistic terminology such as Archaeplastida, Plantae/Viridiplantae, and streptophytes, respectively. Reproduction Green algae are a group of photosynthetic, eukaryotic organisms that include species with haplobiontic and diplobiontic life cycles. The diplobiontic species, such as Ulva, follow a reproductive cycle called alternation of generations, in which two multicellular forms, haploid and diploid, alternate; these may or may not be isomorphic (having the same morphology). In haplobiontic species only the haploid generation, the gametophyte, is multicellular. The fertilized egg cell, the diploid zygote, undergoes meiosis, giving rise to haploid cells which will become new gametophytes. The diplobiontic forms, which evolved from haplobiontic ancestors, have both a multicellular haploid generation and a multicellular diploid generation. Here the zygote divides repeatedly by mitosis and grows into a multicellular diploid sporophyte. The sporophyte produces haploid spores by meiosis that germinate to produce a multicellular gametophyte. All land plants have a diplobiontic common ancestor, and diplobiontic forms have also evolved independently within Ulvophyceae more than once (as has also occurred in the red and brown algae). Diplobiontic green algae include isomorphic and heteromorphic forms. In isomorphic algae, the morphology is identical in the haploid and diploid generations. In heteromorphic algae, the morphology and size differ between the gametophyte and sporophyte. Reproduction varies from fusion of identical cells (isogamy) to fertilization of a large non-motile cell by a smaller motile one (oogamy). However, these traits show some variation, most notably among the basal green algae called prasinophytes. Haploid algal cells (containing only one copy of their DNA) can fuse with other haploid cells to form diploid zygotes.
When filamentous algae do this, they form bridges between cells, and leave empty cell walls behind that can be easily distinguished under the light microscope. This process is called conjugation and occurs, for example, in Spirogyra. Sex pheromone Sex pheromone production is likely a common feature of green algae, although it has been studied in detail in only a few model organisms. Volvox is a genus of chlorophytes. Different species form spherical colonies of up to 50,000 cells. One well-studied species, Volvox carteri (2,000–6,000 cells), occupies temporary pools of water that tend to dry out in the heat of late summer. As their environment dries out, asexual V. carteri quickly die. However, they are able to escape death by switching, shortly before drying is complete, to the sexual phase of their life cycle, which leads to the production of dormant desiccation-resistant zygotes. Sexual development is initiated by a glycoprotein pheromone (Hallmann et al., 1998). This pheromone is one of the most potent known biological effector molecules: it can trigger sexual development at concentrations as low as 10⁻¹⁶ M. Kirk and Kirk showed that sex-inducing pheromone production can be triggered experimentally in somatic cells by heat shock. Thus heat shock may be a condition that ordinarily triggers the sex-inducing pheromone in nature. The Closterium peracerosum-strigosum-littorale (C. psl) complex is a unicellular, isogamous charophycean alga group that is the closest unicellular relative to land plants. Heterothallic strains of different mating types can conjugate to form zygospores. Sex pheromones termed protoplast-release inducing proteins (glycopolypeptides), produced by mating-type (−) and mating-type (+) cells, facilitate this process. Physiology The green algae, including the characean algae, have served as model experimental organisms to understand the mechanisms of the ionic and water permeability of membranes, osmoregulation, turgor regulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials. References External links Green algae and cyanobacteria in lichens Green algae (UC Berkeley) Monterey Bay green algae Green algae Paraphyletic groups
Green algae
Biology
2,260
49,157,905
https://en.wikipedia.org/wiki/Sarcodon%20praestans
Sarcodon praestans is a species of tooth fungus in the family Bankeraceae. Found in Papua New Guinea, it was described as new to science in 1974 by Dutch mycologist Rudolph Arnold Maas Geesteranus. References External links Fungi described in 1974 Fungi of New Guinea praestans Fungus species
Sarcodon praestans
Biology
66
16,244,315
https://en.wikipedia.org/wiki/Polar%20Class
Polar Class (PC) refers to the ice class assigned to a ship by a classification society based on the Unified Requirements for Polar Class Ships developed by the International Association of Classification Societies (IACS). Seven Polar Classes are defined in the rules, ranging from PC 1 for year-round operation in all polar waters to PC 7 for summer and autumn operation in thin first-year ice. The IACS Polar Class rules should not be confused with the International Code for Ships Operating in Polar Waters (Polar Code) of the International Maritime Organization (IMO). Background The development of the Polar Class rules began in the 1990s with an international effort to harmonize the requirements for marine operations in polar waters in order to protect life, property and the environment. The guidelines developed by the International Maritime Organization (IMO), which were later incorporated in the Polar Code, made reference to compliance with the Unified Requirements for Polar Ships developed by the International Association of Classification Societies (IACS). In May 1996, an "Ad-Hoc Group to establish Unified Requirements for Polar Ships (AHG/PSR)" was established, with one working group concentrating on the structural requirements and another working on machinery-related issues. The first IACS Polar Class rules were published in 2007. Prior to the development of the unified requirements, each classification society had its own set of ice class rules, ranging from Baltic ice classes intended for operation in first-year ice to higher vessel categories, including icebreakers, intended for operations in polar waters. When developing the upper and lower boundaries for the Polar Classes, it was agreed that the highest Polar Class vessels (PC 1) should be capable of operating safely anywhere in the Arctic or Antarctic waters at any time of the year, while the lower boundary was set to existing tonnage operating during the summer season, most of which followed the Baltic ice classes with some upgrades and additions. The lowest Polar Class (PC 7) was thus set at a level similar to the Finnish-Swedish ice class 1A. The definition of operational conditions for each Polar Class was intentionally left vague due to the wide variety of ship operations carried out in polar waters. Definition Polar Class notations The IACS has established seven different Polar Class notations, ranging from PC 1 (highest) to PC 7 (lowest), with each level corresponding to the operational capability and strength of the vessel. The descriptions of the ice conditions in which ships of each Polar Class are intended to operate are based on the World Meteorological Organization (WMO) Sea Ice Nomenclature. These definitions are intended to guide owners, designers and administrations in selecting the appropriate Polar Class to match the intended voyage or service of the vessel. Ships with sufficient power and strength to undertake "aggressive operations in ice-covered waters", such as escort and ice management operations, can be assigned the additional notation "Icebreaker". The two lowest Polar Classes (PC 6 and PC 7) are roughly equivalent to the two highest Finnish-Swedish ice classes (1A Super and 1A, respectively). However, unlike the Baltic ice classes, which are intended for operation only in first-year sea ice, even the lowest Polar Classes consider the possibility of encountering multi-year ice ("old ice inclusions").
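As an indicative illustration of how the notations pair a season of operation with an ice condition, the sketch below encodes the class descriptions as a small Python lookup table. The one-line descriptions are paraphrased from commonly cited summaries of the IACS unified requirements and are assumptions on our part; IACS UR I1 contains the authoritative wording.

```python
# One-line Polar Class summaries (paraphrased; see IACS UR I1 for the
# authoritative text). Each entry: class number -> (season, ice condition).
POLAR_CLASSES = {
    1: ("year-round", "all polar waters"),
    2: ("year-round", "moderate multi-year ice conditions"),
    3: ("year-round", "second-year ice, possibly with multi-year ice inclusions"),
    4: ("year-round", "thick first-year ice, possibly with old ice inclusions"),
    5: ("year-round", "medium first-year ice, possibly with old ice inclusions"),
    6: ("summer/autumn", "medium first-year ice, possibly with old ice inclusions"),
    7: ("summer/autumn", "thin first-year ice, possibly with old ice inclusions"),
}

def describe(pc: int) -> str:
    """Return a one-line summary for a Polar Class notation."""
    season, ice = POLAR_CLASSES[pc]
    return f"PC {pc}: {season} operation in {ice}"

for pc in sorted(POLAR_CLASSES):
    print(describe(pc))
```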
Requirements In the Polar Class rules, the hull of the vessel is divided longitudinally into four regions: "bow", "bow intermediate", "midbody" and "stern". All longitudinal regions except the bow are further divided vertically into "bottom", "lower" and "icebelt" regions. For each region, a design ice load is calculated based on the dimensions, hull geometry, and ice class of the vessel. This ice load is then used to determine the scantlings and steel grades of structural elements such as shell plating and frames in each location. The design scenario used to determine the ice loads is a glancing collision with a floating ice floe. In addition to structural details, the Polar Class rules contain requirements for machinery systems such as the main propulsion, steering gear, and systems essential for the safety of the crew and survivability of the vessel. For example, propeller-ice interaction should be taken into account in the propeller design, cooling systems and sea water inlets should be designed to work also in ice-covered waters, and the ballast tanks should be provided with effective means of preventing freezing. Although the rules generally require the ships to have a suitable hull form and sufficient propulsion power to operate independently and at continuous speed in ice conditions corresponding to their Polar Class, the ice-going capability requirements are not clearly defined in terms of speed or ice thickness. In practice, this means that the Polar Class of a vessel may not reflect its actual icebreaking capability. Polar Class ships The IACS Polar Class rules apply to ships contracted for construction on or after 1 July 2007. This means that while vessels built prior to this date may have an equivalent or even higher level of ice strengthening, they are not officially assigned a Polar Class and may not in fact fulfill all of the unified requirements. In addition, Russian ships and icebreakers in particular are often assigned ice classes only according to the requirements of the Russian Maritime Register of Shipping, which maintains its own ice class rules parallel to the IACS Polar Class rules. Although numerous ships have been built to the two least hardened Polar Classes, PC6 and PC7, only a small number of ships have been assigned ice class PC5 or higher. Polar Class 5 A number of research vessels intended for scientific missions in the polar regions have been built to PC5 rating: the South African S. A. Agulhas II in 2012, the American Sikuliaq in 2014, and the British RRS Sir David Attenborough in 2020. In addition, the PC5 Antarctic vessel Almirante Viel is under construction for the Chilean Navy. In 2012, the Royal Canadian Navy awarded a shipbuilding contract for the construction of six to eight Arctic Offshore Patrol Ships (AOPS) rated at PC5. HMCS Harry DeWolf and HMCS Margaret Brooke have entered service, HMCS Max Bernays is undergoing post-acceptance trials, and HMCS William Hall, HMCS Frédérick Rolette and HMCS Robert Hampton Gray are under construction. Two additional ships have been ordered for the Canadian Coast Guard. Four cruise ships have been built with PC5 rating: National Geographic Endurance (delivered in 2020) and National Geographic Resolution (2021) for Lindblad Expeditions, and SH Minerva (2021) and SH Vega (2022) for Swan Hellenic. Polar Class 4 The 2012-built drillship Stena IceMAX has a hull strengthened according to PC4 requirements.
However, the long and wide vessel does not feature an icebreaking hull and is designed to operate primarily in pre-broken ("managed") ice. The Canadian shipping company Fednav operates two PC4 rated bulk carriers, 2014-built Nunavik and 2021-built Arvik I. The 28,000-tonne vessels are primarily used to transport nickel ore from Raglan Mine in the Canadian Arctic. In 2015, the hull of the Finnish 1986-built icebreaker Otso was reinforced with additional steel to PC4 level to allow the vessel to support seismic surveys in the Arctic during the summer months. The Finnish LNG-powered icebreaker Polaris, built in 2016, is rated PC4 with an additional Lloyd's Register class notation "Icebreaker(+)". The latter part of the notation refers to additional structural strengthening based on analysis of the vessel's operational profile and potential ice loading scenarios. The interim icebreakers CCGS Captain Molly Kool, CCGS Jean Goodwill, and CCGS Vincent Massey, built in 2000–01 and acquired by the Canadian Coast Guard in 2018, will be upgraded to PC4 rating as part of the vessels' conversion to Canadian service. The new PC4 polar logistics vessel of the Argentine Navy, intended to complement the country's existing icebreaker ARA Almirante Irízar in Antarctica, is currently in the design stage. The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) is in the process of acquiring a new PC4 rated icebreaker for researching the Arctic region. The Swedish Maritime Administration is in the process of acquiring 2–3 new icebreakers rated PC4 Icebreaker(+). The first icebreaker is expected to enter service in 2027. The new Canadian Coast Guard Multi-Purpose Vessels (MPV) will be rated PC4 Icebreaker(+). Sixteen vessels will be built by Seaspan in the 2020s and 2030s, and the first vessel is expected to enter service in 2028. Polar Class 3 The first PC3 vessels were two heavy load carriers, Audax and Pugnax, built for the Netherlands-based ZPMC-Red Box Energy Services in 2016. The long and wide vessels, capable of breaking up to ice independently, were built for year-round transportation of LNG liquefaction plant modules to Sabetta. Although usually referred to by their Russian Maritime Register of Shipping ice class Arc7, the fifteen first-generation Yamalmax LNG carriers built in 2016–2019, as well as the arctic condensate tankers Boris Sokolov (built in 2018) and Yuriy Kuchiev (2019) serving the Yamal LNG project, also have PC3 rating from Bureau Veritas. In April 2015, it was reported that Edison Chouest would build two PC3 anchor handling tug supply vessels (AHTS) for Alaskan operations. However, the construction of the vessels, due for delivery by the end of 2016, was later cancelled following Shell Oil's decision to halt Arctic oil exploration. Three polar research vessels have been built with PC3 rating: Kronprins Haakon for the Norwegian Polar Institute in 2018, Xue Long 2 for the Polar Research Institute of China in 2019, and Nuyina for the Australian Antarctic Division in 2021. Kronprins Haakon also has the additional notation "Icebreaker", while Nuyina's notation includes Lloyd's Register's "Icebreaker(+)" notation. The Finnish multipurpose icebreakers Fennica and Nordica, built in the early 1990s, were assigned PC3 rating as part of the vessels' Polar Code certification in 2019. There are no PC3 rated vessels under construction. Polar Class 2 The only PC2 rated vessel in service is the expedition cruise ship operated by the French company Compagnie du Ponant.
The 270-passenger vessel, capable of breaking up to thick multi-year ice and taking passengers to the North Pole, was delivered in 2021. The United States Coast Guard has ordered two out of three planned PC2 rated heavy polar icebreakers referred to as Polar Security Cutters. Construction of the first vessel has been delayed by several years, and it is now not expected to be delivered to the U.S. Coast Guard until at least 2028. While the mid-1970s icebreakers that these Polar Security Cutters are intended to replace are sometimes referred to as Polar-class icebreakers, they do not carry a PC rating. The future Canadian Coast Guard polar icebreakers are designed to PC2 rating with an additional notation "Icebreaker(+)". While a single vessel was initially scheduled for delivery in 2017, the National Shipbuilding Strategy has since been revised to include two such icebreakers, the first of which is planned to enter service by December 2029. In December 2024, Germany signed an order for a replacement vessel for the 1982-built research icebreaker Polarstern. While the old Polarstern was built to Germanischer Lloyd ice class ARC3, the replacement Polarstern 2 will be a PC2 ship. Polar Class 1 No ships have been built, are under construction, or are planned to PC1, the highest ice class specified by the IACS. Notes References External links Unified Requirements for Polar Class ships, International Association of Classification Societies (IACS) Shipbuilding Icebreakers Sea ice
Polar Class
Physics,Engineering
2,424
4,629,853
https://en.wikipedia.org/wiki/Calix%2C%20Inc.
Calix, Inc. is a telecommunications company that specializes in providing software platforms, systems, and services to support the delivery of broadband services. The company was founded in 1999 and is headquartered in San Jose, California. Calix provides cloud, software platforms, systems and services to communications service providers. Calix maintains facilities in Petaluma, CA; Minneapolis, MN; San Jose, CA; and Richardson, TX in the US, as well as facilities in Nanjing, China, and Bangalore, India. Acquisitions history In 2006, Calix purchased Optical Solutions, Inc., based in Minneapolis, MN. In 2010, Calix announced the acquisition of one-time rival Occam Networks, Inc., based in Santa Barbara, California. The acquisition was completed in February 2011. In November 2012, Calix completed the acquisition of Ericsson’s fiber access assets. References External links Companies listed on the New York Stock Exchange Companies based in Sonoma County, California Telecommunications companies established in 1999 Telecommunications companies of the United States Networking companies of the United States Networking hardware companies 1999 establishments in California Computer companies of the United States Computer hardware companies
Calix, Inc.
Technology
266
50,777,898
https://en.wikipedia.org/wiki/Belgrade%20IT%20sector
The IT sector of Belgrade is the concentration of information technology centers and service providers in the Serbian capital of Belgrade, comprising 6,924 companies. The IT sector in Serbia is projected to become the largest sector of the Serbian economy. Microsoft, Huawei, and Kaspersky have opened development centers in Belgrade. Microsoft Development Center Serbia was, at the time of its establishment, the fifth such center in the world. Other global IT companies that have chosen Belgrade for their regional or European centers include Asus, Intel, Dell, Huawei, and NCR. These major investments generated over €678.3 million in Serbia's exports in 2015. Startup community Nordeus, a local video game startup, is one of Europe's fastest-growing gaming companies. In five years of operation, Nordeus has grown to over 150 employees and €64 million in yearly sales. Another local startup, FishingBooker, was founded in 2013 and now employs over 90 people. FishingBooker has been described as "the world’s largest online travel company that enables [users] to find and book fishing trips." Like Nordeus, FishingBooker is a bootstrapped startup. In the first quarter of 2016, more than US$65 million was raised by Serbian startups, including US$45 million for Seven Bridges (a bioinformatics firm) and US$14 million for Vast (a data analysis firm). Also in 2016, the Belgrade-based website AskGamblers, which generated over €810,000 in revenue and €620,000 in profit, was sold to Catena Media for €15 million. The startup community is supported by a non-profit organization called Startit, which acts as an incubator for new companies. Startit raised US$108,000 from its Kickstarter campaign in 2015, allowing it to expand its Belgrade center, build a second center in Inđija (completed in February 2016), and expand further to other cities with a strong IT industry: Novi Sad, Zrenjanin, Vršac, Subotica, Šabac. Other developments include an agricultural startup that uses drones for land surveying, TeleSkin, an app that can identify and track skin cancer, and another successful Kickstarter for Hexiwear, a customizable smartwatch for developers. References Serbia Economy of Belgrade
Belgrade IT sector
Technology
481
68,131,375
https://en.wikipedia.org/wiki/Maid%20abuse
Maid abuse is the maltreatment or neglect of a person hired as a domestic worker, especially by the employer or by a household member of the employer. It is any act or failure to act that results in harm to that employee. It takes on numerous forms, including physical, sexual, emotional, and economic abuse. The majority of perpetrators tend to be female employers and their children. These acts may be committed for a variety of reasons, including to instil fear in the victim, to discipline them, or to make them act in a way desired by the abuser. The United States Human Trafficking Hotline describes maid abuse as a form of human trafficking, involving "force, fraud, or coercion to maintain control over the worker and to cause the worker to believe that he or she has no other choice but to continue with the work". Although it can occur anywhere, it is most commonly reported among domestic workers in Singapore. Prevalence Maid abuse, though a global phenomenon, is especially prevalent in Singapore. According to a study by Research Across Borders, six out of ten domestic workers in Singapore experience some form of abuse at work. One in four reported physical violence. Additionally, one in seven Singaporeans have witnessed maid abuse. Foreign domestic workers, who have come to the country seeking employment, are at high risk of abuse. As maids are the only migrant workers not protected under Singapore's Employment Act, many end up in abusive situations. This is amplified by the fact that foreign domestic worker contracts in Singapore lack live-out options; foreign maids reside in the same residence as their employers. Mistreatment of foreign domestic workers in Singapore is not uncommon and is widely detailed. They are subject to physical abuse, invasion of privacy, and sexual assault (including rape). Legislation Singapore In Singapore, it is against the law to abuse a foreign domestic worker. The Ministry of Manpower (MOM) says that perpetrators face severe penalties; if convicted, a perpetrator may face prison time, caning, or a fine of as much as $20,000. The perpetrator will also be banned from further employment of foreign domestic workers. Malaysia In Malaysia, abused foreign domestic workers can obtain visas so that they may stay in the country to pursue legal complaints; the same is true in the United States. Notable cases On 2 December 2001, 19-year-old Indonesian maid Muawanatul Chasanah was found beaten to death in her house of employment in Chai Chee, Singapore. Her employer, Ng Hua Chye, was arrested and charged with her murder. It was revealed in Ng's two-day trial that Ng had repeatedly punched, kicked and whipped the maid, and had even used burning cigarette butts and/or boiling hot water to burn her over her supposedly poor working performance and her stealing the food of Ng's infant daughter. He was sentenced to 18 years and six months in prison, along with 12 strokes of the cane. On 28 May 2002, Indonesian maid Sundarti Supriyanto killed her employer Angie Ng and Ng's daughter Crystal Poh, and set fire to Ng's Bukit Merah office in Singapore. Sundarti recounted that she was severely abused by Ng for minor mistakes, and even starved for days by Ng. She had endured much humiliation before she finally lost control and fatally stabbed Ng (and her daughter) in a frenzied attack.
The High Court of Singapore accepted that she had indeed suffered maid abuse and was not in her right mind when she was gravely provoked into committing the crime and lost control; it therefore acquitted Sundarti of murder and instead sentenced her to life imprisonment for culpable homicide not amounting to murder. On 26 July 2016, in Singapore, Myanmar maid Piang Ngaih Don was killed by her employer, 41-year-old Gaiyathiri Murugayan. Murugayan was sentenced to 30 years in prison on 22 June 2021. She had earlier pleaded guilty to 28 charges out of a total of 115 relating to the murder and abuse of the maid, who had worked for her family for a few months. The murder charge was reduced to the next highest charge of culpable homicide, as Gaiyathiri was suffering from a mental disorder at the time she killed Piang, meaning she would not be sentenced to death (which was the mandatory penalty for murder in Singapore). The prosecution sought a life sentence for the convicted maid killer, and while judge See Kee Oon did not hand down a life term, he agreed that Gaiyathiri's conduct was an abhorrence and an outrage to human and public conscience. Gaiyathiri's mother was given a 17-year jail term for maid abuse, while Gaiyathiri's husband, who also abused the maid, has been on trial since 2023. On 25 June 2018, at a flat in Singapore’s Choa Chu Kang, 17-year-old Zin Mar Nwe, a foreign maid from Myanmar, used a knife to stab her employer’s mother-in-law 26 times, resulting in the death of the 70-year-old Indian citizen. Zin Mar Nwe told police and the court that the victim had hit her and reprimanded her on several occasions, and that the threat of being sent back to her home country had triggered her to stab the elderly woman to death. Although Zin Mar Nwe was nonetheless found guilty of murder at the end of her trial on 18 May 2023, some of her claims of being abused by the victim were accepted by the trial court. She was sentenced to life imprisonment in July 2023. See also Domestic worker References Abuse Crimes Violence against women
Maid abuse
Biology
1,166
17,795,435
https://en.wikipedia.org/wiki/Abbott-Firestone%20curve
The Abbott-Firestone curve or bearing area curve (BAC) describes the surface texture of an object. The curve can be found from a profile trace by drawing lines parallel to the datum and measuring the fraction of the line which lies within the profile. Mathematically, it is the cumulative distribution function of the surface profile's height and can be calculated by integrating the height probability density function. The Abbott-Firestone curve was first described by Ernest James Abbott and Floyd Firestone in 1933. It is useful for understanding the properties of sealing and bearing surfaces. It is commonly used in the engineering and manufacturing of piston cylinder bores of internal combustion engines. The shape of the curve is distilled into several of the surface roughness parameters, especially the Rk family of parameters. References Engineering mechanics Tribology
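Because the curve is simply the cumulative distribution of profile heights, it can be computed directly from a sampled trace. A minimal sketch in Python, using a synthetic (hypothetical) profile rather than measured data:

```python
import numpy as np

def bearing_area_curve(heights: np.ndarray, levels: int = 100):
    """Bearing (material) ratio versus cutting depth for a profile trace.

    At each height level c, the bearing ratio is the fraction of the
    profile lying at or above c, i.e. the complementary cumulative
    distribution of the sampled heights.
    """
    c = np.linspace(heights.max(), heights.min(), levels)
    ratio = (heights[None, :] >= c[:, None]).mean(axis=1)
    return c, ratio

# Hypothetical rough surface: sinusoidal waviness plus random roughness.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)
z = 0.5 * np.sin(20 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

depth, ratio = bearing_area_curve(z)
print(ratio[0], ratio[-1])   # near 0 at the highest peak, 1.0 at the deepest valley
```

Plotting the height levels against the bearing ratio (conventionally 0-100% on the horizontal axis) gives the familiar S-shaped curve from which the Rk family of parameters is derived.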
Abbott-Firestone curve
Chemistry,Materials_science,Engineering
165
51,119,650
https://en.wikipedia.org/wiki/%28523692%29%202014%20EZ51
(provisional designation ) is a large trans-Neptunian object in the scattered disc, approximately in diameter. It was discovered on 18 April 2010, by the Pan-STARRS 1 survey at Haleakala Observatory, Hawaii, United States. Orbit and classification orbits the Sun at a distance of 40.4–64.4 AU once every 379 years and 3 months (138,537 days; semi-major axis of 52.4 AU). Its orbit has an eccentricity of 0.23 and an inclination of with respect to the ecliptic. The body's observation arc begins with its official discovery observation at Haleakala in April 2010. Numbering and naming This minor planet was numbered by the Minor Planet Center on 25 September 2018 (). , it has not been named. Physical characteristics According to Michael Brown and the Johnston's archive, measures 626 and 770 kilometers in diameter, based on an absolute magnitude of 4.2 and 3.8, with an assumed albedo of 0.10 and 0.09, respectively. The MPC/JPL databases give an absolute magnitude of 3.92. On 25 February 2019, a stellar occultation by was observed in New Zealand. From these observations, a lower limit of 575 km was placed on its mean diameter. In 2023, a study on photometric observations of trans-Neptunian objects by the Kepler space telescope found that rotates with a period of 3.2 hours and exhibits a light curve amplitude of magnitudes, which indicates its shape must be elongated. References External links Discovery Circumstances: Numbered Minor Planets (520001)-(525000) – Minor Planet Center Scattered disc and detached objects Discoveries by Pan-STARRS Possible dwarf planets Objects observed by stellar occultation 20100418
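The quoted period and diameter estimates can be reproduced from two standard small-body relations: Kepler's third law, P(years) = a(AU)^{3/2}, and the conventional absolute-magnitude-to-diameter formula D = (1329 km / √p) · 10^{-H/5}, where p is the assumed geometric albedo. A quick check in Python (the 1329 km constant is the standard asteroid value; the small remaining differences from the quoted diameters presumably reflect slightly different assumptions in the cited sources):

```python
def period_years(a_au: float) -> float:
    """Orbital period from the semi-major axis via Kepler's third law."""
    return a_au ** 1.5

def diameter_km(h_mag: float, albedo: float) -> float:
    """Conventional asteroid diameter estimate from absolute magnitude H."""
    return (1329.0 / albedo ** 0.5) * 10.0 ** (-h_mag / 5.0)

print(round(period_years(52.4), 1))    # ~379.3 yr, i.e. 379 years and 3 months
print(round(diameter_km(3.8, 0.09)))   # ~770 km, matching Johnston's figure
print(round(diameter_km(4.2, 0.10)))   # ~607 km, close to Brown's quoted 626 km
```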
(523692) 2014 EZ51
Physics,Astronomy
368
3,147,924
https://en.wikipedia.org/wiki/Photoexcitation
Photoexcitation is the production of an excited state of a quantum system by photon absorption. The excited state originates from the interaction between a photon and the quantum system. A photon's energy is determined by its wavelength: light of longer wavelength consists of photons carrying less energy, while light of shorter wavelength consists of photons carrying more energy. When a photon interacts with a quantum system, it is therefore important to know which wavelength one is dealing with, since a shorter wavelength will transfer more energy to the quantum system than a longer one. On the atomic and molecular scale, photoexcitation is the photoelectrochemical process of electron excitation by photon absorption, when the energy of the photon is too low to cause photoionization. The absorption of the photon takes place in accordance with Planck's quantum theory. Photoexcitation plays a role in photoisomerization and is exploited in different techniques: Dye-sensitized solar cells make use of photoexcitation, exploiting it in inexpensive, mass-produced solar cells. These solar cells rely on a large surface area in order to catch and absorb as many high-energy photons as possible. Shorter wavelengths are more efficient for the energy conversion than longer wavelengths, since shorter wavelengths carry photons that are richer in energy; light dominated by longer wavelengths therefore results in slower and less efficient energy conversion in dye-sensitized solar cells. Photochemistry Luminescence Optically pumped lasers use photoexcitation to provide the excited atoms with the enormous direct-gap gain needed for lasing. The carrier density needed for population inversion in germanium (Ge), a material often used in lasers, must reach 10²⁰ cm⁻³, and this is achieved via photoexcitation. The photoexcitation causes the electrons in atoms to go to an excited state. The moment the number of atoms in the excited state is higher than the number in the ground state, population inversion occurs. This inversion, like the one produced in germanium, makes it possible for materials to act as lasers. Photochromic applications. Photochromism is a transformation between two forms of a molecule caused by the absorption of a photon. For example, the BIPS molecule (2H-1-benzopyran-2,2-indolines) can convert from trans to cis and back by absorbing a photon. The different forms are associated with different absorption bands. In the cis-form of BIPS, the transient absorption band has a value of 21,050 cm⁻¹, in contrast to the band of the trans-form, which has a value of 16,950 cm⁻¹. The results were optically visible: BIPS in gels turned from a colorless appearance to a brown or pink color after repeatedly being exposed to a high-energy UV pump beam. High-energy photons cause a transformation in the BIPS molecule, making the molecule change its structure. On the nuclear scale, photoexcitation includes the production of nucleon and delta baryon resonances in nuclei. References Photochemistry Physical chemistry Time-resolved spectroscopy
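The wavelength dependence described above is quantified by the Planck relation E = hc/λ: a photon's energy is inversely proportional to its wavelength. A short numerical illustration (the wavelengths are chosen arbitrarily; the constants are standard SI values):

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light in vacuum, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(round(photon_energy_ev(700.0), 2))  # red light    -> ~1.77 eV per photon
print(round(photon_energy_ev(400.0), 2))  # violet light -> ~3.10 eV per photon
```

Halving the wavelength doubles the energy each photon delivers, which is why shorter-wavelength light transfers more energy to the quantum system.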
Photoexcitation
Physics,Chemistry
650
1,618,887
https://en.wikipedia.org/wiki/Square%20yard
The square yard (Northern India: gaj, Pakistan: gaz) is an imperial unit and U.S. customary unit of area. It is in widespread use in most of the English-speaking world, particularly the United States, United Kingdom, Canada, Pakistan and India. It is defined as the area of a square with sides of one yard (three feet, thirty-six inches, 0.9144 metres) in length. Symbols There is no universally agreed symbol but the following are used: square yards, square yard, square yds, square yd sq yards, sq yard, sq yds, sq yd, sq.yd. yards/-2, yard/-2, yds/-2, yd/-2 yards^2, yard^2, yds^2, yd^2 yards², yard², yds², yd² Conversions One square yard is equivalent to: 1,296 square inches 9 square feet ≈0.00020661157 acres ≈0.000000322830579 square miles 836 127.36 square millimetres 8 361.2736 square centimetres 0.83612736 square metres 0.000083612736 hectares 0.00000083612736 square kilometres 1.00969 gaj See also 1 E-1 m² for a comparison with other areas Area (geometry) Conversion of units Cubic yard Metrication in Canada Orders of magnitude (area) Square (algebra), Square root References Units of area Imperial units Customary units of measurement in the United States
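Every conversion in the list above follows from the exact definition of the yard as 0.9144 m (fixed by international agreement in 1959). A trivial check in Python (the helper name is ours):

```python
YARD_IN_METRES = 0.9144  # exact, by definition

def sq_yd_to_m2(area_sq_yd: float) -> float:
    """Convert an area in square yards to square metres."""
    return area_sq_yd * YARD_IN_METRES ** 2

print(sq_yd_to_m2(1.0))        # ~0.83612736 m^2
print(sq_yd_to_m2(1.0) * 1e6)  # ~836127.36 mm^2
print(1.0 / 4840.0)            # acres per square yard, ~0.000206611
```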
Square yard
Mathematics
326
71,916,621
https://en.wikipedia.org/wiki/Cell%20biomechanics
Cell biomechanics is a branch of biomechanics that involves single molecules, molecular interactions, or cells as the system of interest. Cells generate and maintain mechanical forces within their environment as a part of their physiology. Cell biomechanics deals with how mRNA, protein production, and gene expression are affected by that environment, and with the mechanical properties of isolated molecules or the interaction of proteins that make up molecular motors. It is known that minor alterations in the mechanical properties of cells can be an indicator of an infected cell. By studying these mechanical properties, greater insight can be gained into disease. Thus, the goal of understanding cell biomechanics is to combine theoretical, experimental, and computational approaches to construct a realistic description of cell mechanical behaviors and provide new insights into the role of mechanics in disease. History In the late seventeenth century, English polymath Robert Hooke and Dutch scientist Antonie van Leeuwenhoek observed the ciliate Vorticella, with its extreme fluid and cellular motion, using simple optical microscopes. On Christmas Day 1702, van Leeuwenhoek described his observations in a letter: “In structure these little animals were fashioned like a bell, and at the round opening they made such a stir, that the particles in the water thereabout were set in motion thereby…which sight I found mightily diverting”. Brownian motion of particles and organelles within living cells was subsequently described, as were theories for measuring viscosity, but there were not enough accessible technical tools to perform accurate experiments at the time. Thus, mechanical properties within cells were supported only qualitatively, by observation. Even with these discoveries, the role of mechanical forces within biology was not always readily accepted. In 1850, English physician William Benjamin Carpenter wrote that the idea that “many of the actions taking place in the living body are conformable to the laws of mechanics” had “been hastily assumed as justifying the conclusion that all its actions are mechanical.” Similarly, in 1917, Scottish mathematical biologist D'Arcy Wentworth Thompson noted in his book On Growth and Form that “…though they resemble known physical phenomena, their nature is still the subject of much dubiety and discussion, and neither the forms produced nor the forces at work can yet be satisfactorily and simply explained”. During the industrialization era of the nineteenth century, the overall understanding of cell and tissue mechanics finally developed alongside the mechanical and structural testing and theory (indentation, beam bending, the Hertz model) of engines, boats, and bridges. At the end of the nineteenth century, the mechanical properties of living cells could at last be experimentally analyzed and examined using techniques borrowed from large-scale engineering mechanics. As of 2008, nanoscale testing and modeling remained fundamentally based on these nineteenth-century practices. Research methods Various studies have been conducted to establish relationships between the structure, mechanical responses, and function of biological tissues (blood vessels, heart, cardiac muscle, lung). To conduct this research, several tools and techniques have been developed that are sensitive enough to detect such small forces. At this time, these techniques are only applicable in a controlled environment (test tube, petri dish).
All of these methods ultimately give insight into the mechanical properties of cells. These techniques can generally be split into two groups: active methods and passive methods. Active methods apply forces to cells in some manner in order to deform the cell. Passive methods sense mechanical forces without applying any external force to the cell. Active methods Atomic force microscopy Atomic force microscopy measures the interaction between a tip attached to a flexible cantilever and molecules on a cell surface. The sharp tip can be used to probe single molecular events and image live cells. The relative deformation of the cell and the tip can be used to estimate how much force was applied and how stiff the cell is (a fitting sketch based on the Hertz contact model follows this section). Since it is a high-force measurement technique, large-scale deformations and reorganizations can be observed and mapped. Drawbacks of this technique include, but are not limited to, overestimation of the force-versus-indentation curve at zero applied force, potential cell damage, and the variety of tip shapes, which determines the nature of the force-deformation curve. Magnetic tweezers and magnetic twisting cytometry Magnetic twisting cytometry is mainly used to determine the physical properties of biological tissues. It can also be used for micromanipulating cells. Beads are exposed to magnetizing coils, giving them a magnetic dipole moment. A weaker directional magnetic field is then applied to twist the beads through a specific angle or to move the beads linearly. Disadvantages of this system include the difficulty of controlling the region of the cell to which the beads bind, the lack of any guarantee of complete binding to the cell surface, and the loss of magnetization over time. A variation of this technique, named optical tweezers, applies linear rather than magnetic forces to cells. A laser beam is used alongside dielectric beads of high refractive index to generate optical forces. Drawbacks of this method include potential photo-induced damage and the limited amount of force that can be generated. Micropipette aspiration Micropipette aspiration is primarily used for measuring absolute values of mechanical properties. On a cellular scale, it can map, in space and time, the surface tension of interfaces within a tissue. On a tissue scale, it can measure mechanical properties such as viscoelasticity and tissue surface tension. Like AFM, it is also a high-force measurement technique, with which large-scale deformations and reorganizations can be observed and mapped. A micropipette is placed on the surface of the cell and gently suctions the cell to deform it. The geometry of the deformation, along with the applied pressure, allows researchers to calculate the applied force and the mechanical properties of the cell. A dual micropipette assay is also able to quantify the strength of cadherin-dependent cell-cell adhesion. Stretching devices Stretching devices were developed to study the effects of tensile stress on cells and tissues. Cells are incubated on flexible elastic silicone-sheet membranes with modifiable surfaces. They are then stretched in either a uniaxial, biaxial, or pressure-controlled manner. The stretching can also occur at different frequencies. The main downside to stretching devices is that they leave behind wrinkling patterns, distorting the actual forces that were applied to the sheets. They are also large in size and generate both heat and shock, hindering real-time imaging of cells.
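A common way to turn the atomic force microscopy measurements described above into a stiffness value is to fit a contact model to the force-versus-indentation curve. The sketch below uses the classic Hertz model for a spherical tip, F = (4/3)·E/(1−ν²)·√R·δ^{3/2}, on synthetic data; every parameter value is an illustrative assumption, and a real analysis would also have to handle the tip-shape and contact-point issues noted above:

```python
import numpy as np

def hertz_force(delta_m, youngs_pa, radius_m, poisson=0.5):
    """Hertz contact force for a spherical indenter on a flat, soft sample."""
    return (4.0 / 3.0) * (youngs_pa / (1.0 - poisson ** 2)) \
        * np.sqrt(radius_m) * delta_m ** 1.5

# Synthetic force curve: a ~1 kPa cell probed with a 2 um bead, plus noise.
rng = np.random.default_rng(1)
delta = np.linspace(0.0, 500e-9, 50)                   # indentation, m
force = hertz_force(delta, 1e3, 2e-6) \
    + 2e-12 * rng.standard_normal(delta.size)          # ~2 pN measurement noise

# A linear least-squares fit of F against delta^(3/2) recovers the modulus.
slope = np.linalg.lstsq(delta[:, None] ** 1.5, force[:, None], rcond=None)[0][0, 0]
youngs_est = slope * 3.0 * (1.0 - 0.5 ** 2) / (4.0 * np.sqrt(2e-6))
print(f"estimated Young's modulus: {youngs_est:.0f} Pa")  # ~1000 Pa, as synthesized
```

Here the modulus is recovered from the slope of force versus δ^{3/2}; with real data the contact point would first have to be identified before such a fit.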
Carbon fiber-based systems Carbon fibers are mounted in glass capillaries and attached to a position-control device with a feedback control mechanism. The fibers then attach to cells and apply and record the active forces generated by the cell. This, however, may result in damage to the cells due to their attachment to the fibers, along with focus issues and potential bias. Passive methods Elastic substratum method This method stems from the classical theory of small-strain, plane-stress elasticity. The elastic substratum method allows the traction field to be inferred from the displacement field of the elastic substrate. This method is also referred to as traction force microscopy. Cells are incubated on a flexible silicone-sheet substrate. The cells then apply force onto the sheets, causing a wrinkling pattern that is analyzed through the number and pattern of the wrinkles. The downside to this method is the difficulty in transforming the patterns into a traction force map, leading to potential inaccuracy in identifying forces. Flexible sheets with embedded beads Latex or fluorescently tagged beads are embedded into the elastic substratum, and the positions of the beads are recorded over time. Cellular forces can be inferred from these displacements. The uncertainty with this method is the interdependence of the bead displacements. An improved technique, flexible sheets with micropatterned dots or grids, addresses this drawback by imprinting the dots onto the flexible sheet and analyzing the deformation of the grid relative to the original grid. The same assumption must still be made, however: that the forces originate at the measured location and do not propagate from another area. Micromachined cantilever beam A horizontal cantilever beam with an attachment pad and a well is used to measure cell traction forces as cells are seeded onto substrates and crawl over the cantilevers. These cantilevers measure force through cantilever deflection, stiffness, and stress gradient. Unlike the prior method, the assumption of no force propagation is not an issue; however, the cantilever beam can move in only one direction, so only a single axis is measured. An array of vertical microcantilevers is a technique that overcomes this limitation of the typical micromachined cantilever beam, offering two axes of measurement rather than a single horizontal beam. Although this improves scale and resolution, the approach is not suited to rapid mass production and is quite costly, and because the devices are delicate, minor damage requires the device to be reproduced. Applications and usage In the last half-century, several studies have been conducted using cell biomechanics, leading to greater biological control. The majority of these newly created devices are built either to provide greater insight into the human body’s reaction to disease or to attempt to eradicate the disease as a whole. Cardiovascular cell mechanics and microcirculation Quantitative passive biomechanical models have been developed to predict cell motion and deformation in the mammalian red blood cell, a cell with a membrane whose bending and shearing properties depend on strain, strain rate, and strain history, and with a cytoplasm that in the normal red cell is predominantly a Newtonian viscous fluid, within a living organism.
Newer (2007) constitutive models building on this one show that biomechanical analysis not only is a starting point for prediction of whole-cell and cell-suspension behavior, but also provides a reference point for molecular models of cell membranes that originate from the crystal structure of their parts. Several generations of biomechanical models have also been developed for white blood cells, the basis of immune surveillance and inflammation. These models have been proven to effectively predict cell-cell interactions in the microcirculation. Similar models have been created for endothelium, platelets and metastatic tumor cells. Biomechanical analyses of different cell types in the circulation have brought greater understanding of cell interactions in the circulation, making it possible to predict cell behavior in narrow vessels. As a result, several conditions, such as inflammation and cardiovascular disease, now have a biomechanical footing. Models able to predict basic aspects of organ perfusion have also been developed for organs such as the lung, heart, skeletal muscle, and connective tissue. Cell enrichment and separation From cell biomechanics, technology has been created to separate targeted cells. In disease diagnosis and detection, such technology is able to separate healthy cells from cancerous ones through differences in cell stiffness. Deformability-based enrichment devices are an example of this technology. These devices mostly deal with cancer cells from blood. Their main feature is their ability to identify whether cancer cells have separated from the tumor and entered the bloodstream as CTCs (circulating tumor cells). If they have, these devices have recently also become able to count the number of CTCs in a milliliter of blood. Using this value, medical professionals are able to determine the effectiveness of a chemotherapy treatment. More specific examples include the microfluidic device of Soojung Claire Hur, Clare Boothe Luce Assistant Professor of Mechanical Engineering at the Whiting School of Engineering, and the microfluidic device of Woodruff School of Mechanical Engineering Professor Gonghao Wang, both of which deal with breast cancer cells. Hur’s device enriches metastatic breast cancer cells by balancing deformability-induced and inertial lift forces, which push the larger metastatic cancer cells toward the centerline of a microchannel, away from the blood cells. Wang’s device separates out the stiffer, less invasive breast cancer cells using diagonal ridges through which only the more deformable, highly invasive breast cancer cells can squeeze. Deformability-based enrichment devices, however, are not exclusive to cancer cells. An example of this is Nanyang Technological University researcher Han Wei Hou’s microfluidic device, which separates and enriches diseased red blood cells from normal cells based on their stiffness, through margination. Infected red blood cells are generally stiffer, so in his device the stiffer red blood cells travel closer to the vessel wall while normal red blood cells stay in the center, allowing the deformed red blood cells to be collected via a separate outlet on the sides. Ongoing research concerns In the 1800s, cells were initially thought of as homogeneous gels, sols, or viscoelastic and plastic fluids.
Models have since been developed that treat the cell as a viscoelastic continuum, a combination of discrete mechanical elements, or a viscoelastic fluid within a dense meshwork, and these have proven highly accurate in experiments. Despite these improved and more refined models, flaws remain, as several experimental observations (for example, the soft glassy rheology phenomenon) refute the existing models. Thus, a time-dependent and predictive theoretical description of cell mechanics remains incomplete. It is also not fully understood whether mechanical phenomena are side products of biological processes or whether they are controlled at the genetic and physiological level through feedback loops and actuation and response pathways, given our existing knowledge of cell physiology and neurophysiology. References Biomechanics
Cell biomechanics
Physics
2,775
3,498,318
https://en.wikipedia.org/wiki/Scott%20Shields%20%28activist%29
Scott Shields (born 1978) is an American blogger and Democratic Party political activist. Born in Englewood, New Jersey in 1978, he grew up in Morris County, NJ, and graduated from Montville Township High School. He rose to prominence in 2005 as a front-page writer for MyDD covering, among other topics, health care politics, labor rights, and the 2005 New Jersey Governor's race. In 2006, Shields was employed by the campaign of US Senator Bob Menendez in New Jersey. During the campaign, his work was profiled in The Philadelphia Inquirer, The Record of Bergen County, and The New Yorker. He was named one of the year's "Top Political Operatives" by the website PoliticsNJ.com. His writing can also be seen on MyDD, Daily Kos, BlueJersey, and other prominent blogs. References 1978 births Living people American bloggers People from Englewood, New Jersey Montville Township High School alumni
Scott Shields (activist)
Technology
195
3,806,701
https://en.wikipedia.org/wiki/Leonard%20Rogers
Sir Leonard Rogers (18 January 1868 – 16 September 1962) was a founder member of the Royal Society of Tropical Medicine and Hygiene, and its President from 1933 to 1935. Biography Rogers studied at Plymouth College and worked at St Mary’s Hospital. He qualified M.R.C.S., L.R.C.P. (1891) and F.R.C.S. (1892) in London. Rogers had a wide range of interests in tropical medicine, from the study of kala-azar epidemics to sea snake venoms, but is best known for pioneering the treatment of cholera with hypertonic saline, which has saved a multitude of lives. He also championed Indian chaulmoogra oil as a treatment for Hansen's disease (leprosy). Rogers was one of the pioneers in setting up the Calcutta School of Tropical Medicine (CSTM) in Calcutta, India. In 1929, Rogers was awarded the Cameron Prize for Therapeutics of the University of Edinburgh. He was president of the 1919 session of the Indian Science Congress. Vivisection Rogers defended vivisection and criticized the arguments of the anti-vivisection movement. He authored a book, The Truth about Vivisection, in 1937. He was honorary treasurer of the Research Defence Society. Rogers played a leading part in obtaining a ruling from the High Court, sustained by the Appeal Court and House of Lords, that anti-vivisection organizations cannot be regarded as charities. References 1868 births 1962 deaths 19th-century English medical doctors 20th-century English medical doctors British parasitologists British people in colonial India Fellows of the Royal Society Founders of Indian schools and colleges Indian Medical Service officers Knights Commander of the Order of the Indian Empire Knights Commander of the Order of the Star of India Manson medal winners People educated at Plymouth College People from Helston Presidents of the Royal Society of Tropical Medicine and Hygiene Presidents of The Asiatic Society Vivisection activists
Leonard Rogers
Chemistry
400
67,793,103
https://en.wikipedia.org/wiki/Emmonsiosis
Emmonsiosis, also known as emergomycosis, is a systemic fungal infection that can affect the lungs, almost always affects the skin, and can become widespread. The lesions in the skin look like small red bumps and patches with a dip, ulcer and dead tissue in the centre. It is caused by the Emergomyces species, a novel dimorphic fungus previously classified under the genus Emmonsia. These fungi are found in soil and transmitted by breathing in their spores from the air. Inside the body the fungus converts to yeast-like cells, which then cause disease and invade beyond the lungs. Diagnosis is by skin biopsy and its appearance under the microscope. It is difficult to distinguish from histoplasmosis. Treatment is usually with amphotericin B. Emmonsiosis can be fatal. The disseminated type is more prevalent in South Africa, particularly in people with HIV. Signs and symptoms Generally, all cases have involvement of the skin. The lesions look like small red bumps and patches with a dip, ulcer and dead tissue in the centre. There may be several lesions and their distribution can be widespread. The lungs may be affected. Cause It is caused by the Emergomyces species, a novel dimorphic fungus previously classified under the genus Emmonsia. Following a revised taxonomy in 2017 based on DNA sequence analyses, five of these Emmonsia-like fungi have been placed under the separate genus Emergomyces. These include Emergomyces pasteurianus, Emergomyces africanus, Emergomyces canadensis, Emergomyces orientalis and Emergomyces europaeus. Emergomyces africanus was previously known as Emmonsia africanus, which has similar features to Histoplasma spp. and the family Ajellomycetaceae. The disease has been observed among people who have a weakened immune system; risk factors include HIV, organ transplantation and steroid use. Mechanism The fungus is found in soil and is released into the air. Transmission is by breathing in fungal spores from the air. Inside the body it converts to yeast-like cells, which then cause disease and invade beyond the lungs. In people with HIV, emmonsiosis has been associated with immune reconstitution inflammatory syndrome following the initiation of antiretroviral treatment. Diagnosis Diagnosis is by skin biopsy and its appearance under the microscope. Differential diagnosis Generally, it is difficult to distinguish from histoplasmosis. Other conditions that appear similar include tuberculosis, blastomycosis, sporotrichosis, chicken pox, Kaposi's sarcoma and drug reactions. Treatment Treatment usually includes amphotericin B. Prognosis It can be fatal. Epidemiology The disseminated type is more prevalent in South Africa, particularly in people with HIV. History The disease was thought to be a rare condition of the lung. Early cases may have been misdiagnosed as histoplasmosis. Other animals The genus Emmonsia can cause adiaspiromycosis, a lung disease in wild animals. References Mycosis-related cutaneous conditions Rare diseases Rare infectious diseases Fungal diseases
Emmonsiosis
Biology
663
45,489,980
https://en.wikipedia.org/wiki/Anatoly%20Kondratenko
Anatoly Kondratenko (Russian: Анатолий Кондратенко, Ukrainian: Анатолій Кіндратенко; 6 December 1935, Leningrad, Soviet Union) is a Professor of Physics at the University of Kharkiv. A theoretical physicist who has been working in the field of plasma and plasma electronics for over 50 years, Kondratenko has written over 270 papers. Under his supervision, 24 PhD and 7 Doctor of Science dissertations have been written. Kondratenko, along with his students and co-authors, pioneered the study of the electrodynamics of plasma waveguides. He explained from a theoretical point of view the existence of surface ion acoustic waves, cyclotron waves and eigenwaves at the plasma-metal interface. In addition to this, he laid the groundwork for the theory of plasma electronics. Kondratenko received his undergraduate degree in theoretical physics in 1958 in the former Soviet Union (University of Kharkiv). He obtained his PhD (1965) and his Doctor of Science degree (1971) at UFTI, Kharkiv. His work has been featured in numerous newspaper and magazine articles in the United States, Europe, Ukraine and the Soviet Union, and in many popular books. Kondratenko took and continues to take part in the political life of Ukraine. He was a principal member of the Kharkiv chapter of Rukh in the 1990s, and continues to co-chair the chapter. He is also the head editor of the all-Ukrainian children's newspaper "Zhuravlik". References 1935 births Living people Scientists from Kharkiv Soviet physicists Theoretical physicists 20th-century Ukrainian physicists National University of Kharkiv alumni Academic staff of the National University of Kharkiv School of Physics and Technology of University of Kharkiv alumni
Anatoly Kondratenko
Physics
402
42,673,974
https://en.wikipedia.org/wiki/Edible%20algae%20vaccine
Edible algae-based vaccination is a vaccination strategy under preliminary research that combines a genetically engineered sub-unit vaccine and an immunologic adjuvant in Chlamydomonas reinhardtii microalgae. Microalgae can be freeze-dried and administered orally. While spirulina is accepted as safe to consume, edible algal vaccines remain under basic research with unconfirmed safety and efficacy as of 2018. In 2003, the first documented algal-based vaccine antigen was reported, consisting of a foot-and-mouth disease antigen complexed with the cholera toxin subunit B, which delivered the antigen to digestive mucosal surfaces in mice. The vaccine was grown in C. reinhardtii algae and provided oral vaccination in mice, but was hindered by low vaccine antigen expression levels. Proteins expressed inside the chloroplast of algae (the most common site of genetic engineering and protein production) do not undergo glycosylation, a form of posttranslational modification. Glycosylation of proteins that are not naturally modified, like the malaria vaccine candidate pfs25, can occur in common expression systems like yeast. Notes References U.S. Food and Drug Administration (2002) GRAS Notification for Spirulina Microalgae Vaccines Edible algae
Edible algae vaccine
Biology
272
471,852
https://en.wikipedia.org/wiki/Dextroamphetamine
Dextroamphetamine (INN: dexamfetamine) is a potent central nervous system (CNS) stimulant and enantiomer of amphetamine that is primarily prescribed for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. It has also been used illicitly to enhance cognitive and athletic performance, and recreationally as an aphrodisiac and euphoriant. Dextroamphetamine is generally regarded as the prototypical stimulant. The amphetamine molecule exists as two enantiomers, levoamphetamine and dextroamphetamine. Dextroamphetamine is the dextrorotatory, or 'right-handed', enantiomer and exhibits more pronounced effects on the central nervous system than levoamphetamine. Pharmaceutical dextroamphetamine sulfate is available as both a brand name and generic drug in a variety of dosage forms. Dextroamphetamine is sometimes prescribed as the inactive prodrug lisdexamfetamine, which is converted into dextroamphetamine after absorption. Side effects of dextroamphetamine at therapeutic doses include elevated mood, decreased appetite, dry mouth, excessive grinding of the teeth, headache, increased heart rate, increased wakefulness or insomnia, anxiety, and irritability, among others. At excessively high doses, psychosis (i.e., hallucinations, delusions), addiction, and rapid muscle breakdown may occur. However, for individuals with pre-existing psychotic disorders, there may be a risk of psychosis even at therapeutic doses. Dextroamphetamine, like other amphetamines, elicits its stimulating effects via several distinct actions: it inhibits or reverses the transporter proteins for the monoamine neurotransmitters (namely the serotonin, norepinephrine and dopamine transporters), either via trace amine-associated receptor 1 (TAAR1) or in a TAAR1-independent fashion when there are high cytosolic concentrations of the monoamine neurotransmitters, and it releases these neurotransmitters from synaptic vesicles via vesicular monoamine transporter 2. It also shares many chemical and pharmacological properties with human trace amines, particularly phenethylamine and N-methylphenethylamine, the latter being an isomer of amphetamine produced within the human body. It is available as a generic medication. In 2022, mixed amphetamine salts (Adderall) was the 14th most commonly prescribed medication in the United States, with more than 34 million prescriptions. Uses Medical Dextroamphetamine is used to treat attention deficit hyperactivity disorder (ADHD) and narcolepsy (a sleep disorder), and is sometimes prescribed for depression and obesity. ADHD Narcolepsy Enhancing performance Recreational Dextroamphetamine is also used recreationally as a euphoriant and aphrodisiac, and, like other amphetamines, is used as a club drug for its energetic and euphoric high. Dextroamphetamine is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit, a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement and positively-valenced emotions, particularly ones involving pleasure. Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open Dexedrine capsules and crush the contents in order to insufflate (snort) it or subsequently dissolve it in water and inject it. 
Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets). The reason for using crushed spansules for insufflation and injection is evidently that the instant-release forms of the drug seen in tablet preparations often contain a sizable amount of inactive binders and fillers alongside the active d-amphetamine, such as dextrose. Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops. Contraindications Adverse effects Overdose Interactions Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of the enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with monoamine oxidase inhibitors (MAOIs), particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. Norepinephrine reuptake inhibitors (NRIs) like atomoxetine prevent norepinephrine release induced by amphetamines and have been found to reduce the stimulant, euphoriant, and sympathomimetic effects of dextroamphetamine in humans. Pharmacology Pharmacodynamics Amphetamine and its enantiomers have been identified as potent full agonists of trace amine-associated receptor 1 (TAAR1), a GPCR, discovered in 2001, that is important for regulation of monoaminergic systems in the brain. Activation of TAAR1 increases cAMP production via adenylyl cyclase activation and inhibits the function of the dopamine transporter, norepinephrine transporter, and serotonin transporter, as well as inducing the release of these monoamine neurotransmitters (effluxion). Amphetamine enantiomers are also substrates for a specific neuronal synaptic vesicle uptake transporter called VMAT2. When amphetamine is taken up by VMAT2, the vesicle releases (effluxes) dopamine, norepinephrine, and serotonin, among other monoamines, into the cytosol in exchange. Dextroamphetamine (the dextrorotatory enantiomer) and levoamphetamine (the levorotatory enantiomer) have identical pharmacodynamics, but their binding affinities to their biomolecular targets vary. Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces roughly three to four times more central nervous system (CNS) stimulation than levoamphetamine; however, levoamphetamine has slightly greater cardiovascular and peripheral effects. 
Related endogenous compounds Pharmacokinetics History, society, and culture Racemic amphetamine was first synthesized under the chemical name "phenylisopropylamine" in Berlin in 1887 by the Romanian chemist Lazăr Edeleanu. It was not widely marketed until 1932, when the pharmaceutical company Smith, Kline & French (now known as GlaxoSmithKline) introduced it in the form of the Benzedrine inhaler for use as a bronchodilator. Notably, the amphetamine contained in the Benzedrine inhaler was the liquid free-base, not a chloride or sulfate salt. Three years later, in 1935, the medical community became aware of the stimulant properties of amphetamine, specifically the dextroamphetamine isomer, and in 1937 Smith, Kline, and French introduced tablets under the brand name Dexedrine. In the United States, Dexedrine was approved to treat narcolepsy and attention deficit hyperactivity disorder (ADHD). In Canada, indications once included epilepsy and parkinsonism. Dextroamphetamine was marketed in various other forms in the following decades, primarily by Smith, Kline, and French, such as several combination medications including a mixture of dextroamphetamine and amobarbital (a barbiturate) sold under the brand name Dexamyl and, in the 1950s, an extended-release capsule (the "Spansule"). Preparations containing dextroamphetamine were also used in World War II as a treatment against fatigue. It quickly became apparent that dextroamphetamine and other amphetamines had a high potential for misuse, although they were not heavily controlled until 1970, when the Comprehensive Drug Abuse Prevention and Control Act was passed by the United States Congress. Dextroamphetamine, along with other sympathomimetics, was eventually classified as Schedule II, the most restrictive category possible for a drug with a government-sanctioned, recognized medical use. Internationally, it has been available under the names AmfeDyn (Italy), Curban (US), Obetrol (Switzerland), Simpamina (Italy), Dexedrine/GSK (US & Canada), Dexedrine/UCB (United Kingdom), Dextropa (Portugal), and Stild (Spain). It became popular on the mod scene in England in the early 1960s, and carried through to the Northern Soul scene in the north of England to the end of the 1970s. In October 2010, GlaxoSmithKline sold the rights for Dexedrine Spansule to Amedra Pharmaceuticals (a subsidiary of CorePharma). The U.S. Air Force uses dextroamphetamine as one of its "go pills", given to pilots on long missions to help them remain focused and alert. Conversely, "no-go pills" are used after the mission is completed, to combat the effects of the mission and "go-pills". The Tarnak Farm incident was linked by media reports to the use of this drug by fatigued pilots on long missions. The military did not accept this explanation, citing the lack of similar incidents. Newer stimulant medications or wakefulness-promoting agents with different side effect profiles, such as modafinil, are being investigated and sometimes issued for this reason. Formulations Transdermal Dextroamphetamine Patches Dextroamphetamine is available as a transdermal patch containing dextroamphetamine base under the brand name Xelstrym. Dextroamphetamine sulfate In the United States, immediate release (IR) formulations of dextroamphetamine sulfate are available generically as 5 mg and 10 mg tablets, marketed by Barr (Teva Pharmaceutical Industries), Mallinckrodt Pharmaceuticals, Wilshire Pharmaceuticals, Aurobindo Pharmaceutical USA and CorePharma. 
Previous IR tablets sold under the brand names Dexedrine and Dextrostat have been discontinued, but in 2015 IR tablets became available under the brand name Zenzedi, offered as 2.5 mg, 5 mg, 7.5 mg, 10 mg, 15 mg, 20 mg and 30 mg tablets. Dextroamphetamine sulfate is also available as a controlled-release (CR) capsule preparation in strengths of 5 mg, 10 mg, and 15 mg under the brand name Dexedrine Spansule, with generic versions marketed by Barr and Mallinckrodt. A bubblegum-flavored oral solution is available under the brand name ProCentra, manufactured by FSC Pediatrics, which is designed to be an easier method of administration in children who have difficulty swallowing tablets; each 5 mL contains 5 mg dextroamphetamine. The conversion factor from dextroamphetamine sulfate to amphetamine free base is 0.728. In Australia, dexamfetamine is available in bottles of 100 instant-release 5 mg tablets as a generic drug, or slow-release dextroamphetamine preparations may be compounded by individual chemists. In the United Kingdom, it is available in 5 mg instant-release sulfate tablets under the generic name dexamfetamine sulfate as well as 10 mg and 20 mg strength tablets under the brand name Amfexa. It is also available in generic dexamfetamine sulfate 5 mg/ml oral sugar-free syrup. The brand name Dexedrine was available in the United Kingdom prior to UCB Pharma disinvesting the product to another pharmaceutical company (Auden Mckenzie). Lisdexamfetamine Dextroamphetamine is the active metabolite of the prodrug lisdexamfetamine (L-lysine-dextroamphetamine), available under the brand name Vyvanse (Elvanse in the European market, Venvanse in the Brazilian market) as lisdexamfetamine dimesylate. Dextroamphetamine is liberated from lisdexamfetamine enzymatically following contact with red blood cells. The conversion is rate-limited by the enzyme, which prevents high blood concentrations of dextroamphetamine and reduces lisdexamfetamine's drug liking and abuse potential at clinical doses. Vyvanse is marketed as once-a-day dosing as it provides a slow release of dextroamphetamine into the body. Vyvanse is available as capsules and chewable tablets, in seven strengths: 10 mg, 20 mg, 30 mg, 40 mg, 50 mg, 60 mg, and 70 mg. The conversion factor from lisdexamfetamine dimesylate (Vyvanse) to dextroamphetamine base is 29.5%. Adderall Another pharmaceutical that contains dextroamphetamine is commonly known by the brand name Adderall. It is available as immediate release (IR) tablets and extended release (XR) capsules. Adderall contains equal amounts of four amphetamine salts: One-quarter racemic (d,l-)amphetamine aspartate monohydrate One-quarter dextroamphetamine saccharate One-quarter dextroamphetamine sulfate One-quarter racemic (d,l-)amphetamine sulfate Adderall has a total amphetamine base equivalence of 63%. While the ratio of dextroamphetamine salts to levoamphetamine salts is 3:1, the amphetamine base content is 75.9% dextroamphetamine and 24.1% levoamphetamine. Research Schizophrenia Dextroamphetamine reduces the negative symptoms of schizophrenia, and has been shown to enhance the effects of auditory discrimination training in schizophrenic patients. 
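The salt-to-base arithmetic quoted above can be made concrete with a short sketch. The conversion factors are the ones stated in this section (0.728 for dextroamphetamine sulfate, 29.5% for lisdexamfetamine dimesylate, 63% total base equivalence for Adderall's mixed salts); the function and dictionary names are illustrative assumptions, not part of any pharmacological library.

```python
# Illustrative arithmetic only; the conversion factors are those quoted above.
# Names are hypothetical and chosen for readability.

CONVERSION_TO_BASE = {
    "dextroamphetamine sulfate": 0.728,    # fraction of labeled dose that is amphetamine base
    "lisdexamfetamine dimesylate": 0.295,  # ~29.5% of labeled dose is dextroamphetamine base
    "adderall mixed salts": 0.63,          # total amphetamine base equivalence of ~63%
}

def base_equivalent_mg(salt: str, labeled_dose_mg: float) -> float:
    """Approximate amphetamine-base content of a labeled salt dose."""
    return labeled_dose_mg * CONVERSION_TO_BASE[salt]

if __name__ == "__main__":
    # A 30 mg lisdexamfetamine capsule carries roughly 8.85 mg dextroamphetamine base,
    # and a 10 mg dextroamphetamine sulfate tablet roughly 7.28 mg amphetamine base.
    print(round(base_equivalent_mg("lisdexamfetamine dimesylate", 30), 2))
    print(round(base_equivalent_mg("dextroamphetamine sulfate", 10), 2))
```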
Notes Image legend Reference notes References External links Amphetamine Anorectics Aphrodisiacs Antihypotensive agents Attention deficit hyperactivity disorder management Drugs acting on the nervous system Enantiopure drugs Ergogenic aids Euphoriants Excitatory amino acid reuptake inhibitors Human drug metabolites Monoaminergic activity enhancers Nootropics Norepinephrine-dopamine releasing agents Phenethylamines Pro-motivational agents Stimulants Substituted amphetamines TAAR1 agonists VMAT inhibitors Wakefulness-promoting agents World Anti-Doping Agency prohibited substances
Dextroamphetamine
Chemistry
3,358
13,547,663
https://en.wikipedia.org/wiki/Jacobsthal%20number
In mathematics, the Jacobsthal numbers are an integer sequence named after the German mathematician Ernst Jacobsthal. Like the related Fibonacci numbers, they are a specific type of Lucas sequence, the one for which P = 1 and Q = −2, and are defined by a similar recurrence relation: in simple terms, the sequence starts with 0 and 1, then each following number is found by adding the number before it to twice the number before that. The first Jacobsthal numbers are: 0, 1, 1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365, 2731, 5461, 10923, 21845, 43691, 87381, 174763, 349525, … A Jacobsthal prime is a Jacobsthal number that is also prime. The first Jacobsthal primes are: 3, 5, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, 201487636602438195784363, 845100400152152934331135470251, 56713727820156410577229101238628035243, … Jacobsthal numbers Jacobsthal numbers are defined by the recurrence relation J_0 = 0, J_1 = 1, and J_n = J_{n−1} + 2J_{n−2} for n ≥ 2. The next Jacobsthal number is also given by the recursion formula J_{n+1} = 2J_n + (−1)^n, or by J_{n+1} = 2^n − J_n. The defining recurrence relation is also satisfied by the powers of 2, since 2^n = 2^{n−1} + 2·2^{n−2}. The Jacobsthal number at a specific point in the sequence may be calculated directly using the closed-form equation J_n = (2^n − (−1)^n)/3. The generating function for the Jacobsthal numbers is x / ((1 + x)(1 − 2x)). The sum of the reciprocals of the Jacobsthal numbers is approximately 2.7186, slightly larger than e. The Jacobsthal numbers can be extended to negative indices using the recurrence relation or the explicit formula, giving J_{−n} = (−1)^{n+1} J_n / 2^n. Identities relating the Jacobsthal numbers to the nth Fibonacci number F_n also hold. Jacobsthal–Lucas numbers Jacobsthal–Lucas numbers represent the complementary Lucas sequence, with the same P = 1 and Q = −2. They satisfy the same recurrence relation as Jacobsthal numbers but have different initial values: j_0 = 2 and j_1 = 1, with j_n = j_{n−1} + 2j_{n−2} for n ≥ 2. The next Jacobsthal–Lucas number also satisfies j_{n+1} = 2j_n − 3(−1)^n. The Jacobsthal–Lucas number at a specific point in the sequence may be calculated directly using the closed-form equation j_n = 2^n + (−1)^n. The first Jacobsthal–Lucas numbers are: 2, 1, 5, 7, 17, 31, 65, 127, 257, 511, 1025, 2047, 4097, 8191, 16385, 32767, 65537, 131071, 262145, 524287, 1048577, … . Jacobsthal Oblong numbers Jacobsthal Oblong numbers are products of consecutive Jacobsthal numbers, J_n · J_{n+1}. The first Jacobsthal Oblong numbers are: 0, 1, 3, 15, 55, 231, 903, 3655, 14535, 58311, … References Eponymous numbers in mathematics Integer sequences Recurrence relations
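A minimal sketch in Python, checking the recurrence and the closed forms above against the listed sequences:

```python
# Verifies the recurrence J(n) = J(n-1) + 2*J(n-2), the closed form
# J(n) = (2**n - (-1)**n) / 3, the Jacobsthal-Lucas closed form
# j(n) = 2**n + (-1)**n, and the oblong numbers J(n)*J(n+1).

def jacobsthal(n: int) -> int:
    a, b = 0, 1                # J(0), J(1)
    for _ in range(n):
        a, b = b, b + 2 * a    # shift the window one step forward
    return a

def jacobsthal_lucas(n: int) -> int:
    a, b = 2, 1                # j(0), j(1); same recurrence, different seeds
    for _ in range(n):
        a, b = b, b + 2 * a
    return a

assert [jacobsthal(n) for n in range(10)] == [0, 1, 1, 3, 5, 11, 21, 43, 85, 171]
assert all(jacobsthal(n) == (2**n - (-1)**n) // 3 for n in range(64))
assert all(jacobsthal_lucas(n) == 2**n + (-1)**n for n in range(64))
assert [jacobsthal(n) * jacobsthal(n + 1) for n in range(6)] == [0, 1, 3, 15, 55, 231]
```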
Jacobsthal number
Mathematics
634
50,930,035
https://en.wikipedia.org/wiki/V419%20Cephei
V419 Cephei (BD+59°2342 or HIP 104719) is an irregular variable star in the constellation of Cepheus with an apparent magnitude that varies between 6.54 and 6.89. Distance The Hipparcos-measured parallax is not well enough constrained to evaluate its distance. Based on kinematic analysis, a most likely distance has been derived, and the Gaia Data Release 2 parallax is consistent with this distance. It is a member of the stellar association Cepheus OB2-A. Characteristics V419 Cephei is a red supergiant of spectral type M2 Ib with an effective temperature around 3,700 K and a correspondingly large estimated radius. The K-band angular diameter has been measured at 5.90 ± 0.70 milliarcseconds, which leads to a figure not much higher, although the uncertainty in its distance must also be taken into account. If placed at the Sun's location, it would engulf the orbits of Mercury, Venus, Earth, Mars, and roughly half of the asteroid belt. Published values for the mass of V419 Cephei vary considerably, but lie above the limit beyond which stars end their lives as supernovae. The life of such massive stars is very short. Despite its advanced evolutionary state, V419 Cephei is only 10 million years old. The variability of the brightness of the star was discovered when the Hipparcos data were analyzed. It was given its variable star designation, V419 Cephei, in 1999. Classified as an irregular variable star of type LC, V419 Cephei's brightness varies between magnitudes 6.54 and 6.89 with no apparent periodicity. References M-type supergiants Cepheus (constellation) Cephei, V419 Slow irregular variables 202380 104719 Durchmusterung objects
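As a sketch of the arithmetic linking angular size to physical size, the small-angle relation gives the linear radius as half the angular diameter times the distance. The 5.90 mas figure is the one quoted above; the distance used below is a purely hypothetical placeholder, since the star's distance estimate is only loosely constrained.

```python
import math

# Small-angle conversion from angular diameter to linear radius.
# The 1000 pc distance is a hypothetical value chosen only to
# illustrate the computation, not a measured quantity.

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)  # milliarcseconds -> radians
PC_TO_AU = 206_264.8                        # parsecs -> astronomical units
RSUN_IN_AU = 0.00465047                     # solar radius in AU

def radius_in_rsun(ang_diameter_mas: float, distance_pc: float) -> float:
    """Linear radius in solar radii from angular diameter and distance."""
    theta_rad = ang_diameter_mas * MAS_TO_RAD
    radius_au = (theta_rad / 2) * distance_pc * PC_TO_AU
    return radius_au / RSUN_IN_AU

# At an assumed 1000 pc, a 5.90 mas disk corresponds to a radius of about
# 2.95 AU (~630 solar radii), which would indeed reach past the orbit of
# Mars and into the asteroid belt.
print(round(radius_in_rsun(5.90, 1000.0)))
```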
V419 Cephei
Astronomy
393
36,128,856
https://en.wikipedia.org/wiki/Open%20Smart%20Grid%20Protocol
The Open Smart Grid Protocol (OSGP) is a family of specifications published by the European Telecommunications Standards Institute (ETSI) used in conjunction with the ISO/IEC 14908 control networking standard for smart grid applications. OSGP is optimized to provide reliable and efficient delivery of command and control information for smart meters, direct load control modules, solar panels, gateways, and other smart grid devices. With over 5 million OSGP-based smart meters and devices deployed worldwide, it is one of the most widely used smart meter and smart grid device networking standards. Protocol layers and features OSGP follows a modern, structured approach based on the OSI protocol model to meet the evolving challenges of the smart grid. At the application layer, ETSI TS 104 001 provides table-oriented data storage based, in part, on the ANSI C12.19 / MC12.19 / 2012 / IEEE Std 1377 standards for Utility Industry End Device Data Tables, and on ANSI C12.18 / MC12.18 / IEEE Std 1701, the standard Protocol Specification for ANSI Type 2 Optical Port, for its services and payload encapsulation. This provides a standard data and command system that covers not only smart meters and related data but also allows general-purpose extension to other smart grid devices. ETSI TS 104 001 is an updated version of the application layer specification that incorporates enhanced security features, including AES 128 encryption, and replaces the previous ETSI GS OSG 001 version. OSGP is designed to be very bandwidth-efficient, enabling it to offer high performance and low cost using bandwidth-constrained media such as the power line. For example, just as SQL provides an efficient and flexible database query language for enterprise applications, OSGP provides an efficient and flexible query language for smart grid devices. As with SQL, OSGP supports reading and writing of single attributes, multiple elements, or even entire tables. As another example, OSGP includes capabilities for an adaptive, directed meshing system that enables any OSGP device to serve as a message repeater, further optimizing bandwidth use by repeating only those packets that need to be repeated. OSGP also includes authentication and encryption for all exchanges to protect the integrity and privacy of data as is required in the smart grid. The intermediate layers of the OSGP stack leverage the ISO/IEC 14908 control networking standard, a field-proven, multi-application control networking standard widely used in smart grid, smart city, and smart building applications, with more than 100 million devices deployed worldwide. ISO/IEC 14908 is highly optimized for efficient, reliable, and scalable control networking applications. The low overhead of ISO/IEC 14908 enables it to deliver high performance without requiring high bandwidth. Since it builds on ISO/IEC 14908, which is media-independent, OSGP can be used with any current or future physical media. OSGP today uses ETSI TS 103 908 (PowerLine Telecommunications) as its physical layer. Although a new standard, products that conform to ETSI TS 103 908 prior to its formal adoption have been on the market for many years, with over 40 million smart meter and grid devices deployed. In 2020, IEC approved and published an International Standard (IEC 62056-8-8) defining the OSGP Communication Profile for the DLMS/COSEM suite of standards. 
In addition, CEN/CENELEC approved and published a standard (CLC/TS 50586) for OSGP that describes its data interface model, application-level communication, management functionalities, and security mechanism for the exchange of data with smart-grid devices. Both of these standards were part of the outcomes of the EU Smart Metering Mandate M/441 and its decision identifying OSGP as one of the protocols that can be used for Smart Metering deployments in Europe. It is also important to define interoperability between information systems and applications, and this needs to be ensured independently of the physical layers. This is achieved using NTA 8150, which defines APIs based on higher-level web services protocols (e.g., SOAP and XML). NTA 8150 consists of two parts: 1) System Software API, a description of the architecture and the API for AMI; 2) API usage per use case, a description of specific AMI use cases, as examples. Standards OSGP is built upon the following open standards. ETSI Technical specification TS 104 001: Open Smart Grid Protocol. Produced by the ETSI Technical Committee for Powerline Telecommunications (TC PLT), this application layer protocol can be used with multiple communication media. ISO/IEC 14908-1: Information technology—Control network protocol—Part 1: Protocol stack. Published through ISO/IEC JTC 1/SC 6, this standard specifies a multi-purpose control network protocol stack optimized for smart grid, smart building, and smart city applications. ETSI Technical specification TS 103 908: Powerline Telecommunications (PLT); BPSK Narrow Band Power Line Channel for Smart Metering Applications. This specification defines a high-performance narrowband powerline channel for control networking in the smart grid that can be used with multiple smart grid devices. It was produced by the ETSI Technical Committee for Powerline Telecommunications (TC PLT). IEC 62056-8-8: This standard specifies an OSGP and ISO/IEC 14908 communication profile based on the OSGP lower layer stacks as part of the IEC 62056 DLMS/COSEM Suite series. CLC/TS 50586: This standard describes the OSGP data interface model, application-level communication, management functionalities, and security mechanism for the exchange of data with smart-grid devices. ANSI C12.19: The concepts, data types, and basic tables and procedures of this standard (and of its companion MC12.19 and IEEE Std 1377 standards) were mapped into Clause 6, "OSGP Device data representation", and Normative Annex A, "Basic Tables", in ETSI Technical specification TS 104 001. ANSI C12.18: The services and payload encapsulation of this standard (and of its companion MC12.18 and IEEE Std 1701 standards) were mapped into the clause "Basic OSGP services" in ETSI Technical specification TS 104 001. NTA 8150 Part 1: This standard specifies the System Software API, a description of the architecture and the API for AMI. NTA 8150 Part 2: This standard specifies the API usage per use case, a description of specific AMI use cases, as examples. OSGP is supported and maintained by the OSGP Alliance (formerly known as Energy Services Network Association), a non-profit corporation composed of utilities, manufacturers and system integrators. 
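As a purely illustrative sketch of the table-oriented, SQL-like read model described above, the fragment below frames a device read as a (table, offset, length) selection. All names, table numbers, and the byte layout are hypothetical assumptions for illustration; they do not reproduce the actual OSGP or ANSI C12.19 wire format.

```python
from dataclasses import dataclass

# Hypothetical illustration of a table-oriented read: a single attribute,
# several elements, or a whole table differ only in the offset/length
# selection, much like a projection in SQL. Not the real OSGP encoding.

@dataclass
class TableReadRequest:
    table_id: int    # which device data table to address (hypothetical numbering)
    offset: int      # byte offset into the table, for partial reads
    length: int      # number of bytes requested

def encode(req: TableReadRequest) -> bytes:
    """Pack the selection into a compact fixed-width payload (illustrative only)."""
    return (req.table_id.to_bytes(2, "big")
            + req.offset.to_bytes(3, "big")
            + req.length.to_bytes(2, "big"))

single_attribute = TableReadRequest(table_id=23, offset=0, length=4)
entire_table = TableReadRequest(table_id=23, offset=0, length=0xFFFF)
print(encode(single_attribute).hex(), encode(entire_table).hex())
```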
See also Distributed generation Smart grid Smart meter Virtual power plant References External links OSGP Alliance European Telecommunications Standards Institute Energy Services Network Association LonWorks - Echelon's connection with OSGP International Organization for Standardization Frost & Sullivan Recognizes OSGP market share Communications protocols ETSI Open standards
Open Smart Grid Protocol
Technology
1,412
65,253,869
https://en.wikipedia.org/wiki/New%20Towns%20for%20Old
"New Towns for Old" is a 1942 British promotional short film promoting the clearance of old historic "slum towns" and replacement with "new towns". It promotes the then new concept of town planning. It was directed by John Eldridge and scripted by the poet Dylan Thomas. The film was produced by the Ministry of Information and was one of the few wartime documentary to focus on a topic unrelated to the war. The title alludes to a line from Aladdin: New Lamps for Old. Synopsis Two civil servants wander around various vantage points looking at the fictional town of Smokedale. One with a bowler hat and carrying an umbrella has a refined London accent; the other in a trilby and smoking a pipe has a Yorkshire accent. The footage itself is largely in and around Sheffield and Manchester and the civic functions discussed are in Manchester Town Hall. They proudly look at the new housing completed so far. The film discusses true figures from the replanning of the Manchester slums from 1922 onward: 26,000 condemned; 14,000 demolished (plus "some help from Hitler"); 30,000 new houses planned. It stresses the need for a Green Belt around each town. Schools, hospitals and play areas are to be part of the plan. Old industries are discussed (and were in truth the hardest issue to address). The solution for relocating industries is not fully explained, and simply states that new areas will be zoned for industrial use - away from the housing. It explains how war has delayed the plan. It states "we've got to rebuild all our big towns". The men then point to the camera and address the viewer, saying it is up to You. Remember it's your town. Later Recognition The title was repeated in the 1962 guide to town planning "New Towns for Old" by J. B. Cullingworth. References External links New Towns for Old at Screenonline 1942 films British short documentary films Urban planning Films directed by John Eldridge 1940s British films
New Towns for Old
Engineering
406
1,797,388
https://en.wikipedia.org/wiki/Glycerophospholipid
Glycerophospholipids or phosphoglycerides are glycerol-based phospholipids. They are the main component of biological membranes in eukaryotic cells. They are a type of lipid whose composition affects membrane structure and properties. Two major classes are known: those of bacteria and eukaryotes, and a separate family for archaea. Structures Glycerophospholipids are derived from glycerol-3-phosphate in a de novo pathway. The term glycerophospholipid signifies any derivative of glycerophosphoric acid that contains at least one O-acyl, or O-alkyl, or O-alk-1'-enyl residue attached to the glycerol moiety. The phosphate group forms an ester linkage to the glycerol. The long-chained hydrocarbons are typically attached through ester linkages in bacteria/eukaryotes and by ether linkages in archaea. In bacteria and eukaryotes, the lipids consist of diesters commonly of C16 or C18 fatty acids. These acids are straight-chained and, especially for the C18 members, can be unsaturated. For archaea, the hydrocarbon chains have chain lengths of C10, C15, C20 etc., since they are derived from isoprene units. These chains are branched, with one methyl substituent per C5 subunit. These chains are linked to the glycerol phosphate by ether linkages. The two hydrocarbon chains attached to the glycerol are hydrophobic, while the polar head, which mainly consists of the phosphate group attached to the third carbon of the glycerol backbone, is hydrophilic. This dual characteristic leads to the amphipathic nature of glycerophospholipids. They are usually organized into a bilayer in membranes, with the polar hydrophilic heads sticking outwards to the aqueous environment and the non-polar hydrophobic tails pointing inwards. Glycerophospholipids comprise many diverse species which usually differ slightly in structure. The most basic structure is a phosphatidate. This species is an important intermediate in the synthesis of many phosphoglycerides. The presence of an additional group attached to the phosphate allows for many different phosphoglycerides. By convention, structures of these compounds show the 3 glycerol carbon atoms vertically with the phosphate attached to carbon atom number three (at the bottom). Plasmalogens and phosphatidates are examples. Nomenclature and stereochemistry In general, glycerophospholipids use an "sn" notation, which stands for stereospecific numbering. When the letters "sn" appear in the nomenclature, by convention the hydroxyl group of the second carbon of glycerol (sn-2) is on the left in a Fischer projection. The numbering follows that of Fischer projections, sn-1 being the carbon at the top and sn-3 the one at the bottom. The advantage of this particular notation is that the spatial configuration (D or L) of the glycero-molecule is determined intuitively by the residues on the positions sn-1 and sn-3. For example, sn-glycero-3-phosphoric acid and sn-glycero-1-phosphoric acid are enantiomers. Most vegetable oils have unsaturated fatty acids in the sn-2 position, with saturated fatty acids in the sn-1 and/or sn-3 position. Animal fats more often have saturated fatty acids in the sn-2 position, with unsaturated fatty acids in the sn-1 and/or sn-3 position. Examples Plasmalogens Plasmalogens are a type of phosphoglyceride. The first carbon of glycerol has a hydrocarbon chain attached via an ether, not ester, linkage. The linkages are more resistant to chemical attack than ester linkages are. The second (central) carbon atom has a fatty acid linked by an ester. 
The third carbon links to an ethanolamine or choline by means of a phosphate ester. These compounds are key components of the membranes of muscles and nerves. Phosphatidates Phosphatidates are lipids in which the first two carbon atoms of the glycerol are fatty acid esters, and the third is a phosphate ester. The phosphate serves as a link to another alcohol, usually ethanolamine, choline, serine, or a carbohydrate. The identity of the alcohol determines the subcategory of the phosphatidate. There is a negative charge on the phosphate and, in the case of choline or serine, a positive quaternary ammonium ion. (Serine also has a negative carboxylate group.) The presence of charges gives a "head" with an overall charge. The phosphate ester portion ("head") is hydrophilic, whereas the remainder of the molecule, the fatty acid "tail", is hydrophobic. These are important components for the formation of lipid bilayers. Phosphatidylethanolamines, phosphatidylcholines, and other phospholipids are examples of phosphatidates. Phosphatidylcholines Phosphatidylcholines are lecithins. Choline is the alcohol, with a positively charged quaternary ammonium, bound to the phosphate, which carries a negative charge. Lecithins are present in all living organisms. An egg yolk has a high concentration of lecithins, which are commercially important as an emulsifying agent in products such as mayonnaise. Lecithins are also present in brain and nerve tissue. Phosphatidylinositol Phosphatidylinositol makes up a small component of the cytosolic side of eukaryotic cell membranes and gives molecules a negative charge. Its importance lies in its role in activating sensory receptors that correlate with taste functions. Phosphatidylserine Phosphatidylserine is important in cell signaling, specifically apoptosis. Exposed phosphatidylserine is also exploited for cell entry via apoptotic mimicry. The structure of this lipid differs between plants and animals with regard to fatty acid composition. In addition, phosphatidylserine plays an important role in the human brain, as it makes up 13–15% of the phospholipids in the human cerebral cortex. This lipid is found in a wide range of places. For example, the human diet supplies about 130 mg of phosphatidylserine. This has been said to have a positive impact on the brain, as it may help reduce stress and improve memory. Sphingomyelin Sphingomyelin is a type of sphingolipid, which contains a backbone of sphingoid bases. It can be found in the myelin sheath of nerve cell axons in animal cell membranes. Sphingomyelin can be found in eggs or bovine brain. This sphingolipid is synthesized at the endoplasmic reticulum and is enriched at the plasma membrane, with a larger concentration on the outside. Other phospholipids There are many other phospholipids, some of which are glycolipids. The glycolipids include phosphatidyl sugars where the alcohol functional group is part of a carbohydrate. Phosphatidyl sugars are present in plants and certain microorganisms. A carbohydrate is very hydrophilic due to the large number of hydroxyl groups present. Uses Functions and use in membranes Glycerophospholipids are the main structural component of biological membranes. Their amphipathic nature drives the formation of the lipid bilayer structure of membranes. The cell membrane seen under the electron microscope consists of two identifiable layers, or "leaflets", each of which is made up of an ordered row of glycerophospholipid molecules. 
The composition of each layer can vary widely depending on the type of cell. For example, in human erythrocytes the cytosolic side (the side facing the cytosol) of the plasma membrane consists mainly of phosphatidylethanolamine, phosphatidylserine, and phosphatidylinositol. By contrast, the exoplasmic side (the side on the exterior of the cell) consists mainly of phosphatidylcholine and sphingomyelin, a type of sphingolipid. Each glycerophospholipid molecule consists of a small polar head group and two long hydrophobic chains. In the cell membrane, the two layers of phospholipids are arranged as follows: the hydrophobic tails point to each other and form a fatty, hydrophobic center; the ionic head groups are placed at the inner and outer surfaces of the cell membrane. Apart from their function in cell membranes, they function in other cellular processes such as signal induction and transport. In regard to signaling, they provide the precursors for prostaglandins and leukotrienes. It is their specific distribution and catabolism that enables them to carry out the biological response processes listed above. Their role as storage centers for secondary messengers in the membrane is also a contributing factor to their ability to act as transporters. They also influence protein function. For example, they are important constituents of lipoproteins (soluble proteins that transport fat in the blood) and hence affect their metabolism and function. Use in emulsification Glycerophospholipids can also act as an emulsifying agent to promote dispersal of one substance into another. This is sometimes used in candy making and ice-cream making. Presence in the brain Neural membranes contain several classes of glycerophospholipids which turn over at different rates with respect to their structure and localization in different cells and membranes. There are three major classes, namely: 1-alkyl-2-acyl glycerophospholipids, 1,2-diacyl glycerophospholipids and plasmalogens. The main function of these classes of glycerophospholipids in the neural membranes is to provide stability, permeability and fluidity through specific alterations in their compositions. The glycerophospholipid composition of neural membranes greatly alters their functional efficacy. The length of the glycerophospholipid acyl chain and the degree of saturation are important determinants of many membrane characteristics, including the formation of lateral domains that are rich in polyunsaturated fatty acids. Receptor-mediated degradation of glycerophospholipids by phospholipases A(1), A(2), C, and D results in the generation of second messengers, such as prostaglandins, eicosanoids, platelet activating factor and diacylglycerol. Thus, neural membrane phospholipids are a reservoir for second messengers. They are also involved in apoptosis, modulation of the activities of transporters, and membrane-bound enzymes. Marked alterations in neural membrane glycerophospholipid composition have been reported to occur in neurological disorders. These alterations result in changes in membrane fluidity and permeability. These processes, along with the accumulation of lipid peroxides and compromised energy metabolism, may be responsible for the neurodegeneration observed in neurological disorders. Metabolism The metabolism of glycerophospholipids differs in eukaryotes, tumor cells, and prokaryotes. Synthesis in prokaryotes involves the synthesis of the glycerophospholipid phosphatidic acid and of the polar head groups. 
Phosphatidic acid synthesis in eukaryotes is different; there are two routes, one proceeding via CDP-diacylglycerol and the other, via diacylglycerol, toward phosphatidylcholine and phosphatidylethanolamine. Glycerophospholipids are generally metabolized in several steps with different intermediates. The very first step in this metabolism involves the addition or transfer of the fatty acid chains to the glycerol backbone to form the first intermediate, lysophosphatidic acid (LPA). LPA then becomes acylated to form the next intermediate, phosphatidic acid (PA). PA can be dephosphorylated, leading to the formation of diacylglycerol, which is essential in the synthesis of phosphatidylcholine (PC). PC is one of the many species of glycerophospholipids. In a pathway called the Kennedy pathway, the polar heads are added to complete the formation of the entire structure, consisting of the polar head region, the two fatty acid chains and the phosphate group attached to the glycerol backbone. In this Kennedy pathway, choline is converted to CDP-choline, which drives the transfer of the polar head groups to complete the formation of PC. PC can then be further converted to other species of glycerophospholipids such as phosphatidylserine (PS) and phosphatidylethanolamine (PE). See also Biological membrane Phospholipid Glycerolipid 1,2-Dioleoyl-sn-glycerophosphoethanolamine References External links Diagram at uca.edu Phospholipids Membrane biology Glycerol esters
Glycerophospholipid
Chemistry
2,920
62,441,095
https://en.wikipedia.org/wiki/Herbivore%20effects%20on%20plant%20diversity
Herbivores' effects on plant diversity vary across environmental changes; herbivores can increase plant diversity or decrease it. Loss of plant diversity due to climate change can also affect herbivore and plant community relationships. Herbivores are crucial in determining the distribution, abundance, and diversity of plant populations. Research indicates that by consuming large amounts of plant biomass, herbivores can directly reduce the local abundance of plants, thereby affecting the spatial distribution of different plant species. For example, the impact of herbivory is typically more pronounced in grassland species than in woodland forbs, especially in environments that undergo frequent disturbances. Dominant species effect It was long thought that herbivores increase plant diversity by preventing dominance. Dominant species tend to exclude subordinate species through competitive exclusion. However, the effects on plant diversity caused by variation in dominance can be beneficial or negative. Herbivores do increase biodiversity by consuming dominant plant species, but they can also prefer eating subordinate species, according to plants' palatability and quality. Plant palatability also heavily affects which plant species becomes dominant and which becomes subordinate, as palatability is a major factor in whether herbivores choose to consume a certain plant more or less, and hence affects its course of growth. In addition to the preferences of herbivores, herbivores' effects on plant diversity are also influenced by other factors: the defense trade-off theory, the predator-prey interaction, and inner traits of the environment and herbivores. Defense trade-off theory effect One way that plants can differ in their susceptibility to herbivores is through defense trade-offs. Defense trade-off theory is commonly seen as a fundamental theory for the maintenance of ecological evenness. Plants can make a trade-off in resource allocation, such as between defense and growth. The effect of defenses against herbivores on plant diversity can vary in different situations; defenses can be neutral, detrimental or beneficial for plant fitness. Defense trade-offs can be used to change the plant phenotype in response to environmental challenges (such as herbivory). Even in the absence of defensive trade-offs, herbivores may still be able to increase plant diversity, for example when herbivores prefer subordinate species rather than dominant species. The predator-prey interaction also matters, especially through "top-down" regulation. One of the consequences of high grazing pressure is that plant productivity is reduced due to photosynthetic tissue removal, thus reducing plant richness and/or abundance in the ecosystem. Herbivore damage to non-photosynthetic plant tissue has also been found to reduce flowering plant productivity due to its detrimental effect on plant attractiveness to pollinators. This is what we know as the top-down effect, which in this case focuses on the herbivore population and plant communities. The predator-prey interaction encourages adaptation in the plant species which the predator prefers. The theory of "top-down" ecological regulation holds that disproportionately manipulating the biomass of dominant species increases diversity. The herbivore effect on plants is universal but still differs significantly from site to site, and can be positive or negative. Overall, herbivory and its overarching effect on plant diversity can fluctuate due to many variables, such as herbivore population, plant phenology and palatability to herbivores. 
Productivity effect In a highly productive system, the environment provides an organism with adequate resources to grow. The effects of herbivores competing for resources on the plants are more complicated. Moderate levels of herbivory can increase the productivity of biomass, including that of plants. The existence of herbivores can increase plant diversity by reducing the abundance of dominant species; redundant resources can then be used by subordinate species. Therefore, in a highly productive system, direct consumption of dominant plants can indirectly benefit herbivory-resistant and unpalatable species. A less productive system, however, can support only limited herbivores because of its lack of resources. There, herbivory boosts the abundance of the most tolerant species and decreases the presence of less-tolerant species, which accelerates plant extinction. A moderately productive system sometimes barely has long-term effects on plant diversity, because the environment provides a stable coexistence of different organisms: even when herbivores create some disturbances in the community, the system is still able to recover to its original state. Light is one of the most important resources in the environment for plant species. Competition for light availability and predator avoidance are equally important. With the addition of resources, more competition arises among plant species, but herbivores can buffer the diversity reduction. Large herbivores especially can enhance biodiversity by selectively excluding tall, dominant plant species and increasing light availability. Plants can sense being touched, and they can use several strategies to defend against damage caused by herbivores, including the production of secondary metabolites known as allelochemicals, altering their attractiveness, and employing various defensive strategies such as escaping or avoiding herbivores, diverting herbivores toward non-essential parts, and encouraging the presence of natural enemies of herbivores. Body size of herbivores effect The body size of herbivores is a key factor underlying the interaction between herbivores and plant diversity, and body size explains many of the phenomena connected to the herbivore-plant interaction. An increase in body size means an animal requires more nutrients and energy to sustain itself. Small herbivores are less likely to decrease plant diversity, because small non-digging animals may not cause too many disturbances to the environment. Intermediate-sized herbivores mostly increase plant diversity by consuming or influencing the dominant plant species; herbivorous birds, for example, can directly use dominant plant species. Some herbivores enhance plant diversity by indirect effects on plant competition, and some digging animals of this size create local environmental fluctuations in the community. The adaptation of plant species to avoid predators can also adjust the vegetation structure and increase diversity. Larger herbivores often increase plant diversity. They use competitively dominant plant species, disperse seeds and disturb the soil. Besides, their urine deposition also adjusts the local plant distribution and prevents light competition. With a larger body size, large herbivores tend to consume more, and higher-quality, plants to gain back the required amount of nutrition and energy. Larger herbivores also leave behind larger amounts of fecal matter, which tends to increase nutrients needed to grow plants, such as nitrogen and phosphorus, in herbivore-dominated areas such as grasslands. 
Plant diversity can be highly variable in the presence of herbivores; however, studies have shown that grazing by herbivore assemblages, such as a mixture of cattle and sheep, can increase plant diversity. The mechanisms of herbivores' effects on plant diversity are therefore complicated. Generally, the existence of herbivores increases plant diversity. Moderate herbivory enhances plant productivity, as it reduces self-shading and accelerates nutrient cycling, but this varies according to environmental factors, and multiple factors combine to affect how herbivores influence plant diversity. References Herbivory Plant ecology
Herbivore effects on plant diversity
Biology
1,439
48,827,727
https://en.wikipedia.org/wiki/Learnable%20function%20class
In statistical learning theory, a learnable function class is a set of functions for which an algorithm can be devised to asymptotically minimize the expected risk, uniformly over all probability distributions. The concept of learnable classes is closely related to regularization in machine learning, and provides large sample justifications for certain learning algorithms. Definition Background Let Z = X × Y be the sample space, where Y are the labels and X are the covariates (predictors). Let F be a collection of mappings (functions) under consideration to link X to Y, and let L be a pre-given loss function (usually non-negative). Given a probability distribution ρ on Z, define the expected risk to be I(f) = E_ρ[L(f(x), y)]. The general goal in statistical learning is to find the function in F that minimizes the expected risk. That is, to find solutions to the following problem: minimize I(f) over f ∈ F. But in practice the distribution ρ is unknown, and any learning task can only be based on finite samples. Thus we seek instead to find an algorithm that asymptotically minimizes the empirical risk, i.e., to find a sequence of functions {f_n} that satisfies I(f_n) − inf_{f ∈ F} I(f) → 0 in probability. (*) One usual algorithm to find such a sequence is through empirical risk minimization. Learnable function class We can make the condition given in the above equation stronger by requiring that the convergence is uniform for all probability distributions. That is, for every ε > 0, sup_ρ P_ρ( I(f_n) − inf_{f ∈ F} I(f) > ε ) → 0. (**) The intuition behind the stricter requirement is as follows: the rate at which the sequence {f_n} converges to the minimizer of the expected risk can be very different for different ρ. Because in the real world the true distribution ρ is always unknown, we would want to select a sequence that performs well under all cases. However, by the no free lunch theorem, a sequence that satisfies (**) does not exist if F is too complex. This means we need to be careful and not allow too "many" functions in F if we want (**) to be a meaningful requirement. Specifically, function classes that ensure the existence of a sequence that satisfies (**) are known as learnable classes. It is worth noting that, at least for supervised classification and regression problems, if a function class is learnable, then empirical risk minimization automatically satisfies (**). Thus in these settings we not only know that the problem posed by (**) is solvable, we also immediately have an algorithm that gives the solution. Interpretations If the true relationship between y and x is given by a function f*, then by selecting the appropriate loss function, f* can always be expressed as the minimizer of the expected loss across all possible functions. That is, f* = argmin_{f ∈ G} I(f), where we let G be the collection of all possible functions mapping X onto Y. f* can be interpreted as the actual data generating mechanism. However, the no free lunch theorem tells us that in practice, with finite samples, we cannot hope to search for the expected risk minimizer over G. Thus we often consider a subset F of G to carry out searches on. By doing so, we risk that f* might not be an element of F. This tradeoff can be mathematically expressed as I(f_n) − I(f*) = [ I(f_n) − inf_{f ∈ F} I(f) ] + [ inf_{f ∈ F} I(f) − I(f*) ]. In the above decomposition, the second part (the approximation error) does not depend on the data and is non-stochastic; it describes how far away our assumptions (F) are from the truth (G). It will be strictly greater than 0 if we make assumptions that are too strong (F too small). On the other hand, failing to put enough restrictions on F will cause it to be not learnable, and the first part (the estimation error) will not stochastically converge to 0. This is the well-known overfitting problem in statistics and machine learning literature. 
Example: Tikhonov regularization A good example where learnable classes are used is the so-called Tikhonov regularization in a reproducing kernel Hilbert space (RKHS). Specifically, let H be an RKHS, and let ||·||_H be the norm on H given by its inner product. It has been shown that F = {f ∈ H : ||f||_H ≤ r} is a learnable class for any finite, positive r. The empirical minimization algorithm for the dual form of this problem is f_n = argmin_{f ∈ H} { (1/n) Σ_{i=1}^n L(f(x_i), y_i) + γ ||f||_H² }, where γ > 0 is a regularization parameter. This was first introduced by Tikhonov to solve ill-posed problems. Many statistical learning algorithms can be expressed in such a form (for example, the well-known ridge regression). The tradeoff between the approximation and estimation errors in the decomposition above is geometrically more intuitive with Tikhonov regularization in an RKHS. We can consider a sequence of classes {f ∈ H : ||f||_H ≤ r}, which are essentially balls in H with centers at 0. As r gets larger, the ball gets closer to the entire space, and the approximation error is likely to become smaller; however, we will also suffer slower convergence rates in the estimation error. The way to choose an optimal regularization parameter γ in finite sample settings is usually through cross-validation. Relationship to empirical process theory The estimation part of the decomposition is closely linked to empirical process theory in statistics, where empirical risks are known as empirical processes. In this field, the function classes that satisfy the stochastic convergence are known as uniform Glivenko–Cantelli classes. It has been shown that, under certain regularity conditions, learnable classes and uniformly Glivenko–Cantelli classes are equivalent. The interplay between the approximation and estimation errors in the statistics literature is often known as the bias-variance tradeoff. However, note that an example of stochastic convex optimization, in the General Setting of Learning, has been given where learnability is not equivalent to uniform convergence. References Machine learning
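As a concrete illustration, here is a minimal sketch of ridge regression, the linear case of the Tikhonov objective above: minimize (1/n)·Σ(⟨w, x_i⟩ − y_i)² + γ·||w||². The closed form below follows from setting the gradient to zero; the synthetic data and variable names are illustrative assumptions.

```python
import numpy as np

# Ridge regression as Tikhonov regularization: larger gamma corresponds to
# searching a smaller ball in the hypothesis space (more bias, less variance).

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=n)

def ridge(X: np.ndarray, y: np.ndarray, gamma: float) -> np.ndarray:
    """Closed-form minimizer of (1/n)*||Xw - y||^2 + gamma*||w||^2."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + gamma * np.eye(d), X.T @ y / n)

# In practice gamma would be chosen by cross-validation, as noted above.
for gamma in (1e-3, 1e-1, 10.0):
    print(gamma, np.round(ridge(X, y, gamma), 2))
```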
Learnable function class
Engineering
1,055
3,990,817
https://en.wikipedia.org/wiki/Differential%20technological%20development
Differential technological development is a strategy of technology governance aiming to decrease risks from emerging technologies by influencing the sequence in which they are developed. Using this strategy, societies would strive to delay the development of harmful technologies and their applications while accelerating the development of beneficial technologies, especially those that offer protection against harmful technologies. History of the idea Differential technological development was initially proposed by philosopher Nick Bostrom in 2002 and he applied the idea to the governance of artificial intelligence in his 2014 book Superintelligence: Paths, Dangers, Strategies. The strategy was also endorsed by philosopher Toby Ord in his 2020 book The Precipice: Existential Risk and the Future of Humanity, who writes that "While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones." Informal discussion Paul Christiano believes that while accelerating technological progress appears to be one of the best ways to improve human welfare in the next few decades, a faster rate of growth cannot be equally important for the far future because growth must eventually saturate due to physical limits. Hence, from the perspective of the far future, differential technological development appears more crucial. Inspired by Bostrom's proposal, Luke Muehlhauser and Anna Salamon suggested a more general project of "differential intellectual progress", in which society advances its wisdom, philosophical sophistication, and understanding of risks faster than its technological power. Brian Tomasik has expanded on this notion. See also Existential risk References Technology forecasting Transhumanism Technological change Existential risk
Differential technological development
Technology,Engineering,Biology
328
23,820,493
https://en.wikipedia.org/wiki/Gymnopilus%20communis
Gymnopilus communis is a species of agaric fungus in the family Hymenogastraceae. Found in Veracruz, Mexico, it was described as new to science in 1994. Taxonomy The type collection of Gymnopilus communis was discovered fruiting on wood in a pine-oak forest in Veracruz, Mexico in July 1992. Mycologist Laura Guzmán Dávalos described it and five other novel Mexican Gymnopilus species in the journal Mycotaxon in 1994. The specific epithet communis refers to its common habit. Description The bell-shaped to convex cap is about in diameter, and has a broad umbo. It is brownish orange with a smooth to finely fibrillose surface texture. The narrow gills are closely spaced, and orange-yellow with a yellowish edge. The stipe, roughly the same color as the cap or lighter, measures long by wide. The spores are ellipsoid, with surfaces covered in small warts, and measure 6–8.4 by 4–4.8 μm. The basidia (spore-bearing cells) are club-shaped and measure 24.8 by 7.2 μm. They are four-spored, with sterigmata up to 3.2 μm long. Similar species include Gymnopilus longipes, G. liquiritiae, and G. subsapineus. Habitat and distribution Gymnopilus communis is known only from the type locality in Veracruz. See also List of Gymnopilus species References Fungi described in 1994 Fungi of Mexico Fungi without expected TNC conservation status Fungus species
Gymnopilus communis
Biology
342
73,134,451
https://en.wikipedia.org/wiki/Offset%20filtration
The offset filtration (also called the "union-of-balls" or "union-of-disks" filtration) is a growing sequence of metric balls used to detect the size and scale of topological features of a data set. The offset filtration commonly arises in persistent homology and the field of topological data analysis. Utilizing a union of balls to approximate the shape of geometric objects was first suggested by Frosini in 1992 in the context of submanifolds of Euclidean space. The construction was independently explored by Robins in 1998, and expanded to considering the collection of offsets indexed over a series of increasing scale parameters (i.e., a growing sequence of balls), in order to observe the stability of topological features with respect to attractors. Homological persistence as introduced in these papers by Frosini and Robins was subsequently formalized by Edelsbrunner et al. in their seminal 2002 paper Topological Persistence and Simplification. Since then, the offset filtration has become a primary example in the study of computational topology and data analysis.

Definition

Let $X$ be a finite set in a metric space $(M, d)$, and for any $\varepsilon \ge 0$ let $B_\varepsilon(x)$ be the closed ball of radius $\varepsilon$ centered at $x$. Then the union $X^{(\varepsilon)} = \bigcup_{x \in X} B_\varepsilon(x)$ is known as the offset of $X$ with respect to the parameter $\varepsilon$ (or simply the $\varepsilon$-offset of $X$). By considering the collection of offsets over all $\varepsilon \ge 0$ we get a family of spaces $\{X^{(\varepsilon)}\}_{\varepsilon \ge 0}$ where $X^{(\varepsilon)} \subseteq X^{(\varepsilon')}$ whenever $\varepsilon \le \varepsilon'$. So $\{X^{(\varepsilon)}\}$ is a family of nested topological spaces indexed over the non-negative real numbers, which defines a filtration known as the offset filtration on $X$. Note that it is also possible to view the offset filtration as a functor from the poset category of non-negative real numbers to the category of topological spaces and continuous maps. There are some advantages to the categorical viewpoint, as explored by Bubenik and others.

Properties

A standard application of the nerve theorem shows that the union of balls has the same homotopy type as its nerve, since closed balls are convex and the intersection of convex sets is convex. The nerve of the union of balls is also known as the Čech complex, which is a subcomplex of the Vietoris–Rips complex. Therefore the offset filtration is weakly equivalent to the Čech filtration (defined as the nerve of each offset across all scale parameters), so their homology groups are isomorphic. Although the Vietoris–Rips filtration is not identical to the Čech filtration in general, it is an approximation in a sense. In particular, for a set $X \subset \mathbb{R}^d$ we have a chain of inclusions $\operatorname{Rips}_{\delta}(X) \subset \operatorname{Čech}_{\delta'}(X) \subset \operatorname{Rips}_{\delta'}(X)$ whenever $\delta'/\delta \ge \sqrt{2d/(d+1)}$. In general metric spaces, we have $\operatorname{Čech}_{\delta}(X) \subset \operatorname{Rips}_{2\delta}(X) \subset \operatorname{Čech}_{2\delta}(X)$ for all $\delta > 0$, implying that the Rips and Čech filtrations are 2-interleaved with respect to the interleaving distance as introduced by Chazal et al. in 2009. It is a well-known result of Niyogi, Smale, and Weinberger that given a sufficiently dense random point cloud sample of a smooth submanifold in Euclidean space, the union of balls of a certain radius recovers the homology of the object via a deformation retraction of the Čech complex. The offset filtration is also known to be stable with respect to perturbations of the underlying data set. This follows from the fact that the offset filtration can be viewed as a sublevel-set filtration with respect to the distance function of the metric space.
The stability of sublevel-set filtrations can be stated as follows: given any two real-valued functions $\gamma, \kappa$ on a topological space $T$ such that, for all $i$, the $i$-dimensional homology modules of the sublevel-set filtrations with respect to $\gamma$ and $\kappa$ are pointwise finite dimensional, we have

$d_B(B_i(\gamma), B_i(\kappa)) \le \|\gamma - \kappa\|_\infty$

where $d_B$ and $\|\cdot\|_\infty$ denote the bottleneck and sup-norm distances, respectively, and $B_i(\cdot)$ denotes the $i$-dimensional persistent homology barcode. While first stated in 2005, this sublevel stability result also follows directly from an algebraic stability property sometimes known as the "Isometry Theorem," which was proved in one direction in 2009, and in the other direction in 2011.

A multiparameter extension of the offset filtration, defined by considering points covered by multiple balls, is given by the multicover bifiltration, which has also been an object of interest in persistent homology and computational geometry.

References

Applied mathematics Computational topology Geometric topology Data analysis
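As a concrete illustration of the definition above, the following sketch computes, for a small planar point set, the parameter at which each edge of the nerve (the 1-skeleton of the Čech complex) appears in the offset filtration: two closed balls of radius ε intersect exactly when ε reaches half the distance between their centers. The point cloud and all names are illustrative assumptions; a full persistent homology computation would use a dedicated library such as GUDHI or Ripser.

```python
import numpy as np
from itertools import combinations

# Illustrative point cloud: noisy samples of a circle.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 20)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((20, 2))

# Edge (i, j) of the nerve appears in the offset filtration when the two
# radius-eps balls first intersect, i.e. at eps = d(x_i, x_j) / 2.
births = sorted(
    (np.linalg.norm(X[i] - X[j]) / 2, (i, j))
    for i, j in combinations(range(len(X)), 2)
)

for eps, edge in births[:5]:
    print(f"edge {edge} enters the filtration at eps = {eps:.3f}")
```

Tracking how connected components and loops of the union of balls appear and merge as these edge (and higher-simplex) birth times pass is precisely what the persistence barcode of the offset filtration records.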
Offset filtration
Mathematics
894
33,898,892
https://en.wikipedia.org/wiki/Cathedral%20arch
A cathedral arch is an arch used in bridge design. It consists of an arched structural system in which vertical load bearing occurs only at the crown, or peak, of the arch. As applied to bridge design, cathedral arch bridges feature no intermediary spandrel column elements between the foundation abutments and the crown of the arch system, where the roadway superstructure is constrained to the substructure. The largest cathedral arch bridge in the world is the Galena Creek Bridge near Reno, Nevada. Outside bridge design, the term refers to arches similar to the stilted arches used in cathedrals. References Architectural elements
Cathedral arch
Technology,Engineering
125
708,839
https://en.wikipedia.org/wiki/Loschmidt%27s%20paradox
In physics, Loschmidt's paradox (named for J.J. Loschmidt), also known as the reversibility paradox, irreversibility paradox, or Umkehreinwand (German for "reversal objection"), is the objection that it should not be possible to deduce an irreversible process from time-symmetric dynamics. This puts the time reversal symmetry of (almost) all known low-level fundamental physical processes at odds with any attempt to infer from them the second law of thermodynamics, which describes the behaviour of macroscopic systems. Both of these are well-accepted principles in physics, with sound observational and theoretical support, yet they seem to be in conflict, hence the paradox.

Origin

Josef Loschmidt's criticism was provoked by the H-theorem of Boltzmann, which employed kinetic theory to explain the increase of entropy in an ideal gas from a non-equilibrium state, when the molecules of the gas are allowed to collide. In 1876, Loschmidt pointed out that if there is a motion of a system from time t0 to time t1 to time t2 that leads to a steady decrease of H (increase of entropy) with time, then there is another allowed state of motion of the system at t1, found by reversing all the velocities, in which H must increase. This revealed that one of Boltzmann's key assumptions, molecular chaos, or the Stosszahlansatz, that all particle velocities were completely uncorrelated, did not follow from Newtonian dynamics. One can assert that possible correlations are uninteresting, and therefore decide to ignore them; but if one does so, one has changed the conceptual system, injecting an element of time-asymmetry by that very action. Reversible laws of motion cannot explain why we experience our world to be in such a comparatively low state of entropy at the moment (compared to the equilibrium entropy of universal heat death), and to have been at even lower entropy in the past. Later authors have coined the term "Loschmidt's demon" (in analogy to Maxwell's demon, see below) for an entity that is able to reverse time evolution in a microscopic system, in their case of nuclear spins, which is indeed, if only for a short time, experimentally possible.

Before Loschmidt

In 1874, two years before the Loschmidt paper, William Thomson defended the second law against the time reversal objection in his paper "The kinetic theory of the dissipation of energy".

Arrow of time

Any process that happens regularly in the forward direction of time but rarely or never in the opposite direction, such as entropy increasing in an isolated system, defines what physicists call an arrow of time in nature. This term only refers to an observation of an asymmetry in time; it is not meant to suggest an explanation for such asymmetries. Loschmidt's paradox is equivalent to the question of how it is possible that there could be a thermodynamic arrow of time given time-symmetric fundamental laws, since time-symmetry implies that for any process compatible with these fundamental laws, a reversed version that looked exactly like a film of the first process played backwards would be equally compatible with the same fundamental laws, and would even be equally probable if one were to pick the system's initial state randomly from the phase space of all possible states for that system.
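Loschmidt's reversal argument can be made concrete with the Kac ring, a standard classroom toy model that is not part of the original article (the parameters and variable names below are arbitrary illustrative choices). Balls on a ring move one site per step and flip color when crossing a fixed set of marked edges; the dynamics is deterministic and exactly time-reversible. A Boltzmann-style molecular-chaos argument predicts that the color imbalance decays toward zero, yet reversing the dynamics returns the system exactly to its ordered initial state.

```python
import numpy as np

# Kac ring: N sites on a ring, one ball per site; a ball flips color when it
# crosses one of a fixed random set of marked edges. Exactly time-reversible.
rng = np.random.default_rng(0)
N, mu, T = 100_000, 0.1, 50                      # sites, marker density, steps
colors = np.ones(N, dtype=np.int8)               # 1 = black, 0 = white
markers = (rng.random(N) < mu).astype(np.int8)   # edge i joins sites i and i+1

def forward(c):
    # every ball moves from site i to site i+1, flipping color at a marked edge
    return np.roll(c, 1) ^ np.roll(markers, 1)

def backward(c):
    # exact time reversal: every ball moves from site i+1 back to site i
    return np.roll(c, -1) ^ markers

delta = lambda c: 2 * c.mean() - 1               # color imbalance

for _ in range(T):
    colors = forward(colors)
print(f"after {T} steps:  delta = {delta(colors):+.4f} "
      f"(molecular-chaos prediction {(1 - 2 * mu) ** T:+.4f})")

for _ in range(T):                               # Loschmidt's reversal
    colors = backward(colors)
print(f"after reversal:  delta = {delta(colors):+.4f}  (initial order restored)")
```

Both the Boltzmann-like decay on the way out and the exact return under reversal follow from the same reversible map, which mirrors Loschmidt's point that molecular chaos cannot be a consequence of the dynamics alone.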
Although most of the arrows of time described by physicists are thought to be special cases of the thermodynamic arrow, there are a few that are believed to be unconnected, like the cosmological arrow of time based on the fact that the universe is expanding rather than contracting, and the fact that a few processes in particle physics actually violate time-symmetry, while they respect a related symmetry known as CPT symmetry. In the case of the cosmological arrow, most physicists believe that entropy would continue to increase even if the universe began to contract (although the physicist Thomas Gold once proposed a model in which the thermodynamic arrow would reverse in this phase). In the case of the violations of time-symmetry in particle physics, the situations in which they occur are rare and are only known to involve a few types of meson particles. Furthermore, due to CPT symmetry, reversal of the direction of time is equivalent to renaming particles as antiparticles and vice versa. Therefore, this cannot explain Loschmidt's paradox.

Dynamical systems

Current research in dynamical systems offers one possible mechanism for obtaining irreversibility from reversible systems. The central argument is based on the claim that the correct way to study the dynamics of macroscopic systems is to study the transfer operator corresponding to the microscopic equations of motion. It is then argued that the transfer operator is not unitary (i.e. is not reversible) but has eigenvalues whose magnitude is strictly less than one; these eigenvalues correspond to decaying physical states. This approach is fraught with various difficulties; it works well for only a handful of exactly solvable models. Abstract mathematical tools used in the study of dissipative systems include definitions of mixing, wandering sets, and ergodic theory in general.

Fluctuation theorem

One approach to handling Loschmidt's paradox is the fluctuation theorem, derived heuristically by Denis Evans and Debra Searles, which gives a numerical estimate of the probability that a system away from equilibrium will have a certain value for the dissipation function (often an entropy-like property) over a certain amount of time. The result is obtained with the exact time-reversible dynamical equations of motion and the universal causation proposition. The fluctuation theorem is obtained using the fact that dynamics is time-reversible. Quantitative predictions of this theorem have been confirmed in laboratory experiments at the Australian National University conducted by Edith M. Sevick et al. using optical tweezers apparatus. This theorem is applicable for transient systems, which may initially be in equilibrium and then driven away (as was the case for the first experiment by Sevick et al.) or in some other arbitrary initial state, including relaxation towards equilibrium. There is also an asymptotic result for systems which are in a nonequilibrium steady state at all times. There is a crucial point in the fluctuation theorem that differs from how Loschmidt framed the paradox. Loschmidt considered the probability of observing a single trajectory, which is analogous to enquiring about the probability of observing a single point in phase space. In both of these cases the probability is always zero. To be able to effectively address this, you must consider the probability density for a set of points in a small region of phase space, or a set of trajectories.
The fluctuation theorem considers the probability density for all of the trajectories that are initially in an infinitesimally small region of phase space. This leads directly to the probability of finding a trajectory, in either the forward or the reverse trajectory sets, depending upon the initial probability distribution as well as the dissipation which occurs as the system evolves. It is this crucial difference in approach that allows the fluctuation theorem to correctly solve the paradox.

Information theory

A more recent proposal concentrates on the step of the paradox in which velocities are reversed. At that moment the gas becomes an open system, and in order to reverse the velocities, position and velocity measurements have to be made. Without these, no reversal is possible. These measurements are themselves either irreversible or reversible. In the first case, they require an increase of entropy in the measuring device that will at least offset the decrease during the reversed evolution of the gas. In the second case, Landauer's principle can be invoked to reach the same conclusion. Hence, the gas-plus-measuring-device system obeys the Second Law of Thermodynamics. It is not a coincidence that this argument closely mirrors another one given by Bennett to explain away Maxwell's demon. The difference is that the role of measurement is obvious in Maxwell's demon, but not in Loschmidt's paradox, which may explain the 40-year gap between the two explanations. In the case of the single-trajectory paradox, this argument obviates the need for any other explanation, although some of them make valid points. The broader paradox, "an irreversible process cannot be deduced from reversible dynamics," is not covered by the argument given in this section.

Big Bang

Another way of dealing with Loschmidt's paradox is to see the second law as an expression of a set of boundary conditions, in which our universe's time coordinate has a low-entropy starting point: the Big Bang. From this point of view, the arrow of time is determined entirely by the direction that leads away from the Big Bang, and a hypothetical universe with a maximum-entropy Big Bang would have no arrow of time. The theory of cosmic inflation attempts to explain why the early universe had such low entropy.

See also

Maximum entropy thermodynamics for one particular perspective on entropy, reversibility and the Second Law
Poincaré recurrence theorem
Reversibility
Statistical mechanics

References

J. Loschmidt, Sitzungsber. Kais. Akad. Wiss. Wien, Math. Naturwiss. Classe 73, 128–142 (1876)

External links

Reversible laws of motion and the arrow of time by Mark Tuckerman
Toy systems with time-reversible discrete dynamics showing entropy increase: Fibonacci Iterated Map; Ising-Conway Game

Philosophy of thermal and statistical physics Non-equilibrium thermodynamics Physical paradoxes
Loschmidt's paradox
Physics,Chemistry,Mathematics
2,033
49,409,001
https://en.wikipedia.org/wiki/Aspergillus%20pseudocaelatus
Aspergillus pseudocaelatus is a species of fungus in the genus Aspergillus. It was first isolated from an Arachis burkartii leaf in Argentina. It is most closely related to the non-aflatoxin-producing Aspergillus caelatus, and produces aflatoxins B and G, as well as cyclopiazonic acid and kojic acid. Growth and morphology A. pseudocaelatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid (MEAOX) plates. References Further reading pseudocaelatus Fungus species Fungi described in 2011
Aspergillus pseudocaelatus
Biology
154
50,399,438
https://en.wikipedia.org/wiki/Fulmer%20Research%20Institute
Fulmer Research Institute was founded in 1945 as a UK contract research and development organization specializing in materials technology and related areas of physics and chemistry. It was modelled on American contract research companies such as Battelle Memorial Institute and The Mellon Institute of Industrial Research. In 1965 it was acquired by The Institute of Physics and the Physical Society, a rare case of a contract research company being owned by a learned society. Through the 1970s and 80s Fulmer evolved. Its services in testing, consultancy and certification were greatly strengthened while academic research declined. It continued to make important developments and innovations for industry and government until in 1990 it was split up and sold to other R&D and testing organizations.

A few of the landmark achievements during its forty-five years were:
The extraction of aluminium using sub-halide sublimation
Aluminium-tin and aluminium-lead alloys for plain bearings
Chemical Vapour Deposition of metals and ceramics to produce coatings, tubes, crucibles etc.
Fundamental research into aluminium-copper alloys, leading to high-strength formulations for the skin of high-performance aircraft
YQAF, a subsidiary company authorised to assess and accredit organizations to quality standards

Origins (1945 and 1946)

Fulmer Research Institute was founded in 1945 by Col W C (Dev) Devereux and incorporated in 1946. He had been a pioneer in the use of light metal alloys in aero engines and, in the Second World War, he had an important role in the UK Ministry of Aircraft Production, organizing the assembly in Britain of American aircraft and reorganizing the repair of aircraft and aero-engines. After the war, in 1945, he set up a company called Almin Ltd (Associated Light Metal Industries) which brought together a group of companies mostly concerned with the production and processing of aluminium and magnesium alloys. He wanted Almin to have research facilities but he recognised that Almin's R&D needs alone were not sufficient to justify the investment in staff and capital equipment required for properly equipped laboratories. His answer was to establish a contract research organization along the lines of Battelle Memorial Institute and The Mellon Institute of Industrial Research in the USA. Thus he founded Fulmer Research Institute as one of the first contract research companies in Britain. Initially it was in temporary accommodation but he soon found a permanent base by purchasing a large Edwardian country house with ten acres of grounds, in the Buckinghamshire village of Stoke Poges. The name 'Fulmer' was the name of the local telephone exchange and that of a nearby village.

Building the team

Devereux recruited E A G Liddiard from The British Non-Ferrous Metals Research Association (BNF) to be Fulmer's director of research. Among other senior staff recruited were:
Philipp Gross, a refugee from Vienna who was an expert in chemical thermodynamics and had been working at International Alloys (another Almin company) on the direct reduction of magnesite to magnesium, appointed principal scientist
Ted Calnan, appointed principal physicist
Arthur Sully, recruited from Special Metals Wiggin Limited, an expert on the creep of jet engine turbine blades.
He established Fulmer's reputation in physical metallurgy.
Harold Hardy, another metallurgist, worked on the development of new aluminium alloys
Gordon Metcalfe, recruited from the Royal Aircraft Establishment to head the corrosion section
Tom Heal, a physicist who had served in the Navy working on counter-measures against acoustic and magnetic mines, became head of physics
Eric Brandes, a process metallurgist from the Ford Motor Company
Leon Levi, a physical chemist

By the end of 1946 Fulmer had about 40 staff.

Policy

From the start Fulmer was a commercial enterprise aiming to make a surplus for investment in its own development. It received no grant or membership fees. Its income was solely from projects, each with defined objectives and time and cost limits agreed with individual sponsors from Government or Industry. Normally, the project contract would provide that all results would belong in confidence to the sponsor, who would also own any patents arising from the investigation.

Early years (1946 to 1960)

Context

Fulmer benefited from the immediate post-war climate, which was favourable to research and development. The UK Government and its agencies continued to spend heavily on R&D, despite the fact that Britain was essentially bankrupt and hugely indebted to the United States and Canada. The technological advances which had been made on both sides of the conflict had been impressive: radar, the jet engine, the V-2 rocket and the atomic bomb are just a few examples. The Cold War soon added urgency to further military development and there was enthusiasm for developing peaceful uses of atomic energy.

Market

In the years up to 1960 the following approximate division of Fulmer's income applied: about 25% of project work was for UK Government defence agencies, 25% for the Atomic Energy Authority and another 10% for other Government agencies; about 10% was for US Government agencies (the US Air Force and the Office of Aerospace Research); and 30% was for British industry.

Growth

Fulmer grew steadily so that by 1960 there were about 100 staff. Individual research investigators were often recruited to work on specific projects as contracts were obtained. Each recruit was also expected to develop proposals for work in his or her areas of expertise, whether or not these fitted into Fulmer's existing pattern of work. This system resulted in a progressive evolution and wide diversification of Fulmer's skills base. This was a strength in that, faced with a materials problem, Fulmer could usually help the client with new perspectives.

Some notable projects and activities 1946 to 1960

Phase diagram determination for the aluminium-copper age-hardening alloy system. This was to lead to further discoveries at Fulmer on the influence of trace elements on nucleation and grain growth. Fulmer used this to formulate and patent alloys with high strength and toughness and good creep resistance for the skins of high-performance aircraft.

The sub-halide catalytic distillation method for primary aluminium production. Dr Gross began this work at International Alloys and, when Fulmer was founded, this became Fulmer's first contract. He speculated and then proved that aluminium has a subhalide, AlCl.
He devised a method by which pure aluminium can be catalytically distilled from any aluminium-containing alloy or mixture, or from scrap aluminium, using the following reversible reaction:

2 Al(solid) + AlCl3(gas) ⇌ 3 AlCl(gas)

The forward reaction is favoured at high temperature and low partial pressure of AlCl3. On cooling, the reaction reverses; aluminium condenses and the trichloride can be recirculated. Analogous methods were developed for the extraction of beryllium and titanium.

Determination of thermodynamic data by high-accuracy calorimetry. Heats of formation and free energies of formation were needed for the assessment of potential rocket fuels. Fulmer established equipment and skills for very accurate measurement of heats of formation and heats of reaction. Reaction conditions could be extreme: burning in fluorine might have to be contained, or temperatures up to 2000 °C might be needed. Calorimeter fluid temperatures were measured to 2×10⁻⁴ K. Over the many years that this work continued, Fulmer established thermodynamic data for a wide range of metal halides, intermetallics, mixed oxides and other compounds.

Aluminium-tin alloys for plain bearing shells. Fulmer's researchers applied to the aluminium-tin alloy system the fundamental work of C S Smith on the relationship of interfacial energy and microstructure. They established a process of cold work and recrystallization which converts the weak as-cast structure, in which aluminium grains are surrounded by tin, to one in which tin is dispersed in a strong aluminium matrix. The aluminium provides the load bearing required for a plain bearing while the tin gives the required bearing properties. For many years these aluminium-tin bearings were used in most diesel engines and they are still in current use.

X-ray diffraction crystallography of metals and alloys for phase diagram determination.
Interfacial tension and wetting behaviour of liquid sodium.
Vitreous enamelling of aluminium to give high-integrity electrically resistant coatings.
Measurements of stress corrosion and corrosion fatigue of light alloys.
Studies of deformation processes in difficult metals such as beryllium and chromium.

Impalco ownership (1960 to 1964)

In 1960 Almin was bought by Imperial Aluminium Company (Impalco), a company formed between the Aluminium Company of America (Alcoa) and Imperial Chemical Industries (ICI) which incorporated the whole of ICI's aluminium facilities. Impalco's primary interest in buying Almin was to acquire the facilities of International Alloys, a member of the Almin group. Thus Fulmer was acquired incidentally and it did not fit easily into the Impalco group. Since Impalco had huge research facilities in-house, it had no need of Fulmer's services. Impalco's rival companies were also reluctant to place large contracts with Fulmer under this ownership.

Context

As a background to this change of ownership, the general climate for science and technology was becoming less favourable. Faith in scientific and technological solutions was diminished by some spectacular failures: the de Havilland Comet suffered catastrophic in-flight failures; thalidomide caused tragic birth defects. Government procurement projects were frequently out of control: the BAC TSR-2 was cancelled after enormous overspend in development and only 24 test flights; the de Havilland Blue Streak missile was also abandoned in 1960 after great expense.
With Government budgets under severe pressure, contracts from government agencies were becoming harder to obtain. Thus most of Fulmer's markets were becoming difficult and Fulmer's long-term viability was in doubt.

Some notable projects and activities 1960 to 1964

Measurement of the emissivities of gases at temperatures up to 1000 °C. Emissivity values were required for gaseous aluminium chlorides as part of the development of the sub-halide distillation process mentioned above. A flowing column of the gas to be measured, heated in a refractory tube, was maintained at a fixed length by gas barriers at each end, formed by balanced opposing streams of argon. The radiation emitted by the gas was measured by a thermopile. A diaphragm was set up to shield this sensor from radiation emitted by the furnace and other hot parts of the equipment. The whole apparatus was mounted on a water-cooled optical bench.

X-ray diffraction determination of the structures of liquid metals. There was a need for structural studies of liquid sodium and sodium-potassium alloys because these were used as coolants in fast-breeder reactors. Fulmer developed a high-temperature X-ray diffractometer for investigating the structures of liquid metals and alloys. In addition to its studies of liquid alkali metals, Fulmer discovered that certain eutectics, such as those in the gold-silicon and gold-germanium systems, have a structure in the liquid phase that has to be disrupted on crystallization. This gives rise to considerable supercooling, which results in multiple nucleation and hence a very fine grain size in the resulting polycrystalline alloy.

Production of high-purity austenitic stainless steel. High-purity austenitic stainless steel was of interest as a potential cladding material for nuclear fuel elements. Fulmer produced high-purity chromium by electro-deposition from a fluoride bath. Zone refining using induction heating was used to produce high-purity iron and nickel and to remove oxygen from chromium. Impurity levels of 1–40 parts per million were achieved.

Chromium with improved ductility. Uses of chromium as a high-temperature material are limited by its brittleness. Starting with electro-deposited flakes of high-purity chromium, investigators at Fulmer used argon-arc melting to form electrodes for ingot production in a consumable electrode furnace. Ingots were then heated in an inert or hydrogen atmosphere and extruded to give a fine-grained structure. Critical warm working, below the recrystallization temperature, then gave improved room-temperature ductility.

Statistical studies of the strength of ceramics. The strength of brittle materials such as ceramics is inherently variable. Fulmer undertook numerous strength tests on sets of nominally identical specimens of engineering ceramics such as silicon nitride and silicon carbide. They devised graphical techniques for finding the probability distribution of test results and contributed to criteria for engineering design with these materials.

The Institute of Physics period (1965 to 1990)

In 1964 Impalco decided to offer Fulmer for sale. At that time Dr (later Sir) James Taylor, who was Chairman of Imperial Metal Industries (IMI), was also the Honorary Treasurer of the Institute of Physics and the Physical Society (IOP). He proposed that IOP should acquire Fulmer and thus become the first Learned Society to own a commercial research company.
The Council of the IOP, in recommending the purchase of Fulmer to its membership, expressed the intention that, after providing for equipment needs, income from the investment in Fulmer was to be used to support the scientific and educational work of the IOP. The purchase was made possible by a grant from ICI, to be repaid over ten years from Fulmer profits. Thus, in 1965, IOP became the owner of Fulmer.

1965 to 1970

With its future thus assured, in 1966 additional laboratories in a new building were opened on the Stoke Poges site. Also in that year Fulmer strengthened its expertise, particularly in electron metallography, by recruiting several key staff who transferred from Aeon Laboratories of Egham, Surrey. In 1969 Mr Liddiard retired as director of research and Dr W E Duckworth was recruited from the British Iron and Steel Research Association and appointed in his place. In 1970 Fulmer set up a new unit, Fulmer Technical Services (FTS), to provide a focus for its testing and consultancy services to industry. During this period there was a gradual increase in income and a modest profit while staff numbers remained at about 120.

1970 to 1985

Context

By the early 1970s the climate for R&D was again changing. Government R&D budgets continued to tighten. The earlier pattern of Fulmer sponsorship, with a large proportion of contracts from UK ministries and government agencies, no longer applied. In 1955 this proportion had been 70% but by 1970 it had fallen to 45%. By 1985 it was to become less than 5%. Meanwhile, contract R&D was becoming a familiar concept in the UK. Following Fulmer, many other contract R&D companies had been formed, important examples being Huntingdon Life Sciences (1957) and Cambridge Consultants (1960). This gave Fulmer opportunities for collaboration but also increased competition. Fulmer promoted contract R&D by publishing the Register of Consulting Scientists and Contract Research Organizations. In 1971 Lord Rothschild published his report on Government R&D, in which a major recommendation was that "applied R&D ... must be done on a customer-contractor basis. The customer says what he wants; the contractor does it (if he can); and the customer pays". Despite Rothschild's recommendations, government procurement was slow to change. By 1975, leading independent research companies felt that they were not getting a fair share of government R&D contracts and needed a stronger voice. Fulmer joined with six other companies in setting up the Association of Independent Contract Research Organizations (AICRO). The journal New Scientist published a special supplement on contract research in 1974.

There were two major developments that intensified competition in Fulmer's market. Firstly, organizations such as Harwell, which had been fully government funded, were seeking contracts from industry to make good their declining government income. Secondly, by 1969, following the Robbins Report (1963) on higher education, nine completely new universities had been founded and the ten existing Colleges of Advanced Technology had been converted into full universities. Robbins found that in the existing universities, teachers spent a third of their time on teaching and rather less than a third on research. He recommended that "The balance between teaching and research in the universities should in general be maintained."
The net effect was a huge expansion of R&D facilities in universities, funded by their block grants, and they were naturally keen to supplement their incomes with contracts using these facilities.

Policy

In response to these market changes Eric Duckworth initiated changes of policy. Fulmer sought to extend its services to include the full range from R&D and testing to small-scale manufacture, to extend its area of expertise to cover a wider range of materials, and to develop new markets. It sought to collaborate with or to acquire organisations with complementary skills and facilities. The aim was to be able to offer industrial companies a comprehensive service in all aspects of materials technology. Fulmer also changed its policy on intellectual property. Previously, patents were applied for as part of sponsored projects so that all rights belonged to the sponsor. Beginning in 1970, the policy also included the patenting of worthwhile ideas developed in-house before applying for sponsorship, so that Fulmer could retain rights and benefit from subsequent exploitation. Another new approach was to launch projects in which a number of clients jointly sponsored a development (multi-client projects). There was also a change of management style. Early in his career Eric Duckworth had spent ten years at the Glacier Metal Company at the time when the Glacier Project - a pioneering new approach to management-staff relations - was being developed there by Wilfred (later Lord) Brown, the managing director, and Elliott Jaques of the Tavistock Institute of Human Relations. When he joined Fulmer, Eric Duckworth introduced a style of management heavily influenced by his experience of the Glacier Project. Over time this evolved into an open style with features such as a company council with representatives from all staff, regular management briefing of staff, and transparent grading and pay scales against which individual staff were appraised annually. The grading system enabled parity of career progression between managers and people who focussed on developing their technical expertise.

Growth by acquisition 1973 to 1977

The first and most important of the complementary organizations to link with Fulmer was Yarsley, whose expertise was particularly strong in plastics and polymers and their applications. The Yarsley organization was founded by Dr Victor Yarsley, a pioneering expert in plastics and an entrepreneur. Before the Second World War he had been a consultant in this new field and, starting in 1941, he had built a series of laboratories, mostly by converting and extending domestic premises, just as in the case of Fulmer. By 1970 his group consisted of Yarsley Research Laboratories (YRL) at Chessington, Surrey and Yarsley Testing Laboratories (YTL) at Ashtead, Surrey. A collaboration agreement was signed in 1970 and in 1973 Fulmer purchased Yarsley. By early 1974, most of the Chessington activities had been moved to another new building on the Stoke Poges site and the others to Ashtead. Also in 1973 Fulmer purchased the engineering activities of Aeon Laboratories, Englefield Green, Surrey. Aeon's engineering work focussed on the manufacture of ancillary equipment for electron microscopes and for computers. In 1975 Fulmer strengthened Yarsley's plastics processing capability by acquiring IPEC (Independent Plastics Engineering Centre) of Newhaven, Sussex. The Newhaven activities were combined with Yarsley's own plastics processing operation to form a new company: Yarsley Polymer Engineering Centre (YPEC).
In 1977 a new site was acquired at Redhill, Surrey to accommodate YPEC and the Yarsley research and testing facilities. This involved progressively transferring all the staff and equipment from Newhaven and Ashtead and the polymer facilities from Stoke Poges. A new company, Yarsley Technical Centre Limited (YTEC), was set up to embrace all the activities carried out by YRL, YTL and YPEC. In 1982 Fulmer established Fulmer Research & Development (Singapore) Pte Ltd, a joint venture with the Singapore-based company Chemical Laboratories Pte Ltd. The joint venture offered metallurgical and polymer-based technical services. A second overseas company, Fulmer Research (SA) Pty Ltd, was set up in Johannesburg, South Africa in 1985. This was not successful and was closed after a few years.

The early 1980s: testing, accreditation and quality

From their earliest days both Fulmer and Yarsley Testing Laboratories had carried out a wide variety of tests for clients and had designed and constructed specialized test equipment. In 1982 both Fulmer Technical Services and Yarsley Technical Centre were awarded accreditation from the National Testing Laboratory Accreditation Scheme (NATLAS). By the late 1970s American and European governments and business leaders had become increasingly concerned about competition from Japan. Many decided to adopt some Japanese industrial practices, including quality management, which was thought to have played a large part in the Japanese economic miracle. Beginning in the early 1980s, the quality standard BS 5750 (1979) became widely adopted by British companies. In 1985, Yarsley Technical Centre, which already had a strong background in standards and accreditation, established Yarsley Quality Assured Firms (YQAF) as an independent certification body, supported by the UK Department of Trade and Industry. YQAF assessed conformity to BS 5750 and certified conforming companies. Its certification service was overseen by an independent Certification Board under an independent chairman, thus ensuring that there was no conflict of interest with YQAF's consultancy services. YQAF was successful and grew rapidly by establishing a network of regional offices throughout the UK. It was incorporated in 1987 and gained accreditation from the National Accreditation Council for Certification Bodies (NACCB).

Some notable projects and activities 1965 to 1989

Chemical Vapour Deposition (CVD). This was a major development area at Fulmer. A wide range of metals and inorganic compounds were deposited. Examples are: tungsten coating of graphite rocket nozzles for ablation resistance, boron nitride crucibles for melting gallium arsenide, alumina coatings on carbon fibres for reinforcement of aluminium, and zinc sulphide infrared radomes for heat-seeking missiles. Fulmer's profound understanding of subhalide disproportionation led its chemists to devise a process in which halide vapour, pulsed at low partial pressure, could be used to put uniform oxidation-resistant coatings of aluminium or chromium on gas turbine blades. This was especially difficult because the coated surface had to include the insides of the blades' long narrow cooling passages - 2 mm diameter and 180 mm long, for example. In 1975 Fulmer hosted the fifth International Conference on Chemical Vapor Deposition.

The Fulmer tension meter is a device for measuring the tension in ropes and cables. A fixed length of cable is displaced at right angles using a lever and cam.
The tension in the cable is determined by measuring the consequent displacement in the frame of the meter. In 1971 Fulmer set up a joint company with the sponsor of this development and subsequently acquired all the shares. The meter continues to be produced and marketed by a successor company.

Fulmer devised the RPD system for project planning under uncertainty and gave about a hundred training seminars to R&D investigators in the UK and abroad.

The Fulmer Materials Optimizer (FMO). This was an information system designed to enable a rapid comparison of materials competing for any given application. Many of Fulmer's technical staff contributed information to the FMO and many clients subscribed to support its preparation. It was published in 1974 as four loose-leaf large-format files. The FMO included many data sheets, nomograms and other charts. It illustrates the approach needed in 1974, before the days of hypertext and the World Wide Web.

Ion engine. In the early 1970s Fulmer participated in a collaborative programme on the development of ion thrusters for space propulsion. They constructed a Type T4A mercury ion thruster and a high-vacuum test facility. Grid life testing totalling over 2000 hours was successfully completed.

In 1975 Fulmer obtained a two-year contract from UNIDO to set up a Metals Advisory Service (MAS) in Lahore, Pakistan. The laboratories established then are now the Technical Service Centre of the Pakistan Standards and Quality Control Authority (PSQCA).

Solar water-heating trials. In 1976 Fulmer built a solar laboratory on the Stoke Poges site. This was the approximate size and shape of a two-storey domestic dwelling and was mounted on a circular track so that it could be rotated to any orientation. Solar hot-water panels were mounted on the roof. Investigations determined the economic viability of various systems for space and water heating and which materials and processes should be used.

The development of frame-to-hull bonding methods in GRP ships. The project enabled the construction of HMS Wilton for the Royal Navy (Wilton was a prototype coastal minesweeper/minehunter and the first warship in the world to be constructed from glass-reinforced plastic) and supported the development of the Royal Navy's Hunt-class mine countermeasures vessels.

Shape memory alloys. When an object made of a shape-memory alloy is deformed under suitable conditions, it can be made to return to its original shape by heating. Researchers at Fulmer discovered that this phenomenon is not confined to intermetallic compounds such as NiTi, but is exhibited in many metal solid solutions also. They did extensive work on many alloy systems. Two example applications developed at Fulmer are heat-shrinkable sleeves for use as pipe couplings and an actuator for the deployment of solar panels on spacecraft.

Starting in 1977 YRL undertook small-scale synthesis of specified organic chemicals, many of them the organo-fluorine compounds widely used in pharmaceutical research and as precursors in drug manufacture. This was successful and in 1988 a joint venture with Shell Chemicals UK was launched as Yarsley Fluorochemicals Ltd. This was later purchased by Shell. After a management buy-out, it now continues as JRD Fluorochemicals Ltd.

Superdart. A marksman training system in which the point of impact of a rifle round on a target is computed by triangulation from the signals received from a number of acoustic sensors and is then displayed on a screen next to the firing point.
This gives the marksman instant feedback on his accuracy. This is an example of a multi-disciplinary project: it involved ballistics, sensor technology and mathematical modelling as well as the development of new materials.

Acoustic emission monitoring.

Hydrophilic polymers for soft contact lenses. YTEC devised novel homopolymer and copolymer systems for soft contact lens preparations. A polymer system was formulated to exhibit a high degree of water containment in the swollen state and yet be sufficiently stable to form a precision lens to an individual prescription. YTEC developed a process to full production scale and commissioned the production facility on the client's premises.

Body armour.

Fabrication of targets for the ISIS neutron source at the Rutherford Appleton Laboratory. These consisted of an assembly of depleted uranium discs clad in zircalloy. The production process involved machining the uranium discs, sealing their zircalloy containers by electron-beam welding, hot isostatic pressing to develop a diffusion bond between the zircalloy and the uranium, and then ultrasonic testing to verify the integrity of the bond before final assembly.

Fulmer devised techniques for probabilistic mathematical modelling and in 1986 hosted the first international conference on Modelling under Uncertainty.

The gathering storm (1985 to 1989)

In accordance with the terms of the IOP purchase, Fulmer's capital investment in new facilities was expected to be financed from profit and Fulmer would make a modest annual contribution to IOP funds. However, Fulmer's recent expansion and its large investment in capital equipment required increasing bank borrowing. Considerable management effort and other resources had been taken up with the transfer of facilities between Fulmer, Chessington, Ashtead, Redhill and Slough, and there had been a damaging fire at Ashtead. It was clear that alternative sources of finance were needed. A management buyout was explored and found to be not feasible. Preparations were made for a stock exchange flotation but, in the late 1980s, Fulmer sustained large losses and plans to float were postponed. The balance of Fulmer's activities had changed. Academic research was now a minor part of its work. Most of its income came from testing, consultancy and small-scale manufacture. The IOP were becoming concerned that their ownership of Fulmer as a commercial organization might be judged incompatible with their charitable status as a learned society. They were also concerned that Fulmer was making losses and had a growing overdraft. The IOP Council finally decided to sell Fulmer.

Close (1990)

Initially IOP attempted to sell the company as a complete unit but, when this was unsuccessful, they decided to sell the Fulmer companies at Stoke Poges and Slough, and the Yarsley operation at Redhill, as separate entities. In 1989 exploratory talks with an American testing and consultancy company were held regarding a merger with Yarsley but no agreement could be reached. An approach was then made to the UK subsidiary of the Swiss company Societe Generale de Surveillance S.A. (SGS), who were particularly interested in strengthening their activities in quality assurance consultancy and certification. Agreement was soon reached for them to purchase Yarsley, and the sale took place on November 30, 1990. The Fulmer activities at Stoke Poges were merged with BNF Metals Technology Centre at Wantage, Oxfordshire, and the manufacturing unit at Slough was acquired by Sintek of Germany.
Legacy

Fulmer was a pioneer of contract R&D in the UK. During its forty-five years it provided technical solutions and research results, as well as testing and consultancy, for hundreds of companies and national and international agencies across the whole field of materials technology and related areas of physics and chemistry. Many papers were published in learned journals and books, and many patents were granted to Fulmer authors. Fulmer sponsored the further education of its technicians and helped many young graduates in metallurgy, physics and other sciences on the road to successful careers. In the 1970s and 80s Fulmer undertook curriculum development projects in Berkshire and Buckinghamshire primary and secondary schools. It thus introduced many young people to engineering, to problem-solving methods and to working in teams. A senior staff member joined the Berkshire education advisory service from Fulmer to continue and extend work of this kind.

Among the companies and organizations that owe their origins to Fulmer are:
Applied Microengineering Limited. In-situ aligned wafer bonding machines and services
Archer Technicoat Limited. Chemical vapour deposition and infiltration; manufacture and supply of related equipment
Building Investigation and Testing Services Limited
Chemlab Technology (Singapore) Pte Ltd. Set up in 1982 as a joint venture between Fulmer and Chemlab International (Singapore) Pte Ltd.
Hansford Sensors Limited. Manufacture and supply of vibration measurement equipment
IPH Fulmer Rope Tension Meters
JRD Fluorochemicals Limited
M4 Technologies Ltd - a Nottingham University spin-out. Research, consultancy and technology transfer services in the fields of materials and surface engineering, metallurgy, manufacturing and project management.
Phoenix Scientific Industries Limited. Gas atomization for the production of metal powders; manufacture and supply of related equipment
Questans Limited. Software development and consultancy specializing in thesaurus management and R&D management. Traded until December 2007
Quo-tec Limited. Consultancy on the management of innovation. Sold in 2003 to CSIR (South Africa).
The Technical Service Centre of the Pakistan Standards and Quality Control Authority (PSQCA)
USL Ultrasonic Sciences. A major supplier to industry of automated and semi-automated ultrasonic testing systems and instruments, worldwide.

Fulmer people

Chairmen of the board

Directors of Research

Long-standing members of the senior management team
Grev Brook; Bill Bowyer; David Davies; Mike Dewey; Bill Flavell; Philipp Gross; Eddie Sugars; GI Williams

Technical staff

Over the life of Fulmer about 500 people were members of staff. Among these, because of the wide range of projects that Fulmer undertook, investigators and other technical staff had to be able to adapt their specialist skills and to innovate. They were also expected to play a part in attracting the necessary funding from business or Government.

Other notable Fulmer alumni

Marjorie Caserio née Beckett
John Coiley
Ian Polmear
David Trefgarne

In popular culture

In 1969, Pinewood film studios hired a chemistry laboratory at Fulmer for use as a film set for the film "The Chairman" (also known as "The Most Dangerous Man in the World"), starring Gregory Peck.
References Notes External links www.fulmerresearchinstitute.uk 1945 establishments in the United Kingdom Defunct technology companies of the United Kingdom History of Buckinghamshire History of science and technology in England Institute of Physics Materials science organizations Metallurgical industry of the United Kingdom Metallurgical organizations Research and development organizations Research institutes in Buckinghamshire
Fulmer Research Institute
Chemistry,Materials_science,Engineering
6,737
9,522,575
https://en.wikipedia.org/wiki/Via%20Net%20Loss
Via Net Loss (VNL) is a network architecture of telephone systems using circuit-switching technologies, deployed in the 1950s with Direct Distance Dialing and used until the late 1980s. The purpose of the VNL plan and a five-level long-distance switching hierarchy was to minimize the number of trunk circuits used during a call and maximize the voice quality achieved on each circuit. Excessive noise or loss meant that subscribers might have difficulty hearing each other. This was particularly important in the 1960s when dial-up data applications were developed using analog modems.

The five levels of PSTN switching systems used with VNL were:
Class 1 - Regional long-distance switching systems
Class 2 - Sectional long-distance switching systems
Class 3 - Primary long-distance switching systems
Class 4 - Toll-access switching systems
Class 5 - End-office switching systems

Class 5 end-office switches provide local telephone service and dialtone to residential, business, and government subscribers, as well as telephone company payphones. Residential service includes message-rate and flat-rate local calling plans with extra charges for long-distance calls and supplementary services such as call waiting, 3-way calling, and call forwarding. Business service is mostly message-rate local calling plans with extra charges for long distance and supplementary services. Message-rate calling means that subscribers pay for calls based on the duration of the call and the distance to the called party. Government subscribers include city, county, state, and federal agencies and often included Centrex service. Pay phones were traditionally provided exclusively by telephone companies, but during the early 1980s customer-owned coin-operated telephone services were established.

Class 4 toll-access switches provide long-distance (toll) telephone service, including intrastate and interstate calling. Intrastate calls are generally more expensive than interstate calls due to tariffs with price plans approved by the Public Utilities Commission or Public Service Commission for each state. Interstate calls are generally less expensive than intrastate calls since tariffs are filed with the Federal Communications Commission because of the interstate commerce aspect of the service. Class 4 switches provide access to long-distance service in rural areas. In addition, Class 4 switches traditionally provided operator-assisted calls such as person-to-person, collect, and calls billed to third parties. However, many operator services are now automated with minimal human intervention.

Class 3 primary switches provided the first layer of the AT&T long-distance switching network. VNL routing methods preferred trunk connections between Class 3 switches to minimize Class 1 and Class 2 connections. Class 3 switches also act as Service Switching Points (SSPs) that provide access to Intelligent Network services such as Toll-Free, Virtual Private Network, Calling Card, and Credit Card calls. If circuits to other Class 3 switches were unavailable, the call was routed to the Class 2 (and/or Class 1) switch in the same region. Calls were not routed "up-chain" to Class 2 or Class 1 switches in a different region. Analog circuits between AT&T long-distance switches are known as inter-toll trunks, while circuits from a long-distance switch to local switches are known as toll completing trunks or toll switching trunks. Trunks between long-distance switches in other carrier networks are known as Inter-Machine Trunks (IMTs).
Class 2 sectional switches provide the second layer of long-distance switching. VNL routing methods preferred trunk connections between the originating Class 2 switch and a Class 3 or Class 2 switch in a different region. Calls were not routed "up-chain" to a Class 1 switch in a different region.

Class 1 regional switches provide the final layer of long-distance switching. VNL routing methods preferred "down-chain" trunk connections between the originating Class 1 switch and a Class 3, Class 2, or Class 1 switch in a different region. Analog trunk connections between Class 1 switches were required to have a loss of zero decibels.

The VNL architecture was gradually phased out due to the conversion of network circuits from analog to digital and the related conversion to non-hierarchical network routing schemes such as AT&T's Dynamic Non-Hierarchical Routing or Nortel's Dynamically Controlled Routing methods. See IEEE publications for details on DNHR and DCR.

See also Service Evaluation System

Communication circuits
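The routing preference described above (use a direct inter-regional trunk at the lowest possible level, climbing up-chain only within the originating region) can be sketched as a simple fallback search. Everything below, including the trunk table, switch names, and the `pick_route` helper, is a hypothetical illustration of the hierarchy, not an actual VNL implementation.

```python
# Hypothetical sketch of VNL-style hierarchical route selection.
# A switch is modeled as (class_level, region).
HIERARCHY = {5: 4, 4: 3, 3: 2, 2: 1}   # "up-chain" parent class within a region

def pick_route(origin, dest_region, trunk_available):
    """Climb the hierarchy in the origin's own region; at each level,
    prefer a direct trunk into the destination region (down-chain first)."""
    level, region = origin
    route = [origin]
    while True:
        # Prefer a direct inter-regional trunk from the current switch,
        # trying the lowest usable long-distance class first.
        for dest_level in (3, 2, 1):
            if trunk_available((level, region), (dest_level, dest_region)):
                route.append((dest_level, dest_region))
                return route
        if level not in HIERARCHY or level <= 1:
            return None                # all trunks busy: no route
        level = HIERARCHY[level]       # go up-chain, same region only
        route.append((level, region))

# Example: the direct class-3-to-class-3 trunk is busy, so the call climbs
# to the class-2 sectional switch before crossing regions.
busy = {((3, "east"), (3, "west"))}
available = lambda a, b: a[0] <= 3 and b[0] == 3 and (a, b) not in busy
print(pick_route((3, "east"), "west", available))
# -> [(3, 'east'), (2, 'east'), (3, 'west')]
```

The key constraint the sketch encodes is that a call never goes up-chain into a different region: the climb happens entirely within the originating region, and only the final hop crosses regions.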
Via Net Loss
Engineering
860
15,767,582
https://en.wikipedia.org/wiki/Comparison%20of%20web%20map%20services
See also List of online map services GraphHopper Navteq Petal Maps Online virtual globes Tencent Maps Traffic Message Channel (TMC) References External links Google Maps Bing Maps MapQuest Maps Mapy.cz OpenStreetMap Here Apple Maps Yandex.Maps Web mapping Street view services Transportation geography Web mapping
Comparison of web map services
Physics,Technology
68
31,224,455
https://en.wikipedia.org/wiki/Applications%20of%20the%20Stirling%20engine
Applications of the Stirling engine range from mechanical propulsion to heating and cooling to electrical generation systems. A Stirling engine is a heat engine operating by cyclic compression and expansion of air or other gas, the "working fluid", at different temperature levels such that there is a net conversion of heat to mechanical work. The Stirling cycle heat engine can also be driven in reverse, using a mechanical energy input to drive heat transfer in a reversed direction (i.e. a heat pump, or refrigerator). There are several design configurations for Stirling engines (many of which require rotary or sliding seals), which can introduce difficult tradeoffs between frictional losses and refrigerant leakage. A free-piston variant of the Stirling engine can be built which can be completely hermetically sealed, reducing friction losses and completely eliminating refrigerant leakage. For example, a free-piston Stirling cooler (FPSC) can convert an electrical energy input into a practical heat pump effect, used for high-efficiency portable refrigerators and freezers. Conversely, a free-piston electrical generator could be built, converting a heat flow into mechanical energy, and then into electricity. In both cases, energy is usually converted from/to electrical energy using magnetic fields in a way that avoids compromising the hermetic seal.

Mechanical output and propulsion

Automotive engines

It is often claimed that the Stirling engine has too low a power-to-weight ratio, too high a cost, and too long a starting time for automotive applications. Stirling engines also have complex and expensive heat exchangers: a Stirling engine's cooler must reject twice as much heat as an Otto engine or diesel engine radiator, and the heater must be made of stainless steel, an exotic alloy, or ceramic to withstand the high heating temperatures needed for high power density, and to contain the hydrogen gas that is often used in automotive Stirlings to maximize power. The main difficulties involved in using the Stirling engine in an automotive application are startup time, acceleration response, shutdown time, and weight, not all of which have ready-made solutions. However, a modified Stirling engine has been introduced that uses concepts taken from a patented internal-combustion engine with a sidewall combustion chamber (US patent 7,387,093), which promises to overcome the deficient power-density and specific-power problems, as well as the slow acceleration-response problem inherent in all Stirling engines. It could be possible to use these in co-generation systems that use waste heat from a conventional piston or gas turbine engine's exhaust, either to power the ancillaries (e.g. the alternator) or even as a turbo-compound system that adds power and torque to the crankshaft.

Automobiles exclusively powered by Stirling engines were developed in test projects by NASA, as well as in earlier projects by the Ford Motor Company using engines provided by Philips, and by American Motors Corporation (AMC) with several cars equipped with units from Sweden's United Stirling built under a license from Philips. The NASA vehicle test projects were designed by contractors and designated MOD I and MOD II. NASA's Stirling MOD I-powered engineering vehicles were built in partnership with the United States Department of Energy (DOE) and NASA, under contract by AMC's AM General, to develop and demonstrate practical alternatives for standard engines.
United Stirling AB's P-40-powered AMC Spirit was tested extensively and returned favorable average fuel-efficiency figures. A 1980 4-door liftback VAM Lerma was also converted to United Stirling P-40 power to demonstrate the Stirling engine to the public and to promote the U.S. government's alternative engine program. Tests conducted with the 1979 AMC Spirit, as well as a 1977 Opel and a 1980 AMC Concord, revealed that the Stirling engine "could be developed into an automotive power train for passenger vehicles and that it could produce favorable results." However, spark-ignition engines of equal power had advanced considerably since 1977, and the Corporate Average Fuel Economy (CAFE) requirements to be met by automobiles sold in the U.S. were being raised. Moreover, the Stirling engine design continued to exhibit a shortfall in fuel efficiency. There were also two major drawbacks for consumers: first, the time needed to warm up, because most drivers do not want to wait before starting to drive; and second, the difficulty of changing the engine's speed, which limits driving flexibility on the road and in traffic. The practicality of auto manufacturers converting their existing facilities and tooling for the mass production of a completely new design and type of powerplant was also questioned. The MOD II project produced one of the most efficient automotive engines ever made. The engine reached a peak thermal efficiency of 38.5%, compared to a peak efficiency of 20–25% for a modern spark-ignition (gasoline) engine. The MOD II project replaced the normal spark-ignition engine in a 1985 4-door Chevrolet Celebrity notchback. The 1986 MOD II Design Report (Appendix A) showed improved highway and urban gas mileage with no change in vehicle gross weight. Startup time in the NASA vehicle was a maximum of 30 seconds, while Ford's research vehicle used an internal electric heater to heat the engine quickly, giving a start time of only a few seconds. The high torque output of the Stirling engine at low speed eliminated the need for a torque converter in the transmission, decreasing weight and drivetrain losses and somewhat offsetting the weight disadvantage of the Stirling engine in automotive use; the test results accordingly noted increased efficiencies. The experiments indicated that the Stirling engine could best improve vehicle operational efficiency if detached from direct power demands, eliminating the direct mechanical linkage used in most current vehicles. In an extended-range series hybrid electric vehicle, its prime function would be as a generator, providing electricity to drive the traction motors and to charge a buffer battery set. In a petro-hydraulic hybrid, the Stirling engine would perform a similar role to that in a petro-electric series hybrid, turning a pump that charges a hydraulic buffer tank. Although the MOD I and MOD II phases of the experiments were successful, cutbacks in research funding and a lack of interest from automakers ended the Automotive Stirling Engine Program before possible commercialization. Electric vehicles Stirling engines as part of a hybrid electric drive system may be able to bypass the design challenges and disadvantages of a non-hybrid Stirling automobile. In November 2007, a prototype hybrid car using solid biofuel and a Stirling engine was announced by the Precer project in Sweden.
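To put the MOD II peak thermal efficiency quoted above in perspective, a rough calculation shows how much less fuel energy an engine at 38.5% efficiency needs per unit of mechanical work than one at 20–25%. This is a back-of-the-envelope sketch only, since peak efficiencies do not translate directly into drive-cycle mileage:

# Illustrative comparison of fuel energy needed per unit of mechanical
# work at the peak thermal efficiencies quoted in the text above.
eta_stirling = 0.385   # MOD II peak thermal efficiency (from the text)
eta_gasoline = 0.225   # mid-point of the 20-25% range quoted above

# Fuel energy required per joule of work is 1/eta, so the Stirling
# engine's relative fuel use at peak efficiency is:
relative_fuel_use = eta_gasoline / eta_stirling
print(f"Relative fuel use at peak: {relative_fuel_use:.0%}")  # ~58%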
The New Hampshire Union Leader reported that Dean Kamen had developed a series plug-in hybrid car based on a Ford Think. Called the DEKA Revolt, the car runs on a lithium battery, with a Stirling engine serving as the onboard generator. Aircraft engines Robert McConaghy created the first flying Stirling-engine-powered aircraft in August 1986. The Beta-type engine weighed 360 grams and produced only 20 watts of power. The engine was attached to the front of a modified Super Malibu radio-control glider with a gross takeoff weight of 1 kg. The best published test flight lasted 6 minutes and exhibited "barely enough power to make the occasional gentle turn and maintain altitude". Marine engines The Stirling engine could be well suited to underwater power systems where electrical or mechanical power is required on an intermittent or continuous basis. General Motors has undertaken work on advanced Stirling cycle engines that include thermal storage for underwater applications. United Stirling, in Malmö, Sweden, developed an experimental four-cylinder engine using hydrogen peroxide as an oxidant for underwater power systems. The SAGA (Submarine Assistance Great Autonomy) submarine became operational in the 1990s and is driven by two Stirling engines supplied with diesel fuel and liquid oxygen. This system also has potential for surface-ship propulsion, as the engine's size is less of a concern, and placing the radiator section in seawater rather than in open air (as for a land-based engine) allows it to be smaller. Swedish shipbuilder Kockums has built eight successful Stirling-powered submarines since the late 1980s. They carry compressed oxygen to allow fuel combustion while submerged, providing heat for the Stirling engine. They are currently used on submarines of the Gotland and Södermanland classes, the first submarines in the world to feature Stirling air-independent propulsion (AIP), which extends their underwater endurance from a few days to several weeks; this capability had previously been available only with nuclear-powered submarines. The Kockums engine also powers the Japanese Sōryū-class submarines. Pump engines Stirling engines can power pumps that move fluids such as water, air, and other gases. For instance, the ST-5 from Stirling Technology Inc. produces enough shaft power to run a 3 kW generator or a centrifugal water pump. Electrical power generation Combined heat and power In a combined heat and power (CHP) system, mechanical or electrical power is generated in the usual way; however, the waste heat given off by the engine is used to supply a secondary heating application, which can be virtually anything that uses low-temperature heat. It is often a pre-existing energy use, such as commercial space heating, residential water heating, or an industrial process. Thermal power stations on the electric grid use fuel to produce electricity, but large quantities of waste heat are produced that often go unused. In other situations, high-grade fuel is burned at high temperatures for a low-temperature application; according to the second law of thermodynamics, a heat engine can generate power from such a temperature difference. In a CHP system, the high-temperature primary heat enters the Stirling engine heater, some of the energy is converted to mechanical power in the engine, and the rest passes through to the cooler, where it exits at a low temperature.
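The energy bookkeeping described in the preceding paragraph is simply the first law of thermodynamics applied to the engine: the primary heat Q_in splits into work W = η·Q_in and rejected heat Q_out = Q_in − W, which remains available to the secondary heating application. A minimal sketch with hypothetical numbers:

# First-law energy bookkeeping for the CHP arrangement described above.
# All numbers are hypothetical, for illustration only.
q_in = 10.0        # kW of primary heat entering the Stirling heater
eta = 0.25         # engine thermal efficiency (assumed)

w = eta * q_in     # mechanical/electrical output
q_out = q_in - w   # heat rejected at the cooler
recovered = 0.9    # assumed fraction of cooler heat usefully captured

utilization = (w + recovered * q_out) / q_in
print(f"W = {w:.1f} kW, Q_out = {q_out:.1f} kW")
print(f"Overall fuel utilization: {utilization:.0%}")  # ~92% here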
The "waste" heat actually comes from the engine's main cooler, and possibly from other sources such as the exhaust of the burner, if there is one. The power produced by the engine can be used to run an industrial or agricultural process, which in turn can create biomass waste refuse usable as free fuel for the engine, thus reducing waste-removal costs. The overall process can be efficient and cost-effective. Inspirit Energy, a UK-based company, offers a gas-fired CHP unit called the Inspirit Charger, which went on sale in 2016. The floor-standing unit generates 3 kW of electrical and 15 kW of thermal power. WhisperGen, a New Zealand firm with offices in Christchurch, has developed an "AC Micro Combined Heat and Power" Stirling cycle engine. These microCHP units are gas-fired central-heating boilers that sell unused power back into the electricity grid. WhisperGen announced in 2004 that it was producing 80,000 units for the residential market in the United Kingdom, and a 20-unit trial was conducted in Germany in 2006. Solar power generation Placed at the focus of a parabolic mirror, a Stirling engine can convert solar energy to electricity with an efficiency better than non-concentrated photovoltaic cells and comparable to concentrated photovoltaics. On August 11, 2005, Southern California Edison announced an agreement with Stirling Energy Systems (SES) to purchase, over a twenty-year period, electricity created using more than 30,000 solar-powered Stirling engines, sufficient to generate 850 MW of electricity. These systems, on an 8,000-acre (32 km2) solar farm, will use mirrors to direct and concentrate sunlight onto the engines, which will in turn drive generators. In January 2010, four months after breaking ground, Stirling Energy partner company Tessera Solar completed the 1.5 MW Maricopa Solar power plant in Peoria, Arizona, just outside Phoenix; the power plant is composed of 60 SES SunCatchers. The SunCatcher is described as "a large, tracking, concentrating solar power (CSP) dish collector that generates 25 kilowatts (kW) of electricity in full sun. Each of the 38-foot-diameter collectors contains over 300 curved mirrors (heliostats) that focus sunlight onto a power conversion unit, which contains the Stirling engine. The dish uses dual-axis tracking to follow the sun precisely as it moves across the sky." There have been disputes over the project due to concerns about the environmental impact on animals living on the site, and the Maricopa Solar plant has since been closed. Nuclear power There is potential for nuclear-powered Stirling engines in electric power generation plants. Replacing the steam turbines of nuclear power plants with Stirling engines might simplify the plant, yield greater efficiency, and reduce radioactive byproducts. A number of breeder reactor designs use liquid sodium as a coolant; if the heat is to be employed in a steam plant, a water/sodium heat exchanger is required, which raises safety concerns in the event of a leak, as sodium reacts violently with water. A Stirling engine eliminates the need for water anywhere in the cycle, which would also be an advantage for nuclear installations in dry regions. United States government labs have developed a modern Stirling engine design known as the Stirling radioisotope generator for use in space exploration. It is designed to generate electricity for deep-space probes on missions lasting decades. The engine uses a single displacer to reduce moving parts and uses high-energy acoustics to transfer energy.
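As a rough sanity check on the SunCatcher figures quoted above, dividing the contracted 850 MW by the 25 kW dish rating gives the approximate number of dishes required; the land-use figure uses the stated 8,000-acre site and is illustrative only:

# Sanity check on the solar-dish figures quoted above.
dish_kw = 25.0       # SunCatcher rated output in full sun (from the text)
target_mw = 850.0    # contracted capacity (from the text)

dishes_needed = target_mw * 1000 / dish_kw
print(f"Dishes for {target_mw:.0f} MW: {dishes_needed:,.0f}")  # 34,000

acres = 8000                  # site size stated in the text
km2 = acres * 0.0040469       # 1 acre is about 0.0040469 km2
print(f"Site area: {km2:.0f} km2, about {acres/dishes_needed:.2f} acres per dish")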
The heat source is a dry solid nuclear fuel slug, and the heat sink is radiation into free space itself. Heating and cooling If supplied with mechanical power, a Stirling engine can function in reverse as a heat pump for heating or cooling. Starting in the late 1930s, the Philips Corporation of the Netherlands successfully developed Stirling cycle machines and later applied the cycle to cryogenic applications. During the Space Shuttle program, NASA successfully lofted a Stirling cycle cooler "similar in size and shape to the small domestic units often used in college dormitories" for use in the Life Science Laboratory. Further research on this unit for domestic use led to a threefold gain in Carnot coefficient of performance and a weight reduction of 1 kg. Experiments have been performed using wind power to drive a Stirling cycle heat pump for domestic heating and air conditioning. Stirling cryocoolers Any Stirling engine will also work in reverse as a heat pump: when mechanical energy is applied to the shaft, a temperature difference appears between the reservoirs. The essential mechanical components of a Stirling cryocooler are identical to those of a Stirling engine. In both the engine and the heat pump, heat flows from the expansion space to the compression space; however, input work is required for heat to flow "uphill" against a thermal gradient, specifically when the compression space is hotter than the expansion space. The external side of the expansion-space heat exchanger may be placed inside a thermally insulated compartment such as a vacuum flask. Heat is in effect pumped out of this compartment, through the working gas of the cryocooler, and into the compression space. The compression space is above ambient temperature, so heat flows out into the environment. One of the modern uses of Stirling coolers is in cryogenics and, to a lesser extent, refrigeration. At typical refrigeration temperatures, Stirling coolers are generally not economically competitive with the less expensive mainstream Rankine cooling systems, because they are less energy-efficient. However, below about −40 to −30 °C, Rankine cooling is not effective because there are no suitable refrigerants with boiling points that low. Stirling cryocoolers are able to "lift" heat down to −200 °C (73 K), which is sufficient to liquefy air (specifically its primary constituent gases oxygen, nitrogen, and argon). Single-stage machines can reach 40–60 K, depending on the particular design, and two-stage Stirling cryocoolers can reach 20 K, sufficient to liquefy hydrogen and neon. Cryocoolers for this purpose are more or less competitive with other cryocooler technologies. The coefficient of performance at cryogenic temperatures is typically 0.04–0.05 (corresponding to a 4–5% efficiency). Empirically, the coefficient of performance shows a roughly linear dependence on the cryogenic temperature Tc. At these temperatures, solid materials have lower specific heat values, so the regenerator must be made from unexpected materials, such as cotton. The first Stirling-cycle cryocooler was developed at Philips in the 1950s and commercialized in such places as liquid air production plants. The Philips Cryogenics business evolved until it was split off in 1990 to form Stirling Cryogenics BV of the Netherlands. This company is still active in the development and manufacturing of Stirling cryocoolers and cryogenic cooling systems.
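The COP figures above can be compared against the Carnot limit for refrigeration, COP_Carnot = Tc/(Th − Tc). The ambient (heat-rejection) temperature used below is an assumed value, not taken from the text:

# Illustrative Carnot comparison for the cryocooler COP figures above.
t_cold = 73.0      # K, the -200 C lift temperature quoted in the text
t_hot = 295.0      # K, assumed ambient heat-rejection temperature

cop_carnot = t_cold / (t_hot - t_cold)   # ideal refrigeration COP
cop_actual = 0.045                       # mid-point of the 0.04-0.05 range

print(f"Carnot COP: {cop_carnot:.2f}")                      # ~0.33
print(f"Fraction of Carnot: {cop_actual/cop_carnot:.0%}")   # ~14%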
A wide variety of smaller Stirling cryocoolers are commercially available for tasks such as cooling electronic sensors and, occasionally, microprocessors. For this application, Stirling cryocoolers are the highest-performance technology available, owing to their ability to lift heat efficiently at very low temperatures. They are silent and vibration-free, can be scaled down to small sizes, and have very high reliability and low maintenance requirements. As of 2009, cryocoolers were considered the only widely deployed, commercially successful Stirling devices. Heat pumps A Stirling heat pump is very similar to a Stirling cryocooler, the main difference being that it usually operates at room temperature. Its principal application at present is pumping heat from the outside of a building to the inside, heating it at lower energy cost. As in any other Stirling device, heat flows from the expansion space to the compression space; however, in contrast to the Stirling engine, the expansion space is at a lower temperature than the compression space, so instead of producing work, the system requires an input of mechanical work (in order to satisfy the second law of thermodynamics). The mechanical energy can be supplied by an electric motor or an internal combustion engine, for example. When the mechanical work for the heat pump is provided by a second Stirling engine, the overall system is called a "heat-driven heat pump". The expansion side of the heat pump is thermally coupled to the heat source, which is often the external environment. The compression side of the Stirling device is placed in the environment to be heated, for example a building, and heat is "pumped" into it. Typically there is thermal insulation between the two sides, so a temperature rise develops inside the insulated space. Heat pumps are by far the most energy-efficient type of heating system, since they "harvest" heat from the environment rather than only turning their input energy into heat. In accordance with the second law of thermodynamics, heat pumps always require the additional input of some external energy to "pump" the collected heat "uphill" against a temperature differential. Compared to conventional heat pumps, Stirling heat pumps often have a higher coefficient of performance. Stirling systems have seen limited commercial use; however, use is expected to increase along with market demand for energy conservation, and adoption will likely be accelerated by technological refinements. Portable refrigeration The free-piston Stirling cooler (FPSC) is a completely sealed heat transfer system with only two moving parts (a piston and a displacer), and which can use helium as the working fluid. The piston is typically driven by an oscillating magnetic field, which supplies the power needed to drive the refrigeration cycle. The magnetic drive allows the piston to be driven without seals, gaskets, O-rings, or other compromises to the hermetically sealed system. Claimed advantages of the system include improved efficiency and cooling capacity, lighter weight, smaller size, and better controllability. The FPSC was invented in 1964 by William Beale (1928–2016), a professor of mechanical engineering at Ohio University in Athens, Ohio. He founded Sunpower Inc., which researches and develops FPSC systems for military, aerospace, industrial, and commercial applications. An FPSC cooler made by Sunpower was used by NASA to cool instrumentation in satellites.
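For the heating case described above, the corresponding Carnot limit is COP = Th/(Th − Tc), with Th the compression-side (indoor) temperature and Tc the expansion-side (outdoor) temperature. A small sketch with hypothetical temperatures shows why even a modest real-world fraction of this limit delivers more heat than the work supplied:

# Illustrative Carnot limit for the heating heat pump described above.
# Temperatures are hypothetical examples, not taken from the text.
t_inside = 294.0    # K (21 C), the compression (hot) side
t_outside = 273.0   # K (0 C), the expansion (cold) side / heat source

cop_heating_max = t_inside / (t_inside - t_outside)
print(f"Ideal heating COP: {cop_heating_max:.1f}")  # 14.0

# Real machines achieve only a fraction of this, but any COP above 1
# means more heat is delivered than the work supplied - the
# "harvesting" effect described in the text.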
The firm was sold by the Beale family in 2015 to become a unit of Ametek. Other suppliers of FPSC technology include the Twinbird Corporation of Japan and Global Cooling of the Netherlands, which (like Sunpower) has a research center in Athens, Ohio. For several years starting around 2004, the Coleman Company sold a version of the Twinbird "SC-C925 Portable Freezer Cooler 25L" under its own brand name, but it has since discontinued the product. The portable cooler can operate for more than a day, maintaining sub-freezing temperatures while powered by an automotive battery. This cooler is still being manufactured, with Global Cooling now coordinating distribution to North America and Europe. Other variants offered by Twinbird include a portable deep freezer (to −80 °C), collapsible coolers, and a model for transporting blood and vaccines. Low temperature difference engines A low temperature difference (LTD, or low delta-T (LDT)) Stirling engine will run on any small temperature differential, for example the difference between the palm of a hand and room temperature, or between room temperature and an ice cube. A record differential of only 0.5 °C was achieved in 1990. They are usually designed in a gamma configuration for simplicity and without a regenerator, although some have slits in the displacer (typically made of foam) for partial regeneration. They are typically unpressurized, running at a pressure close to 1 atmosphere. The power produced is less than 1 W, and they are intended for demonstration purposes only, sold as toys and educational models. However, larger (typically 1 m square) low temperature difference engines have been built for pumping water using direct sunlight with minimal or no magnification. Other applications Acoustic Stirling Heat Engine Los Alamos National Laboratory has developed an "Acoustic Stirling Heat Engine" with no moving parts. It converts heat into intense acoustic power which, according to the laboratory, "can be used directly in acoustic refrigerators or pulse-tube refrigerators to provide heat-driven refrigeration with no moving parts, or ... to generate electricity via a linear alternator or other electro-acoustic power transducer". MicroCHP WhisperGen (which went bankrupt in 2012), a New Zealand-based company, developed Stirling engines that can be powered by natural gas or diesel. An agreement was signed with Mondragon Corporación Cooperativa, a Spanish firm, to produce WhisperGen's microCHP (combined heat and power) units and make them available for the domestic market in Europe. E.ON UK announced a similar initiative for the UK. Domestic Stirling engines would supply the customer with hot water, space heating, and surplus electric power that could be fed back into the electric grid. Based on the companies' published performance specifications, the off-grid diesel-fueled unit produces combined heat (5.5 kW) and electric (800 W) output while being fed 0.75 liters of automotive-grade diesel fuel per hour. WhisperGen units are claimed to operate as combined co-generation units reaching operating efficiencies as high as roughly 80%. However, the preliminary results of an Energy Saving Trust review of the performance of WhisperGen microCHP units suggested that their advantages were marginal at best in most homes, while another author finds that Stirling engine microgeneration is the most cost-effective of various microgeneration technologies in terms of reducing CO2.
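The roughly 80% cogeneration efficiency claimed above can be checked against the published figures; the diesel heating value used below is an assumed typical number, not taken from the specifications:

# Back-of-the-envelope check of the ~80% cogeneration efficiency
# claimed above for the diesel-fueled WhisperGen unit.
fuel_l_per_h = 0.75      # litres of diesel per hour (from the text)
diesel_kwh_per_l = 9.9   # assumed lower heating value, typically 9.7-10 kWh/L

p_in = fuel_l_per_h * diesel_kwh_per_l   # ~7.4 kW of fuel power
p_out = 5.5 + 0.8                        # kW heat + kW electric (from the text)

print(f"Input: {p_in:.1f} kW, output: {p_out:.1f} kW")
print(f"Combined efficiency: {p_out/p_in:.0%}")  # ~85%, consistent with ~80%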
Chip cooling MSI (Taiwan) developed a miniature Stirling engine cooling system for personal computer chips that uses the waste heat from the chip to drive a fan. Desalination All thermal power plants must exhaust waste heat. In principle, this waste heat can be diverted to run Stirling engines that pump seawater through reverse osmosis assemblies; the drawback is that any additional use of the heat raises the effective heat-sink temperature of the thermal power plant, resulting in some loss of energy conversion efficiency. In a typical nuclear power plant, two-thirds of the thermal energy produced by the reactor is waste heat, and in a Stirling assembly this waste heat has the potential to be used as an additional source of electricity. Cooling technology Heat pumps Stirling engines Piston engines External combustion engines
Applications of the Stirling engine
Technology
4,981
18,043,938
https://en.wikipedia.org/wiki/Revers
A revers or rever is a part of a garment that is reversed to display the lining or facing on the outside. The word is borrowed from French revers, which is why the final s is silent. The most common form of revers is the lapel. The revers emerged in the 1860s in France as soldiers began unbuttoning the fronts of their uniforms; when the exposed revers became dirty, the uniform could be buttoned up to show a clean front again. Parts of clothing Neckwear
Revers
Technology
109